Better late than never: Orac comments on the hijacking of evidence-based medicine

It's no secret that I'm a fan of John Ioannidis. (If you don't believe me, just type Ioannidis' name into the blog search box and see how many posts you find.) Over the last couple of decades, Ioannidis has arguably done more than any other researcher to reveal the shortcomings of the medical research enterprise that undergirds our treatments, exposing the weaknesses in the evidence base and showing how easily clinical trials can mislead. Indeed, after reading Ioannidis' most famous article, "Why Most Published Research Findings Are False", back in 2005, I was hooked. I even used it for our surgical oncology journal club. This was long before I appreciated the difference between science-based medicine (SBM) and evidence-based medicine (EBM). So it was with much interest that I read, a couple of weeks ago, an article by Ioannidis framed as an open letter to David Sackett, the father of evidence-based medicine, entitled "Evidence-based medicine has been hijacked: a report to David Sackett." Ioannidis is also quoted in a follow-up interview with Retraction Watch.

Before I get to Ioannidis' latest, I can't help but point out that, not surprisingly, quacks and proponents of pseudoscientific and unscientific medicine often latch on to Ioannidis' work to support their quackery and pseudoscience. They've been doing it for years, and they're already latching on to this article as vindication of their beliefs. After all, their reasoning—if you can call it that—seems to boil down to this: if "conventional" medicine is built on such shaky science, then the same scientific enterprise can't be trusted when it rejects their dubious claims and treatments, and their pseudoscience isn't wrong after all. Of course, whenever I hear this line of argument, I'm reminded of Ben Goldacre's famous adage, which has made the rounds on Twitter in various forms: problems with aircraft design do not mean that magic carpets can fly.

The adage can be generalized to all EBM and SBM as well. Just because big pharma misbehaves, EBM has flaws, and conventional medicine practitioners don't always use the most rigorous evidence does not mean that, for example, homeopathy, acupuncture, or energy medicine works.

Still, when Ioannidis publishes an article with a title provocatively declaring that EBM has been "hijacked," I take notice.

Has EBM been "hijacked"?

In his "report" to David Sackett, Ioannidis touches on a number of pertinent and interesting points regarding the adoption of EBM but, as you will see, pretty much ignores the one huge elephant in the room. Before you hear the pounding of elephant feet, though, let me just say that I think Ioannidis is being a bit too hard on himself in having declared himself a "failure" in 2004 and an "even bigger failure" twelve years later. I suppose that by the metrics he uses he is a failure, but I would argue that those are the wrong metrics. First, however, let's see where Ioannidis is coming from. He believes that EBM met considerable resistance in the US:

EBM met with substantial resistance in the 1990s and 2000s. Even in the USA, the mecca of biomedical research, EBM, and any serious biomedical research that may help intact humans was largely unwanted. As a clinical research fellow, I remember that every week we were waiting to hear whether the Agency for Health Care Policy and Research (AHCPR, which subsequently became AHRQ) would be axed. The agency had hurt the interests of some powerful professional surgical society: one of its guidelines threatened the indications for an expensive and possibly largely useless surgical procedure. AHCPR/AHRQ survived, but has always had to fight valiantly for its existence since then. EBM is widely tolerated mostly when it can produce largely boring evidence reports that are shaped and endorsed by experts. Many years later, the Patient-Centered Outcomes Research Institute was launched, an equally valiant effort to cover some of that vast space. It was also restricted by a mandate not to deal with cost effectiveness of interventions. I was asked to participate in its Methodology Committee along with great colleagues. I contributed far less than anyone else to the effort before deciding to quit out of shame for my lack of contribution. Most of our tasks seemed to require experts rather than evidence. More than 7 billion of people would be better qualified than me to lead expert-based activities.

One can't help but wonder which surgical procedure Ioannidis is referring to. Could it be vertebroplasty, for which negative trials have not changed practice as much as they should have? Or is Ioannidis referring to the 1990 AHCPR report that found rest and pain medications worked as well as surgery for back pain? I think that's probably it. Be that as it may, it is true that political forces in medicine have long detested the AHRQ. Critics argue that its research duplicates that of the NIH, even though the NIH does not generally fund studies of the health care delivery system as the AHRQ does, and that of PCORI, even though PCORI funds only comparative effectiveness research, which compares the effectiveness of medical interventions that are already approved and in use. Moreover, the agencies all work very hard to avoid duplication.

On the other hand, there is a large constituency in organized medicine that supports the AHRQ. That support was in evidence last year during the latest attempt to defund the agency, when dozens of health care organizations, including the American College of Surgeons and the American Association of Orthopedic Surgeons, raced to its defense, and more than 100 health groups signed letters in June and July urging the House and Senate to maintain AHRQ and the research it funds. I also can't help but note the development of Choosing Wisely four years ago, in which physician professional societies consciously agreed to start listing interventions that remain common but are not evidence-based in an effort to decrease their use. It hasn't always gone as planned, but it's a concerted effort.

So, yes, there is resistance to the AHRQ. These days, however, it comes far more from business interests, such as drug and device manufacturers, than from physician groups. These companies want to abolish both the AHRQ and PCORI mainly because those agencies' research threatens company bottom lines by showing which treatments work better in the "real world" and by influencing the Centers for Medicare & Medicaid Services (CMS) regarding which new drugs and devices will be paid for. In fact, I'd argue that, while Ioannidis is correct that drug and device manufacturers want to kill the AHRQ and PCORI, he's missed a sea change in physicians' attitudes towards government agencies whose purpose is to evaluate and compare treatments for effectiveness after they've been approved. It might have been true that EBM was not popular 15 or 20 years ago, but as new generations of medical students have been inculcated with its principles and importance, EBM has been "baked in" to physician education, with a resultant change in attitude towards efforts to promote it. That's not to say that physician groups don't protect their turf. Just look at how radiologists react to new guidelines that increase the recommended age at which to start mammography, or how primary care physicians react to legislation expanding the scope of practice of advanced practice nurses. However, extreme hostility to comparative effectiveness research and EBM-based guidelines has mostly retreated to fringe physician groups like the Association of American Physicians and Surgeons. Unfortunately, resistance to EBM as a constraint on physician autonomy is still fairly common, particularly among older physicians.

I view this as one victory of John Ioannidis and others like him. In other words, I agree with much of what he says, but I often interpret it differently. For instance, it's very clear that the attacks on the AHRQ and PCORI have little or nothing to do with EBM being "co-opted" and everything to do with powerful business interests that see EBM-based guidelines promulgated by the AHRQ as inimical to their financial interests. Ioannidis even relates a story of a senior professor of cardiology telling a friend of his that Ioannidis should not be too outspoken, "otherwise Albanian hit men may strangle him in his office." To me, that's evidence of success.

Methodolatry and the rumbling of elephants

Here's where the elephant in the room that I mentioned earlier starts getting closer. Specifically, what Ioannidis complains about next are definitely signs of EBM's success, but I agree that they are also signs of EBM being co-opted or "hijacked." After all, if EBM weren't successful, why would anyone want to hijack it? The problem is that Ioannidis, as brilliant as he is, doesn't seem to recognize one part of the solution to the problems with EBM that he relates.

In any case, here's the next part of Ioannidis' tale:

Now that EBM and its major tools, randomized trials and meta-analyses, have become highly respected, the EBM movement has been hijacked. Even its proponents suspect that something is wrong [[10], [11]]. The industry runs a large share of the most influential randomized trials. They do them very well, they score better on "quality" checklists [12], and they are more prompt than nonindustry trials to post or publish results [13]. It is just that they often ask the wrong questions with the wrong short-term surrogate outcomes, the wrong analyses, the wrong criteria for success (e.g., large margins for noninferiority), and the wrong inferences [[14], [15], [16]], but who cares about these minor glitches? The industry is also sponsoring a large number of meta-analyses currently [17]. Again, they get their desirable conclusions [18]. In 1999 at the closing session of the Cochrane Colloquium in Rome, among the prevailing enthusiasm of this benevolent community, I spoiled the mirth with my skepticism. I worried that the Cochrane Collaboration may cause harm by giving credibility to biased studies of vested interests through otherwise respected systematic reviews. My good friend, Iain Chalmers, countered that we should not worry—plus there were many topics where the industry had not been involved. He mentioned steroids as one example. It was not very reassuring. Now even the logo of the Collaboration, the forest plot for prenatal steroids, has been shown to be partially wrong due to partial reporting [19]—let alone reviews of trials done with vested interests from their very conception.

It is, of course, not surprising that industry runs the lion's share of randomized clinical trials (RCTs). The FDA requires RCTs demonstrating efficacy and safety before it will approve a drug or device, and the drug and device industries are in the business of marketing new drugs and devices. It is thus in their financial interest to run the most "rigorous" trials (from an EBM standpoint) that they can and to publish them as quickly as they can, because that speeds up FDA approval. Unfortunately, it is also in their interest to ask questions and use endpoints that are most likely to produce "positive" trials. What Ioannidis neglects to mention is that the pressure to do this also comes from non-financial sources. You have only to look at the activism of AIDS patients in the 1980s and the rise of the "right to try" movement to realize that there is a large and vocal constituency demanding faster drug approval. This constituency has influence, too. As a result, legislators are willing to introduce legislation that would weaken scientific standards for drug and device approval, like the Saatchi bill in the UK and the 21st Century Cures bill in the US. Not coincidentally, these bills align with the interests of industry as well as those of patients looking for faster approval of new drugs. Of course, the root of this problem is not necessarily EBM itself, but rather the regulations that drug approval agencies in the US and other countries use to approve drugs and devices.

As for the Cochrane Collaboration, we've discussed problems with this group from the very beginning, most recently when Tom Jefferson of the Cochrane Acute Respiratory Infections Group argued that childhood vaccination schedules are not evidence-based because, in essence, there are no RCTs testing every schedule and no RCT evidence refuting various claimed adverse reactions. This is a problem with EBM that I, in particular, have discussed multiple times before on this blog: the problem of methodolatry, defined as the "profane worship of the randomized controlled clinical trial as the only valid means of investigation." While it is true that RCTs are the "gold standard" in clinical evidence of efficacy and safety, there are many questions that, for either ethical or practical reasons, can't be studied by RCTs. Unfortunately, methodolatry is a feature, not a bug, of EBM, which consigns all evidence below that of RCTs and meta-analyses of RCTs to a category of inferior evidence, or not evidence at all. That includes the epidemiological studies that have shown vaccines to be safe and not a cause of autism, as well as the entire corpus of basic science that tells us homeopathy is impossible pseudoscience. In EBM, it's RCT evidence and, in particular, meta-analyses über alles!

This is why I frequently refer to the "blind spot" of EBM. From my perspective, EBM has indeed been "hijacked." I agree with Ioannidis that industry has to some extent hijacked EBM, but I would add that advocates of quackery have done so as well, hence the creation of a whole new field of medicine dedicated to "integrating" pseudoscience with medicine, an NIH center (the National Center for Complementary and Integrative Health), and many programs in academic medical centers dedicated to what I like to call quackademic medicine.

What's frustrating is that Ioannidis seems at some level to recognize this problem but doesn't explicitly address it. For instance, he mentions in his article that there are "so many quacks ranging from television presenters and movie stars turned into health trainers [40] and pure science denialists (e.g., climate, HIV, vaccine denialists, and religious fundamentalists) that one has to tread carefully" and that we should "avoid a civil war on how to interpret evidence within the health sciences when so many pseudoscientists and dogmatists are trying to exploit individuals and populations and attack science," yet he seems unwilling to address the elephant in the room: how EBM has been hijacked by CAM promoters as well as by industry.

Science-Based Medicine: At least part of the cure

Personally, I really like John Ioannidis because, like him, I am dedicated to improving the scientific rigor of medicine. The problem we have with EBM is the methodolatry that has come to dominate its core, which leads EBM purists to refuse to dismiss obvious quackery like homeopathy and "energy medicine." In other words, EBM does not adequately take into account prior probability, and as a result demands RCTs for just about everything. I'm not saying that EBM ignores basic science considerations and the prior probability derived from them; it's just that EBM relegates basic science to the very lowest rung of its evidence hierarchy. The result is that EBM (mostly) dismisses basic science considerations, and its methodolatry leads to unethical trials of homeopathy and of cancer quackery like the Gonzalez protocol, as well as to the widespread acceptance of "integrative medicine" at universities and massive research programs studying quackery (a.k.a. quackademic medicine).

In fact, you can see some of this attitude in Ioannidis' quoting of Sackett:

We value the study of adenine, thymine, cytosine, and guanine far above the study of childhood diarrhoea, Chagas' disease, community health, and patients' decision making. The issue is not the potential usefulness of basic research at some distant future date (although you might even dare question whether it is being oversold). The issue is that basic medical scientists have hijacked the granting bodies and have erected research policies that place greater value in serving their own personal curiosities than in serving sick people.

Perhaps this was true in 2004, although I was around then and am still not sure I buy Sackett's take even for that time. It's certainly not true now, when getting a basic science grant funded by the NIH is damned near impossible without a clear disease or translational research angle. There's a reason why scientists are calling for more basic science funding. Whether he realizes it or not, by quoting Sackett, Ioannidis appears to be echoing the discounting of basic science at the heart of EBM.

Of course, Ioannidis is not wrong when he observes:

With clinical evidence becoming an industry advertisement tool and with much "basic" science becoming an annex to Las Vegas casinos, how about the other pieces of EBM, for example, diagnosis and prognosis and individualizing care? I have had great excitement about the prospects of omics, big data, personalized medicine, precision medicine, and all. Much of my effort has been to put together these efforts with rigorous statistical methods and EBM tools. But I am tired of seeing the same overrated promises recast again and again. For example, several years ago I gave an invited lecture at a leading institution on the danger of making inflated promises in personalized medicine. Right after my talk, everybody rushed to hear the launch of a new campaign, where the leader of the institution singled out this unique historic moment: that institution would single-handedly eliminate most major types of cancer within a few years. Several years have passed, and none of these cancer types have disappeared. I recently tried to find the name of that campaign online but realized that this institution has launched many similar campaigns. Which among many was the unique historic moment that I happened to be at? Multiply this by thousands of institutions, and there are already millions of unique historic moments where cancer was eliminated. Same applies to neurologic diseases and more. I do not understand why academic leaders and politicians need to make such self-embarrassing announcements now and then.

I do. It's the same reason why industry has tried to co-opt EBM, and Ioannidis is quite correct to condemn this sort of behavior. Over the years we've written many posts criticizing the same sorts of claims. I myself have questioned the hype over "precision medicine," "liquid biopsies," and the like. I remember a director of the National Cancer Institute making the claim that we would eliminate the suffering and death from cancer by the year 2015. Well, it's 2016, and cancer is still with us. I still see a lot of suffering and death due to it. I can still remember attending a surgery meeting and shaking my head as I watched a talk touting this initiative.

So Ioannidis has described much of the disease of EBM, and we've tried to describe the rest. What's the cure?

Our fearless leader Steve Novella argues that SBM is the answer. I agree, but not entirely. To me, SBM is a large part of the answer. Much of the problem with EBM that allows it to be hijacked is EBM's blind spot with respect to basic science. Steve is correct that SBM is designed to take a "big picture", global look at any clinical claim and that we need to consider the scientific plausibility of the claim, not just RCTs. However, there is a limit to what SBM can do in correcting the co-optation of EBM.

To understand why, just consider: How do we estimate the basic scientific plausibility of a medical claim? It's not nearly as straightforward as one might think. How plausible must a claim be to be worth investigating with an RCT? 10% likely to be true? 50%? 75%? What SBM is great for is rejecting ridiculously implausible health claims, like those made by homeopaths, on the basis of basic science considerations alone, which tell us that they are impossible. It's also great for increasing the rigor of clinical trials by taking scientific mechanism into consideration, and it can be useful for prioritizing which clinical trials to do. I would also argue that SBM can counter the overselling of basic science and early clinical trial results that Ioannidis complains about. Finally, I agree with Ioannidis about how many risk factors for disease have been promoted on the basis of rather shaky evidence. It's not for no reason that I thoroughly enjoyed his way of illustrating this point: going through a cookbook to see which food ingredients have been linked with either increasing or decreasing the risk of cancer and finding that most of them have been, albeit on very shaky evidence.
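Why the prior matters so much can be made concrete with a bit of Bayesian arithmetic of the kind Ioannidis formalized in "Why Most Published Research Findings Are False." The sketch below is a toy Python calculation, with illustrative power and significance values of my own choosing rather than numbers from his paper: given a prior probability that a hypothesis is true, it computes the chance that a statistically "positive" trial reflects a real effect.

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value of a 'positive' trial result.

    prior: pre-study probability the hypothesis is true
    power: probability a true effect yields a positive result
    alpha: probability a null effect yields a (false) positive result
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# A wildly implausible claim (prior ~1%): most positives are false.
print(round(ppv(0.01), 2))  # 0.14

# A plausible claim (prior ~50%): a positive result is usually real.
print(round(ppv(0.50), 2))  # 0.94
```

With a 1% prior, roughly six out of seven "positive" trials are false positives even at conventional power and significance thresholds, while a 50% prior makes a positive result trustworthy about 94% of the time. That asymmetry is the quantitative core of the argument that prior plausibility should inform which RCTs are worth running at all.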

It must be noted, however, that, as Ioannidis himself observed, industry clinical trials are often among the most rigorous out there, "checking off all the EBM boxes" for rigor. The same observation applies to the basic science foundations of these trials. Before a drug company even runs its first clinical trial of a new drug, it has reams of basic science data; the mechanism is known and documented, because the FDA requires it. SBM alone is thus unlikely to have much of an impact on the hijacking of EBM by industry. Sure, it would help to decrease the number of trials that ask the wrong questions or use the wrong analyses. Similarly, applying prior probability to clinical trial analysis would likely decrease the number of seemingly "positive" trials whose results aren't really clinically significant. However, I suspect that SBM would be less useful with respect to surrogate outcomes, because choosing surrogate outcomes is a question EBM is already well-equipped to deal with, and, aside from clinical trials of "complementary and alternative medicine" treatments with very low prior probability, the same can be said of the criteria for success and how to derive inferences.

What's far more likely to have an effect are two things: transparency, and the requirement of more clinically relevant endpoints by both journal editors and government regulatory agencies. Of the two, transparency is likely to matter more, which is why the AllTrials movement is so important: it seeks to make sure that the results of every RCT are recorded somewhere, including the negative trials that all too often go unpublished. As hard as it is to make sure the results of all clinical trials are reported in a timely fashion, herding the cats to agree on what constitutes a "clinically relevant endpoint" and a sufficiently rigorous trial design will be harder still, given the number of journals that publish clinical trial results and the number of government agencies that oversee drug and device approval in various countries.

That's where the work of people like John Ioannidis will be most important. The first step in solving a problem is to identify it, and no one has been able to identify the problems with how science is applied to medicine as effectively and prolifically as John Ioannidis. I only wish he were more cognizant of the blind spot of EBM with respect to basic science than he appears to be.


Science based medicine is an answer only if you use the concept of science properly. For instance, for homeopaths, there is a science of homeopathy. For most of the scientists, science is a collection of papers. Experimental papers. Wrong or right. Relevant or not. If there is no new data, or not obtained with high-tech method, this is not science. Einstein's papers would not be science for them. They pretend to move knowledge forward, but they ignore what they should know, they are only interested in their own business, publishing the data for which they have obtained grants in high impact factor journals.

By Daniel Corcos (not verified) on 04 Apr 2016 #permalink

I agree with Daniel (#1). The assumption that people who are scientists are more interested in how things work and using their discoveries to benefit humanity is one that gets forgotten. In grad school I worked on a very simple (relative to ours) nervous system studying a very basic behavior in an invertebrate. It was very clear, however, that if you went at all against what the leader in that field said (which was that learning and memory in this behavior was driven by a monosynaptic sensory cell to motor neuron connection), you would be blackballed. This "leader" later got himself a Nobel prize, though I've yet to see anything worthwhile come out of his finding outside of his grants and fame. I took a postdoc working for someone looking at this same system with a different technique. His conclusion was that even this simple behavior could not be understood because it was spread across too many neurons to delineate the neural pathways. I was also told if I chose to study this system after my postdoc I would not be given any access to the necessary equipment to study it and that I should go find a new system to study. Neither group really wanted to use science to understand what was happening--they just wanted their piece of pie and fame/tenure/stability. Now in medicine I see the same nonsense. I see it in all the quacks that cherry pick bad papers to support whatever nonsense they are selling. I see it in medical boards with this MOC nonsense. I see it in "meaningful use" requirements for physicians to get paid fully by medicare/medicaid plans. Lots of doctors are "checking off the boxes" on their electronic health records--not for rigor, but for reimbursement. I see stupid everywhere.

By Chris Hickie (not verified) on 04 Apr 2016 #permalink

Not a fan of Kandel, then?

By herr doktor bimler (not verified) on 04 Apr 2016 #permalink

Isn't that sort of a given though? Kinda like saying rocket science is only the answer if you use the concept of rocketry properly.

Or something. I'm not too smart, nor a scientist of any colour. :3

@ Amethyst
I think that most people use the concept of rocket properly. I don't mean they use rockets properly ;-)
But most people do not use the concept of science properly. Most of them think science is something which they can trust unquestioningly, but religion is better for that.

By Daniel Corcos (not verified) on 04 Apr 2016 #permalink

For most of the scientists, science is a collection of papers. Experimental papers. Wrong or right. Relevant or not. If there is no new data, or not obtained with high-tech method, this is not science.

I can understand how this happens in biomedical fields, where often there is no theory to organize the data.

My background is in physics, where we too often see the opposite problem: elegant theories with little or no constraint from data. String theory is the obvious poster child, but I see similar things in my field. People construct elaborate models of how the system supposedly works. This is actually quite hard to do, because many of the relevant systems are what's known in the business as "stiff": to model it accurately you have to keep track of temporal or spatial scales much smaller than the ones you are actually interested in, so often the modelers will make approximations (not always valid) to deal with this issue. Actual data are quite scarce, and seldom from the location you want (orbital mechanics limits our ability to do this). There are people who say that they believe no data until confirmed by theory, and some of those people are not joking.

Of course it spills over into funding. There is a controversy in my field which has been raging since before I started grad school, and the terms of the debate have hardly altered in the 25 years I have observed it. NASA review panels give disproportionate weight to one primary reviewer--if your proposal takes one side of the aforementioned debate and your primary reviewer is on the other side, you won't get funded.

By Eric Lund (not verified) on 04 Apr 2016 #permalink

Chris Hickie says (#2),

I see stupid everywhere.

MJD says,

If seeing stupid causes you pain the mega pharmaceutical company Johnson & Johnson has a product called BENGAY that is highly recommended.

Basically, BENGAY is a skin irritant that one copiously spreads onto the skin in an effort to induce pain-desensitization.

Warning: BENGAY may be considered alternative medicine in that the application of natural ingredients (e.g., methyl salicylate, menthol, camphor) are designed to distract the mind without actually fixing the underlying problem.

@ben goldacre,

Scientists citing problems with alternative medicine make me skeptical. One can't take the gay out of BENGAY.

By Michael J. Dochniak (not verified) on 04 Apr 2016 #permalink

@ Eric
I agree that in the absence of data, you cannot do science (maybe philosophy?). But I can understand people saying they believe no data until confirmed by theory (depending of course on the theory), because it is exactly how I react with homeopathy. The theory against homeopathy is strong enough that no new data can challenge it.

By Daniel Corcos (not verified) on 04 Apr 2016 #permalink

@Daniel #5: "Most of them think science is something which they can trust unquestioningly, but religion is better for that."

I would say, rather, that most people think science is something that is to be believed unquestioningly; if it's science it has to be right. If it's wrong, then people conclude scientists either are stupid, on the take, or inventing a religion.

Because of this assumption that if it is called science it has to be right and unchanging, when new evidence changes our understanding, people get very upset. It gives traction to people who want science to fail because their own belief systems don't cut the mustard (anti vaxers, climate deniers, and special interests whose profit centers are based on science NOT having all the answers).

To speak to the issue of RCT with "wrong" endpoints: time is a terrible constraint. Yes, everyone would like to look at the most meaningful endpoint: overall survival. But no one is interested in waiting 20 years to find out if a trial worked or not.
So you pick other endpoints you hope will tell you the same thing but sooner. And then you hope people are willing to be part of the post-approval surveillance. And you hope that your company will exist long enough to collect the data. And you hope that your intermediate endpoints were good.

It's not that the people who design RCT are trying to cheat. They're trying to get answers in the time that is allotted.

By JustaTech (not verified) on 04 Apr 2016 #permalink

I think that most people use the concept of rocket properly.

Rocket science has proven that the optimum proportion of rocket in a salad is 22%. No more, no less.

By herr doktor bimler (not verified) on 04 Apr 2016 #permalink

hdb@11 -- For some reason I'm reminded of Tyrone Slothrop in Gravity's Rainbow, escaping from an ambush in his nose-cone helmet and green cape : "Fickt nicht mit dem Raketenmensch!"

And of course Eric would agree that "believe no data until confirmed by theory" is sometimes appropriate. The supposed faster-than-light neutrinos a few years back are a perfect example; one was faced with a choice of either (a) throwing over the tenets of special relativity, which has been spectacularly successful for over a century and is woven deep into the warp and woof of all of contemporary physics, or (b) thinking that maybe the experimenters had screwed up somehow. Of course, (b) turned out to be the case: it came down to an unaccounted-for timing error caused by a loose cable.

By palindrom (not verified) on 04 Apr 2016 #permalink

Oh, and just to toot Eric's horn a little, he is a real rocket scientist! Really.

By palindrom (not verified) on 04 Apr 2016 #permalink

And of course Eric would agree that “believe no data until confirmed by theory” is sometimes appropriate.

Well, yes. Special relativity is a theory that has been tested by many independent means, so experimental data that seem to contradict it have to be checked thoroughly for mistakes like loose cables in the detector. Quantum mechanics is another such theory, and is one of the things that would have to be wrong in order for homeopathy to be right.

The problem is that some physicists (and biologists, if Chris@2 is to be believed) try to force data into theoretical paradigms that are not so well proven. There is an anecdote in Surely You're Joking, Mr. Feynman where Feynman is looking over experimental results that differed significantly from what the then accepted theory of that system predicted. When he looked into the matter, he found that the accepted theory was based on a single data point at the extreme end of an earlier experiment. When he calculated the predictions of the alternative model that the earlier experiment had supposedly ruled out, he found that the predicted value and the later experimental result agreed within the errors of the later experiment.

By Eric Lund (not verified) on 04 Apr 2016 #permalink

"Oh, and just to toot Eric’s horn a little, he is a real rocket scientist! Really."

I used to be one, too. I am always amused by the "You don't need to be a rocket scientist" comments, especially from people who think searching Google carries more weight than actually studying and working in a particular scientific or technical field. I really love asking "energy healing" folks about the difference between kinetic and potential energy (which is actually a high-school physics question).

@ Eric
"Quantum mechanics is another such theory, and is one of the things that would have to be wrong in order for homeopathy to be right." The reverse is likely to be true, explaining why homeopathy believers have difficulty with quantum mechanics ;-)

By Daniel Corcos (not verified) on 05 Apr 2016 #permalink

Daniel @16 --- Hey, everybody has difficulty with quantum mechanics, even people who do it for a living (with tiny, tiny little wrenches, no doubt). So that's not something to hold against homeopathy.

Of course, there are plenty of other things to hold against homeopathy ...

By palindrom (not verified) on 05 Apr 2016 #permalink

@ palindrom
Well, this time, I put the joking smiley, but apparently something went wrong.

By Daniel Corcos (not verified) on 05 Apr 2016 #permalink

Daniel@18 -- Don't worry, I didn't think you were serious!

By palindrom (not verified) on 05 Apr 2016 #permalink

Palindrom says " *everybody* has difficulty with quantum mechanics"

Which is precisely why the woo-meisters like to toss it about so much. Mikey usually adds cognitive psychology/ physiology as well which he botches as badly as he botches everything else.

In trying to impress, he neatly illustrates his paucity of education and judgment. The other idiot attempts to impress with his catalogue of quotations from French and German philosophers whose names he usually mispronounces.

Important Rule of Woo-
Try to convince your audience of your brilliance because they'll never come to that conclusion on their own.

By Denice Walter (not verified) on 05 Apr 2016 #permalink

the woo-meisters like to toss [quantum mechanics] about so much

I think it was from this blog that I saw a link to an anecdote about a woman who was inspired by Deepak Chopra to study quantum mechanics...and one of the things she learned from studying quantum mechanics is that Deepak Chopra is full of bull.

Frequently, sounding smart is more effective--and too often, easier--than actually being smart. This is how people like Chopra and Adams are able to make a living at what they do.

By Eric Lund (not verified) on 05 Apr 2016 #permalink

@ Eric Lund:

You know, of late, I've come to think of woo-meisters' actions as performance art.
They mime what they think scientists/intelligent people do (Mike), how they speak and write (Null), and how they investigate problems (Andy in his film).

It's a cargo cult- an assemblage of actions, costumes and props.
Mike has a LAB; Null describes his RESEARCH, prescribing PROTOCOLs; Andy shows us the steps he takes to ferret out the TRUTH!

By Denice Walter (not verified) on 05 Apr 2016 #permalink

palindrom: "(with tiny, tiny little wrenches, no doubt)"

Nanowrenches?

All the physics I ever had to do was well within the limits of Newton, so I am not too cognizant of teeny-tiny metric prefixes. So I looked them up --- would yoctowrenches (10^-24) be small enough?

@MJD: do you know why some pain management doctors prescribe items that are counter-irritants? Why might they be effective?

@Chris: Depends what exactly you are doing. Atomic nuclei are on the femtometer (10^-15 m) scale. Atoms and small molecules start around 0.1 nm (a unit sometimes known as an ångström, after the Swedish physicist) and go up to nanometer scales. Electrons are not known to have intrinsic size, but quantum effects spread them out in a probability distribution over atomic or molecular scales. Cutting-edge research these days (at least what I have seen of it in colloquia) looks at somewhat larger systems, but nanometer scales are still the scales of interest: ideal atomic-size transistors involve inducing a defect at a precise point in a semiconducting monolayer. So "nanowrench" would be an appropriate term (unless you are doing nuclear physics, for which a femtowrench would be more appropriate).
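To put those scales side by side numerically, here's a quick sketch in Python (the values are just the order-of-magnitude figures above, not precise sizes of any particular atom or nucleus):

```python
# Order-of-magnitude length scales, in meters (illustrative, not precise).
femtometer = 1e-15   # scale of atomic nuclei
angstrom   = 1e-10   # 0.1 nm: where atoms and small molecules start
nanometer  = 1e-9    # upper end of small-molecule scales

# An atom is roughly a hundred thousand times wider than its nucleus,
# which is why a "nanowrench" and a "femtowrench" are very different tools.
ratio = angstrom / femtometer
print(f"atom/nucleus size ratio is about {ratio:.0e}")
```

In other words, if the nucleus were the size of a marble, the atom would be a couple of kilometers across.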

But beware the Heisenberg Uncertainty Principle. There is a tradeoff between how well you can know the position of something and how well you can know its momentum: specifically, their product must be on the order of or greater than the reduced Planck constant. (There is a precise factor I am forgetting because I don't have my quantum textbooks handy right now.) Energy and time are subject to the same constraint. In wave mechanics this is mathematically equivalent to the bandwidth theorem, which is why Heisenberg (who preferred matrix mechanics) got to attach his name to the phenomenon.
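Written out, the standard form of the relations is (with \(\hbar\) the reduced Planck constant; the precise factor is 1/2):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
```

The position-momentum relation is exact; the energy-time version is a looser heuristic, since time is a parameter rather than an operator in ordinary quantum mechanics.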

By Eric Lund (not verified) on 05 Apr 2016 #permalink

There is a tradeoff between how well you know the position of something and how well you know its momentum...

So, what, when we use a femtowrench, we can know how tight to torque it, but not if it has right or left hand threads?

"So, what, when we use a femtowrench, we can know how tight to torque it, but not if it has right or left hand threads?"

After locating the femtonut, when I attempt to put the femtowrench onto it, it's no longer there. Smart femtomechanics entangle themselves with whatever they're working on.

Eric Lund: "@Chris: Depends what exactly you are doing. Atomic nuclei are on the femtometer (10^-15 m) scale. Atoms and small molecules start around 0.1 nm (sometimes known as an ångström, after the Swedish chemist) and go up to nanometer scales."

Thank you very much. That helps me put a scale to the tiny things. And this is why I love this blog.

MI Dawn (#26) asks,

do you know why some pain management doctors prescribe items that are counter-irritants?

MJD says,

Please advise...

For public safety, I'd like to retract the term "copiously" in post #7 because too much BENGAY can cause methyl salicylate poisoning.

http://www.nytimes.com/2007/06/10/nyregion/10cream.html?_r=0

By Michael J. Dochniak (not verified) on 05 Apr 2016 #permalink

So, what, when we use a femtowrench, we can know how tight to torque it, but not if it has right or left hand threads?

Ahem.

Eric Lund,

Can I give you the internet? I learned something good today with your comment :D

Alain

# Narad

I'm applying today.

By jrkrideau (not verified) on 06 Apr 2016 #permalink

herr doktor bimler (#3): I don't like Kandel one bit. To me he is the antithesis of science--a perversion of wanting to understand turned into a "slash and burn" conquest for fame and fortune. And pity the fool who got in his way.

By Chris Hickie (not verified) on 06 Apr 2016 #permalink

@MJD: in very simplistic terms, there are two sets of neural pathways: pain and sensation. Counterirritants ramp up the sensation side, slowing or blocking traffic on the pain pathways, so the person feels less pain. Kinda like a highway at rush hour.

Very interesting take on the problem areas of EBM. I'm fairly well known in some circles as an advocate for those who have been disabled by exposure to biocontaminants in water damaged buildings. (aka Toxic Mold Issue). But that's not what I really do. I'm an advocate for integrity in health marketing when setting public policies and designing physician educational materials for the sake of public health.

You are right about EBM causing quackery. I know for a fact that when EBM is not based on science but is sold as proper physician education by government-funded "nonprofit" medical associations, it causes two-sided MASS quackery.

Over the Toxic Mold Issue, we've got quacks on one side who, based on a greatly flawed (fraudulent) EBM position, tell their patients it's proven that the biocontaminants cannot possibly be harming them.

This quackery causes patients to seek alternative medical advice. In some instances it drives them into the arms of more quacks of the opposite persuasion who tell them all their ills are caused by Toxic Mold.

If Evidence Based Medicine is not based on evidenced science, it's not evidence-based anything. When non-evidence-based medicine is crammed down the throats of the public by physicians who rely on DHHS-funded "nonprofits" for their information, the public loses faith in mainstream health care advice.

"Vaxxed" is a good example of the underlying problem. It's being promoted as if it were proven that vaccines never harm previously healthy people. But that's not true. There have been many settlements for disabilities caused by vaccines. Does that mean it's proven that vaccines are the cause of the ever-increasing autism rate? No. It just means that it's false to sell the concept that vaccines are proven never to harm healthy people.

What is really being scientifically proven by this heated vaccine debacle is that "for every action there is an equal and opposite reaction." The more people are falsely told it's proven that vaccines are always safe, the more they will believe the claim that vaccines are a main cause of autism (and that the gov't is hiding this "known fact" from them).

I don't know what the answer is, but I do know that it's extremely important that anyone being allowed to use the label of EBM to lend credibility to their position be able to back it up with solid science. Unfortunately, that is not currently happening, and it's causing a lot of problems in many areas of medicine and public health policy.

PS. Because of problems with how EBM is sold and marketed in the U.S., I'm not a big fan of Choosing Wisely.

"Nov 2015 American College of Medical Toxicology, Choose Wisely to Sunset Your Mold Statement" https://katysexposure.wordpress.com/nov-15-american-college-of-medical-…

By Sharon Kramer (not verified) on 08 Apr 2016 #permalink

@Sharon Kramer, I have never heard a reputable medical professional make such a claim. Usually, the only people I've seen making such a claim are quacks or anti-vaxxers, so that they can then knock down the strawman they just created. The most you will hear reputable medical professionals or supporters of vaccination claim is not that vaccines are risk-free, but that the possible harm from vaccines is orders of magnitude less than the possible harm from the diseases they protect against.

By John Phillips (not verified) on 09 Apr 2016 #permalink