Is evidence-based medicine "sufficient" for alternative medicine research?

One of the consistent themes of this blog since the very beginning has been that alternative medicine treatments, before being accepted, should be subject to the same standards of evidence as "conventional" medical therapies. When advocates of evidence-based medicine (EBM) like myself say this, we are frequently met with excuses from advocates of alternative medicine as to why their favored treatments cannot be subjected to the scientific method in the same way that medicine has increasingly applied it to its own treatments over the last few decades, in the process weeding out treatments of dubious efficacy that had been handed down through the years as dogma. To me, these excuses usually sound like a case of special pleading, and I have rarely found them even mildly convincing. However, I try to remain open-minded, and periodically I see a variant of the same sorts of arguments that catches my interest. So it is with this press release:

Science Daily -- Evidence-based medicine (EBM), is widely accepted among researchers as the "gold-standard" for scientific approaches. Over the years, EBM has both supported and denied the value of allopathic medicine practices, while having less association with complementary and alternative medicine (CAM) practices. Since most CAM practices are complex and focus on healing rather than cure the question arises as to whether EBM principles are sufficient for making clinical decisions about CAM. That is the focus of this special issue of Integrative Cancer Therapies by SAGE Publications.

"While evidence-based medicine's emphasis on randomized controlled trials has many benefits, researchers and clinicians have found that this focus may be too limited for complex systems such as complementary and alternative medicine (CAM), and other approaches to healing," said Wayne B. Jonas, MD, president and chief executive officer of the Samueli Institute and this special issue's guest editor.
The December special issue of Integrative Cancer Therapies presents articles that explore EBM and alternative strategies to EBM for evaluating CAM and in particular, options for conducting CAM research on cancer. This issue discusses whether clinical research on CAM using randomized placebo-controlled trial designs is the best strategy for making evidence-based decisions in clinical practice, and describes strategies that use "whole systems" and "integrated evaluation models" as potential new standards for research on CAM for cancer.

Sounds like more excuse-making to me. However, just to be fair, I went to the issue and downloaded two of the key articles on this theme: an editorial, "'Top of the Hierarchy' Evidence for Integrative Medicine: What Are the Best Strategies?" by Keith I. Block and Wayne B. Jonas, and an article, "Evidence Summaries and Synthesis: Necessary But Insufficient Approach for Determining Clinical Practice of Integrated Medicine?" by Ian D. Coulter.

I'm not impressed.

Let's look at Dr. Coulter's article first, because it's the more substantive of the two.

The heart of evidence-based practice is in fact to be found in the use of evidence gained from systematic reviews or more correctly in the synthesis of evidence from systematic reviews. But just as studies vary in the quality of the design so do systematic reviews, and it is therefore necessary for those wishing to make clinical decisions based on this evidence to evaluate the evidence summaries and synthesis themselves. This article examines the criteria available for evaluating the quality of the evidence summary and synthesis. It provides a set of questions for doing this: who did the review; what was the objective of the review; how was the review done? Together these questions allow us to determine the trustworthiness of the review. However, that by itself is insufficient for making clinical decisions. The article suggests that this occurs because the very studies that improve the quality of reviews, that is, the randomized controlled trials, deal with efficacy and not effectiveness. Because they tend to be conducted under ideal conditions, they seldom provide the type of information needed to make a decision vis-à-vis an individual patient. The article suggests that observation studies provide much better information in this regard. The challenge here, however, is to develop standards for judging quality observation studies. In conclusion, systematic reviews and syntheses of evidence are a necessary but an insufficient method for making clinical decisions.

In actuality, Dr. Coulter's article is a reasonably good summary of the strengths and weaknesses of randomized trials, meta-analyses, and EBM. It's also a pretty good summary of how one should look at a systematic review of the literature or a meta-analysis to determine how valid it may or may not be, and he concedes some of the precepts of EBM, such as the one that states that accumulated "experience" and "expert opinion" are often the weakest forms of evidence. (Never mind that most evidence cited for the efficacy of alternative medicine consists of nothing but "accumulated experience.") Where he veers into questionable assertions is when he starts harping on "effectiveness" versus "efficacy." To me it's a bit of an artificial distinction. Efficacy simply means that a treatment has been shown to work in a randomized clinical trial (RCT); effectiveness means that a treatment works in normal clinical practice. He argues that different treatments with the same or similar efficacy can show wildly different effectiveness when put into clinical practice. Fair enough, and he cites two six-year-old New England Journal of Medicine articles (1, 2) that concluded that well-designed observational studies do not overestimate treatment effects later identified in RCTs. However, I would point out that treatments that fail to show efficacy in RCTs are highly unlikely to do any better in "real life" practice, and the vast majority of alternative therapies fall into that category. Thus, bringing up the point that observational studies may be an alternative to RCTs is irrelevant to whether EBM is up to the task of evaluating the efficacy or effectiveness of alternative medicine. Besides, as an accompanying editorial pointed out:

Any systematic review of evidence on a therapeutic topic needs to take into account the quality of the evidence. Any study, whether randomized or observational, may have flaws in design or analysis. Both types of study may have quirks in methods of recruiting patients, in the clinical setting, or in the delivery of the treatment that can cast doubt on the generalizability of the results. And for some studies, the reports are never published at all, especially if the findings are negative. These problems of heterogeneity and publication bias are relevant to all comparisons of evidence from randomized, controlled studies and observational studies. However, all observational studies have one crucial deficiency: the design is not an experimental one. Each patient's treatment is deliberately chosen rather than randomly assigned, so there is an unavoidable risk of selection bias and of systematic differences in outcomes that are not due to the treatment itself. Although in data analysis one can adjust for identifiable differences, it is impossible to be certain that such adjustments are adequate or whether one has documented all the relevant characteristics of the patients. Only randomized treatment assignment can provide a reliably unbiased estimate of treatment effects.

Thus, even though observational studies can in some cases provide a fairly reliable estimate of the treatment effect, that does not mean that they are inherently as good as RCTs. Indeed, the vast majority of them are not, particularly in the realm of alternative medicine studies. In any case, it's very clear that Dr. Coulter's article is simply a much more sophisticated version of the same whine that we've been hearing from advocates of alternative medicine for years over RCTs that fail to find any efficacy. I'll give Dr. Coulter credit for at least putting together a semi-reasonable case that observational studies can be of considerable value in assessing the effectiveness of a therapy. He just doesn't demonstrate that the observational studies claimed as evidence for the effectiveness of this or that alternative therapy rise to the level necessary to be considered potentially as valid as an RCT. At least Dr. Coulter dismissed the article I castigated a few months ago about "microfascism" as seeming "a little extreme."
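
To make that point about selection bias concrete, here's a toy simulation (purely illustrative; the recovery probabilities and selection rates are made up by me and have nothing to do with any of the studies discussed): a treatment with zero true effect looks beneficial in an "observational" comparison simply because healthier patients are more likely to choose it, while randomized assignment washes that bias out.

```python
import random

random.seed(0)
N = 100000  # simulated patients per setting

def recovered(healthy, treated):
    # True model: recovery depends only on baseline health;
    # the treatment itself has zero effect (treated is deliberately unused).
    return random.random() < (0.8 if healthy else 0.4)

# Observational setting: healthier patients are more likely to pick the treatment.
obs = []
for _ in range(N):
    healthy = random.random() < 0.5
    treated = random.random() < (0.7 if healthy else 0.3)   # selection bias
    obs.append((treated, recovered(healthy, treated)))

# Randomized setting: treatment assigned by coin flip, independent of health.
rct = []
for _ in range(N):
    healthy = random.random() < 0.5
    treated = random.random() < 0.5                          # randomization
    rct.append((treated, recovered(healthy, treated)))

def recovery_rate(data, treated_flag):
    outcomes = [r for t, r in data if t == treated_flag]
    return sum(outcomes) / len(outcomes)

print("Apparent observational 'effect':",
      round(recovery_rate(obs, True) - recovery_rate(obs, False), 3))  # ~0.16
print("Apparent randomized 'effect':  ",
      round(recovery_rate(rct, True) - recovery_rate(rct, False), 3))  # ~0.00
```

The "effect" in the observational arm is entirely an artifact of who chose the treatment, which is exactly the deficiency the NEJM editorial describes.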

The editorial, on the other hand, is simply a rehash of the common whine we hear from alties. The statement that perhaps best epitomizes where they're coming from is this, a complaint about ranking RCTs at the top of the hierarchy of clinical scientific evidence for the efficacy of any therapy and how that means that such trials are weighted much higher than case series or anecdotal reports:

It is clear that this approach to EBM misses emergent properties of complex systems when those system components lose their power if separated into parts. Healing approaches and many complex integrative systems of medicine present exactly such complex systems. Healing is defined as the process of recovery, repair, and reintegration that persons and biological systems continually invoke to establish and maintain homeostasis and function.7 These processes are the most powerful force we have for recovery from illness and the maintenance of well-being and so the most important for clinical practice. Healing models do not postulate specific or direct causal links to disease, because they target inherent adaptogenic responses and assume that redundancy and multiple pathways are an inherent characteristic of every system. We know from placebo and behavioral medicine research, for example, that manipulation of the social and cultural context, practitioner-patient-family communication strategies, the physical environment, and simpler verbal and nonverbal information can markedly change outcomes, often to a much greater extent than specific drugs or surgical treatments, especially in chronic disease. In fact, integrative treatment systems may explicitly use such behavioral research in adapting to the needs of their patients, for example, determination of whether patients are "monitors" or "blunters" before delivering news of a cancer diagnosis or recurrence, so that the level of detail can be tailored to the preferred style of handling distressing information.

All of this is well and good as far as it goes (albeit laden with altie jargon), but there are two problems. First, no convincing evidence is presented that EBM can't deal with "emergent properties of complex systems" (whatever the authors mean by that) or that the "interaction" of such alternative medical systems is necessary to their efficacy. Second, this discussion assumes that somehow alternative practitioners are better at dealing with "complex" systems of disease than "conventional" doctors. This is an implication that I find unconvincing at best and risible at worst. Alternative medicine practitioners claim to be better at looking at the "whole patient" and "complex chronic diseases," but I have yet to see any evidence that they are. After all, "conventional" internists might routinely deal with a patient with diabetes, hypertension, heart disease, and Parkinson's disease, for example. They have to be aware of the interactions of all these diseases and how changing the therapy for one of them might impact the state of the others. Can an "alternative" practitioner do better? I doubt it. Scientific medicine may have its shortcomings, and indeed it sometimes gets lost in the complexities of managing multiple chronic conditions, losing the "forest for the trees," so to speak. Even so, when proposing the use of remedies not supported by any compelling evidence of efficacy, the onus is on the alternative medical practitioner to show that he or she can do as well or better, and that is where alternative medicine routinely fails.

Of course, it's pretty clear that the real agenda behind these complaints is not a rational discussion of the strengths and shortcomings of EBM. After all, that sort of discussion goes on all the time in the medical literature, in medical and surgical conferences, and among academic and nonacademic physicians. It's a lively debate, quite unlike the "monolithic" stance that purveyors of alternative medicine like to portray us "conventional doctors" as having. In fact, the real purpose of this discussion seems to be to make the claim that alternative medicine is just too darned complicated to be studied by scientific methods, and that's why scientific methodology and RCTs routinely fail to identify efficacy better than that of a placebo in the vast majority of studies of alternative medical therapies. Those pesky RCTs, you see, miss the "nuances" of the "complexity" of "emergent systems" or the "heterogeneity" of the patient population (each member of which, we are told by alties, must have his or her treatment "individualized") with which these treatments are claimed to be dealing. Such complaints will exaggerate, for example, a known shortcoming of RCTs (that the patient populations are relatively homogeneous, so that the treatment effect may diminish when the treatment is applied to the population at large) and use it to claim that RCTs are inadequate to study alternative medicine modalities.

This is, as Douglas Adams would put it, a load of dingo's kidneys. There is nothing inherent in either the diseases and conditions for which alternative medicine is used or in the alternative therapies themselves that is any less amenable to examination by scientific medicine and RCTs than any "conventional" therapy, unless one invokes a treatment that conflicts with huge swaths of chemistry, physiology, physics, and pharmacology: a treatment like homeopathy, for example. In such a case, the burden of proof still remains on the person arguing that such therapies should be taken seriously, given their extreme scientific implausibility. Indeed, in the case of any alternative therapy, the burden of proof to justify using a standard of evidence different from what is accepted for conventional therapies lies on the person advocating such special pleading. Simply pointing out the problems with EBM and RCTs, problems that are well appreciated in the medical community, is not enough.

In the end, much of this edition of Integrative Cancer Therapies boils down to nothing more than a case of special pleading, albeit in one case particularly sophisticated special pleading that is not immediately evident as such. No convincing argument has been made that "alternative" or "complementary" therapies should be judged under scientific and clinical standards any different from those used to evaluate new conventional therapies.

And they shouldn't be.

Advocates of alternative medicine should also consider that what's good for the goose is good for the gander. If alternative medicine advocates are going to claim that they should be evaluated under a "systems" or "wholistic" approach, then they had better be careful what they ask for. They might get it, and, if conventional medicine were also subject to looser standards of evidence, questionable "conventional" therapies would appear to have better efficacy as well, to the detriment of science and, more importantly, the patients who rely on it for improved therapies. That the methodology and standards of EBM have deficiencies is not a reason to water them down or alter them to accommodate alternative medicine; rather, they should be improved and made more rigorous in order to apply them to all medicine.


Somehow, right after I saw the word "allopath", I knew that the issue would be just as you described.

And your point that alties rely on fallacious special pleading arguments is well taken. I know of several.

Is "adaptogenic" a real word with a medical defintion, or just altie jargon?

Blech! I work in the complex systems field. I've done math on the problem of "emergent properties". This is hyped-up woo jabber designed to give a special plea a veneer of scientific respectability.

All diseases are, philosophically, "emergent properties of complex systems". Can we describe smallpox in terms of interactions at the atomic level? No, but that didn't stop medicine from stamping it out. Rule of thumb: every time you see the word "emergent", replace it with "magical" and see if the content of the sentence is affected. All too often, saying one word means no more than saying the other, because the speaker is invoking "emergence" to cloud the listener's thinking and conceal the shallowness of the speaker's ideas.

Orac wrote:

That the methodology and standards of EBM have deficiencies is not a reason to water them down or alter them to accommodate alternative medicine; rather they should be improved and made more rigorous in order to apply them to all medicine.

But if we water down our standards until not a single molecule is left, won't they be even more potent?

I wonder if complex systems and "emergent properties" will be adopted into the same anti-science jargon which has appropriated quantum mechanics and chaos theory. All three subjects can appear, wrongly, to support the contention that Traditional Science is just one narrative among many, which is closely allied to the idea that alternative ways of knowing lead to alternative medicines. Mix this with a cup of stale postmodernism, seasoned with a tablespoon of Strong Programme, and you get a full course of woo.

Maybe it's because my native language is not English, but I don't see any difference between "heal" and "cure". In Spanish a similar (non-)distinction can be traced between "sanar" and "curar", but in common use "curar" applies to virtually every form of fighting pain and disease, including EBM and some traditional CAMs. "Sanar" is used almost exclusively in quasi-mystical or religious contexts. I can't recall anyone who's not an altie (at least in the language used in western Mexico during the 20th and 21st centuries) using "sanar" instead of "curar" or "aliviar".

Just for the record.

By Martín Pereyra (not verified) on 16 Jan 2007 #permalink

While reading this, a potential study structure for some forms of naturopathic medicine -- especially homeopathy -- occurred to me. You'd need an office with two identical meeting/treatment rooms: one with "actual" naturopathic medicines available and one with pure placebos. Take two naturopaths, of the same type and school, and randomly assign one to each room, each day. Randomly assign patients to each one as well, with follow-up surveys.....

I am pleased to see that you're thinking along these lines, Ahistoricality. A similar thought occurred to me some time ago when I Googled for "lancet homeopathy" and followed the top link out of curiosity. It's a bit of homeopathic apologetics aimed at undercutting the conclusion of the Lancet's meta-analysis, viz.: that homeopathic remedies work as well as placebos. The author, Cathy Wong, objects that the study failed to examine homeopathy as it really is:

The problem is there is no such thing as clinical homeopathy. No one trained and licensed in homeopathy would recommend a single, identical remedy for patients with a certain disease or condition.
...
To give everyone with a certain disease or condition the same remedy is not considered homeopathy. The Lancet meta-analysis included studies that may have been statistically sound, but should have been excluded because they lacked a fundamental understanding of what homeopathy is.

OK, I thought -- fair enough. If the problem is simply that we haven't applied the practitioner's holistic method properly, let's do the following: Let m homeopaths examine n patients, conscientiously applying their homeopathic diagnostic skills to their fullest. Prepare prescriptions for all n patients, but replace a randomly selected n/2 with placebos. See if there is a statistically significant difference in outcomes.
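
Just to show the shape of that analysis, here's a minimal sketch (entirely hypothetical data, assuming a simple binary "improved" outcome per patient; it's my illustration of the design above, not any actual study):

```python
import random
from math import sqrt, erf

random.seed(42)
n = 200                                              # patients, each individually "diagnosed"
placebo_ids = set(random.sample(range(n), n // 2))   # half get placebos, blinded swap

# Hypothetical follow-up results: 1 = improved, 0 = not improved.
# Here both groups improve at the same underlying rate (i.e., the null is true).
results = {i: int(random.random() < 0.35) for i in range(n)}

verum   = [results[i] for i in range(n) if i not in placebo_ids]
placebo = [results[i] for i in range(n) if i in placebo_ids]

def two_proportion_z(a, b):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = sum(a) / len(a), sum(b) / len(b)
    pooled = (sum(a) + sum(b)) / (len(a) + len(b))
    se = sqrt(pooled * (1 - pooled) * (1 / len(a) + 1 / len(b)))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(verum, placebo)
print(f"improved: remedy {sum(verum)}/{len(verum)}, placebo {sum(placebo)}/{len(placebo)}")
print(f"z = {z:.2f}, p = {p:.3f}")   # no significant difference expected under the null
```

The point is that the homeopaths get to individualize every prescription exactly as they insist; only the dispensing is blinded and randomized.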

What could be simpler? Has anyone tried it?

If their methods are too complex to study systematically, how do they develop or practice them? If there are no consistent criteria, what allows one to select a treatment? How can you even teach something that is inconsistent? A system without method and standards isn't much of a system at all. I find it hard to swallow that something is so complex it can't be assessed, but can fairly easily be put to use.

What Matt said above.

Whoever wrote the article seems to have the common logical disability that afflicts so many in the alternative medicine field: "most CAM practices are complex and focus on healing rather than cure". Well, if you know something is healing, then you can test whether it heals better than placebo!
And how is "evidence-based medicine" a gold standard? If it's not evidence-based, it's simply made-up stuff. It's the quality of evidence that counts. Maybe all those therapies that are not in the evidence-based category should be labelled in large letters as "Non-evidence-based therapy".

Sure, but you can probably see yourself why making that argument is a losing strategy. As long as your objection is to an alternative methodology, that objection can always be countered by an endless series of obfuscatory explanations:

  • Hey, our methods are complex because the body/mind is complex.
  • We do, too, systematize -- we just don't use the systems of Western Medicine.
  • Our criteria need not be "consistent" within your rigid, linear conception of that term.
  • We successfully teach our students a flexible, holistic method of evaluating the patient's entire person, and we graduate hordes of zoomeopaths every year.

And so on. You will never, ever get any serious traction by telling an alt-med advocate that his entire system is a nebulous pile of crap. If you don't believe me, try it some time. On the other hand, asking for objective evidence of results, and insisting on rigorous methods of assessing that evidence, at least keeps the debate within a defined boundary. As we've seen above, that's no guarantee of success -- but at least there's hope.

I am not a native speaker of Spanish, as Martin is - in fact, I am a lifelong Chicagoland resident. However, I have to agree with his assertion; neither he nor I see any difference between the verbs "to heal" and "to cure." To me, "to heal" is "to cure." I realize that doctors cannot always either heal or cure us. I myself am the recipient of much medical therapy, but I can't be healed or cured by modern medical methods given my medical history. But to me, the two verbs are the same, no matter how much the New Age woomeisters might try to define them differently.

By Robin Peters (not verified) on 16 Jan 2007 #permalink

This is pretty elementary stuff. "Cure" increases current hit points by some fixed quantity. "Heal" removes status conditions, such as poison, but sometimes has no effect on extreme conditions (such as zombification).

What, am I really the only person here who plays Final Fantasy? ^_^

Nope. And you forgot the original: There, Cure recovered hitpoints for one character. Heal did smaller amounts for all characters.

Oh, and elsewhere, heal recovers all hp and nearly all status conditions, while cure (x) wounds merely restores hp.

Nope. And you forgot the original: There, Cure recovered hitpoints for one character. Heal did smaller amounts for all characters.

Yes, that's right. I am shamed by my omission.

Oh, and elsewhere, heal recovers all hp and nearly all status conditions, while cure (x) wounds merely restores hp.

Bah! D&D is a pseudoscience. Those spells are all cast by Clerics.

Re "healing" vs. "curing": I also do not get the dstinction; in german the word "cure" ("kurieren") is even not really used health-related, only in a ironic fashion ("Das hat mich echt kuriert...").
But I also wondered about this statement:

Healing models do not postulate specific or direct causal links to disease, because they target inherent adaptogenic responses and assume that redundancy and multiple pathways are an inherent characteristic of every system.

For me that translates to: we always apply the same "treatment", regardless of the disease, because in the end the multiple pathways will take care of the healing anyway. This means that patients will in fact not be treated as individuals; everyone will get the same cure-all, as the disease does not matter. This in fact opens up a very easy way to test CAM, since you do not need to take the individual into account: the same energy balancing should work for everybody.
These folks are dangerous. They are only a step away from Christian Science.

But there is at least one difference: you can cure a ham, but you can never heal it.

By Christophe Thill (not verified) on 17 Jan 2007 #permalink

No! They are even worse than the Christian Scientists! The CS at least admit to themselves that they're leaving it up to a higher power to fix, and are prepared to 'live with the consequences' (i.e. let their loved one die, often painfully) if it doesn't work (because it was 'all for a reason, even if we don't understand'). "These folks" are actually convinced that they are taking useful action.

By Justin Moretti (not verified) on 17 Jan 2007 #permalink

With regards to "curing" and "healing"; "curing" would usually refer to fixing a specific, single problem, like a disease, whilst in this context "healing" is used to connote a more holistic approach. You know, like the way "allopaths" only treat illness, and don't care about your wellness.

Also, "curing" is something done to a patient, while "healing" is something the body does to itself (for example, you don't cure a cut, it just heals), and it's that process that alties say they help along.

Oh, and also, you guys have it all wrong. Heal increases hit points, and Cure removes status conditions, but only "Disease" type conditions. Only paladins can fix those.

Steevl, level 60 shadow priest