One of the consistent themes of this blog since the very beginning has been that alternative medicine treatments, before being accepted, should be subject to the same standards of evidence as “conventional” medical therapies. When advocates of evidence-based medicine (EBM) like myself say this, we are frequently treated to excuses from advocates of alternative medicine as to why their favored treatments cannot be subjected to the scientific method in the same way that medicine has increasingly applied it to its own treatments over the last few decades, in the process weeding out treatments of dubious efficacy that had been handed down through the years as dogma. To me, these excuses usually sound like a case of special pleading, and I have rarely found them even mildly convincing. However, I try to remain open-minded, and periodically I see a variant of the same sorts of arguments that catches my interest. So it is with this press release:
Science Daily — Evidence-based medicine (EBM) is widely accepted among researchers as the “gold standard” for scientific approaches. Over the years, EBM has both supported and denied the value of allopathic medicine practices, while having less association with complementary and alternative medicine (CAM) practices. Since most CAM practices are complex and focus on healing rather than cure, the question arises as to whether EBM principles are sufficient for making clinical decisions about CAM. That is the focus of this special issue of Integrative Cancer Therapies by SAGE Publications.
“While evidence-based medicine’s emphasis on randomized controlled trials has many benefits, researchers and clinicians have found that this focus may be too limited for complex systems such as complementary and alternative medicine (CAM), and other approaches to healing,” said Wayne B. Jonas, MD, president and chief executive officer of the Samueli Institute and this special issue’s guest editor.
The December special issue of Integrative Cancer Therapies presents articles that explore EBM and alternative strategies to EBM for evaluating CAM and in particular, options for conducting CAM research on cancer. This issue discusses whether clinical research on CAM using randomized placebo-controlled trial designs is the best strategy for making evidence-based decisions in clinical practice, and describes strategies that use “whole systems” and “integrated evaluation models” as potential new standards for research on CAM for cancer.
Sounds like more excuse-making to me. However, just to be fair, I went to the issue and downloaded two of the key articles on this theme: an editorial, ““Top of the Hierarchy” Evidence for Integrative Medicine: What Are the Best Strategies?” by Keith I. Block and Wayne B. Jonas, and “Evidence Summaries and Synthesis: Necessary But Insufficient Approach for Determining Clinical Practice of Integrated Medicine?” by Ian D. Coulter.
I’m not impressed.
Let’s look at Dr. Coulter’s article first, because it’s the more substantive of the two.
The heart of evidence-based practice is in fact to be found in the use of evidence gained from systematic reviews or more correctly in the synthesis of evidence from systematic reviews. But just as studies vary in the quality of the design so do systematic reviews, and it is therefore necessary for those wishing to make clinical decisions based on this evidence to evaluate the evidence summaries and synthesis themselves. This article examines the criteria available for evaluating the quality of the evidence summary and synthesis. It provides a set of questions for doing this: who did the review; what was the objective of the review; how was the review done? Together these questions allow us to determine the trustworthiness of the review. However, that by itself is insufficient for making clinical decisions. The article suggests that this occurs because the very studies that improve the quality of reviews, that is, the randomized controlled trials, deal with efficacy and not effectiveness. Because they tend to be conducted under ideal conditions, they seldom provide the type of information needed to make a decision vis-à-vis an individual patient. The article suggests that observation studies provide much better information in this regard. The challenge here, however, is to develop standards for judging quality observation studies. In conclusion, systematic reviews and syntheses of evidence are a necessary but an insufficient method for making clinical decisions.
In actuality, Dr. Coulter’s article is a reasonably good summary of the strengths and weaknesses of randomized trials, meta-analyses, and EBM. It’s also a pretty good summary of how one should look at a systematic review of the literature or a meta-analysis to determine how valid it may or may not be, and he concedes some of the precepts of EBM, such as the one that states that accumulated “experience” and “expert opinion” are often the weakest form of evidence. (Never mind that most evidence cited for the efficacy of alternative medicine consists of nothing but “accumulated experience.”) Where he veers into questionable assertions is when he starts harping on “effectiveness” versus “efficacy.” To me it’s a bit of an artificial distinction. Efficacy simply means that a treatment has been shown to work in a randomized clinical trial (RCT); effectiveness means that a treatment works in normal clinical practice. He argues that different treatments with the same or similar efficacy can show wildly different effectiveness when put into clinical practice. Fair enough, and he cites two six-year-old New England Journal of Medicine articles (1, 2) that concluded that well-designed observational studies do not overestimate treatment effects later identified in RCTs. However, I would point out that treatments that fail to show efficacy in RCTs are highly unlikely to do any better in “real life” practice, and the vast majority of alternative therapies fall into that category. Thus, bringing up the point that observational studies may be an alternative to RCTs is irrelevant to whether EBM is up to the task of evaluating the efficacy or effectiveness of alternative medicine. Besides, as an accompanying editorial pointed out:
Any systematic review of evidence on a therapeutic topic needs to take into account the quality of the evidence. Any study, whether randomized or observational, may have flaws in design or analysis. Both types of study may have quirks in methods of recruiting patients, in the clinical setting, or in the delivery of the treatment that can cast doubt on the generalizability of the results. And for some studies, the reports are never published at all, especially if the findings are negative. These problems of heterogeneity and publication bias are relevant to all comparisons of evidence from randomized, controlled studies and observational studies. However, all observational studies have one crucial deficiency: the design is not an experimental one. Each patient’s treatment is deliberately chosen rather than randomly assigned, so there is an unavoidable risk of selection bias and of systematic differences in outcomes that are not due to the treatment itself. Although in data analysis one can adjust for identifiable differences, it is impossible to be certain that such adjustments are adequate or whether one has documented all the relevant characteristics of the patients. Only randomized treatment assignment can provide a reliably unbiased estimate of treatment effects.
Thus, even though observational studies can in some cases provide a fairly reliable estimate of the treatment effect, that does not mean that they are inherently as good as RCTs. Indeed, the vast majority of them are not, particularly in the realm of alternative medicine studies. In any case, it’s very clear that Dr. Coulter’s article is simply a much more sophisticated version of the same whine that we’ve been hearing from advocates of alternative medicine for years over RCTs that fail to find any efficacy. I’ll give Dr. Coulter credit for at least putting together a semi-reasonable case that observational studies can be of considerable value in assessing the effectiveness of a therapy. It just doesn’t demonstrate that observational studies that are claimed to be evidence for the effectiveness of this or that alternative therapy rise to the level necessary to be considered potentially as valid as an RCT. At least Dr. Coulter dismissed the article I castigated a few months ago about “microfascism” as seeming “a little extreme.”
The editorial, on the other hand, is simply a rehash of the common whine we hear from alties. The statement that perhaps best epitomizes where they’re coming from is this, a complaint about ranking RCTs at the top of the hierarchy of clinical scientific evidence for the efficacy of any therapy and how that means that such trials are weighted much higher than case series or anecdotal reports:
It is clear that this approach to EBM misses emergent properties of complex systems when those system components lose their power if separated into parts. Healing approaches and many complex integrative systems of medicine present exactly such complex systems. Healing is defined as the process of recovery, repair, and reintegration that persons and biological systems continually invoke to establish and maintain homeostasis and function.7 These processes are the most powerful force we have for recovery from illness and the maintenance of well-being and so the most important for clinical practice. Healing models do not postulate specific or direct causal links to disease, because they target inherent adaptogenic responses and assume that redundancy and multiple pathways are an inherent characteristic of every system. We know from placebo and behavioral medicine research, for example, that manipulation of the social and cultural context, practitioner-patient-family communication strategies, the physical environment, and simpler verbal and nonverbal information can markedly change outcomes, often to a much greater extent than specific drugs or surgical treatments, especially in chronic disease. In fact, integrative treatment systems may explicitly use such behavioral research in adapting to the needs of their patients, for example, determination of whether patients are “monitors” or “blunters” before delivering news of a cancer diagnosis or recurrence, so that the level of detail can be tailored to the preferred style of handling distressing information.
All of this is well and good as far as it goes (albeit laden with altie jargon), but there are two problems. First, no convincing evidence is presented that EBM can’t deal with “emergent properties of complex systems” (whatever the authors mean by that) or that the “interaction” of such alternative medical systems is necessary to their efficacy. Second, this discussion assumes that somehow alternative practitioners are better at dealing with “complex” systems of disease than “conventional” doctors. This is an implication that I find unconvincing at best and risible at worst. Alternative medicine practitioners claim to be better at looking at the “whole patient” and “complex chronic diseases,” but I have yet to see any evidence that they are. After all, “conventional” internists might routinely deal with a patient with diabetes, hypertension, heart disease, and Parkinson’s, for example. They have to be aware of the interactions of all these diseases and how changing the therapy for one of them might impact the state of the others. Can an “alternative” practitioner do better? I doubt it. Scientific medicine may have its shortcomings, and indeed it sometimes gets lost in the complexities of managing multiple chronic conditions, losing the “forest for the trees,” so to speak. Even so, when proposing the use of remedies not supported by any compelling evidence of efficacy, the onus is on the alternative medical practitioner to show that he or she can do as well or better, and that is where alternative medicine routinely fails.
Of course, it’s pretty clear that the real agenda behind these complaints is not a rational discussion of the strengths and shortcomings of EBM. After all, that sort of discussion goes on all the time in the medical literature, in medical and surgical conferences, and among academic and nonacademic physicians. It’s a lively debate, unlike the “monolithic” stance that purveyors of alternative medicine so often represent us “conventional doctors” as having. In fact, the real purpose of this discussion seems to be to make the claim that alternative medicine is just too darned complicated to be studied by scientific methods, and that’s why scientific methodology and RCTs routinely fail to identify efficacy better than that of a placebo in the vast majority of studies of alternative medical therapies. Those pesky RCTs, you see, miss the “nuances” of the “complexity” of “emergent systems” or the “heterogeneity” of the patient population (each member of which, we are told by alties, must have his or her treatment “individualized”) with which these treatments are claimed to be dealing. Such complaints will exaggerate, for example, a known shortcoming of RCTs (that the patient populations are relatively homogeneous, so that the treatment effect may diminish when the treatment is applied to the population at large) and use it to claim that RCTs are inadequate to study alternative medicine modalities.
This is, as Douglas Adams would put it, a load of dingo’s kidneys. There is nothing inherent in either the diseases and conditions that alternative medicine claims to treat or in the alternative therapies themselves that is any less amenable to examination by scientific medicine and RCTs than any “conventional” therapy, unless one invokes a treatment that conflicts with huge swaths of chemistry, physiology, physics, and pharmacology: a treatment like homeopathy, for example. In such a case, the burden of proof still remains on the person arguing that such therapies should be taken seriously, given their extreme scientific implausibility. Indeed, in the case of any alternative therapy, the burden of proof to justify using a standard of evidence different from that accepted for conventional therapies lies on the person advocating such special pleading. Simply pointing out the problems with EBM and RCTs, problems that are well-appreciated in the medical community, is not enough.
In the end, much of this edition of Integrative Cancer Therapies boils down to nothing more than a case of special pleading, albeit in one case particularly sophisticated special pleading that is not immediately evident as such. No convincing argument has been made that “alternative” or “complementary” therapies should be judged under scientific and clinical standards any different from those used to evaluate new conventional therapies.
And they shouldn’t be.
Advocates of alternative medicine should also consider that what’s good for the goose is good for the gander. If alternative medicine advocates are going to claim that their treatments should be evaluated under a “systems” or “wholistic” approach, then they had better be careful what they ask for. They might get it, and, if conventional medicine were also subject to looser standards of evidence, questionable “conventional” therapies would appear to have better efficacy as well, to the detriment of science and, more importantly, the patients who rely on it for improved therapies. That the methodology and standards of EBM have deficiencies is not a reason to water them down or alter them to accommodate alternative medicine; rather, they should be improved and made more rigorous in order to apply them to all medicine.