Sadly (with regard to vacation) and not-so-sadly (with regard to the events of last week), it's time to dive headlong back into the "real world" at work, starting with clinic today. It also means it's time to get back to my favorite hobby (blogging) in a much more regular way, although I will say that a relatively prolonged break from the blog was good, and my traffic only suffered mildly for it. I may have to do it more often, if only to keep things fresher.
One of the tasks that confronted me this weekend as I got ready to face a full week back at work was trying to catch up on all the literature that I had been ignoring for nearly three weeks. Fortunately, PubMed now lets you create customized RSS feeds for any search you care to define. Unfortunately, thanks to that, approximately a thousand results were waiting for me when I finally got up the nerve to fire up NetNewsWire and let it download the results of all the feeds that I had set up. Faced with such abundance, instead of my usual practice of skimming the titles and the abstracts, I ended up just skimming the titles and marking anything that didn't immediately catch and hold my attention as "read." There was, however, one article that did catch and hold my attention almost immediately: an article by McAlister et al. in PLoS Medicine entitled "How Evidence-Based Are the Recommendations in Evidence-Based Guidelines?"
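(An aside before getting to the article: that sort of title triage is easy to script. Below is a minimal sketch using Python's feedparser library. The feed URL is a placeholder, since PubMed generates the real one for each saved search, and the keyword list is purely illustrative, not how I actually filter.)

```python
# Minimal sketch: skim the titles of a PubMed RSS feed and flag the keepers.
# FEED_URL is a placeholder; PubMed generates the real URL for a saved search.
import feedparser

FEED_URL = "https://pubmed.example/rss/my-saved-search"  # hypothetical URL
KEYWORDS = ("evidence-based", "guideline", "randomized")  # illustrative filter

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    title = entry.get("title", "")
    if any(keyword.lower() in title.lower() for keyword in KEYWORDS):
        print("KEEP:", title)
    else:
        print("skip:", title)  # the scripted equivalent of marking it "read"
```

But back to that article.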
Excellent! Something right up my alley and the perfect topic to start out my first full week back.
One of the consistent themes of this blog ever since it began as an itty-bitty ego trip on Blogspot back in late 2004 has been to emphasize evidence-based medicine and to advocate applying the same standards of evidence to alternative medicine that we expect of "conventional" or "evidence-based" medicine. Indeed, I've tended to resent the entire term "alternative medicine," mainly because it's becoming more and more clear to me that "alternative medicine" is nothing more than a politically correct term used by its advocates to describe a large body of non-evidence-based medicine and frame this description in such a way as to downplay the lack of evidence for efficacy of these treatments or, in some cases, the evidence against their efficacy. These days, my preference has been simply to refer to evidence-based versus non-evidence-based medicine or, alternatively, "scientific" versus "non-scientific" medicine. (How's that for "re-framing" the term "alternative medicine"?) Any "alternative" (a.k.a. "non-evidence-based") medicine that becomes "evidence-based" ceases to be "alternative" and is added to the armamentarium of scientific medicine, just as many medicines derived from plants and herbs have been over the last couple of centuries.
Admittedly, however, it's often unclear exactly what is meant by "evidence-based" medicine. After all, anecdotes are "evidence," albeit a very weak form of evidence prone to a number of confounding biases, and critics invoking postmodernism have even gone so far as to refer to an insistence on evidence-based medicine as inherently "fascist" in nature. That's why, before delving into the article, I'll review one commonly used definition of "evidence-based medicine":
- Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.
- Evidence-based medicine is neither old-hat nor impossible to practice.
- Evidence-based medicine is not "cook-book" medicine.
One common misconception about evidence-based medicine is that randomized clinical trials are the only form of evidence that matters. Sometimes this takes the form of a misrepresentation used as a straw man argument by advocates of alternative medicine to claim that a large proportion of "scientific medicine" is not evidence-based because there are no RCTs supporting it. (You'll often hear the claim bandied about on altie websites that only "10%" of conventional medicine is supported by RCTs). It's true that RCTs are considered the strongest form of evidence (i.e., the "gold standard"), which is as it should be, but they are not the only acceptable form of evidence:
Evidence-based medicine is not restricted to randomised trials and meta-analyses. It involves tracking down the best external evidence with which to answer our clinical questions. To find out about the accuracy of a diagnostic test, we need to find proper cross-sectional studies of patients clinically suspected of harbouring the relevant disorder, not a randomised trial. For a question about prognosis, we need proper follow-up studies of patients assembled at a uniform, early point in the clinical course of their disease. And sometimes the evidence we need will come from the basic sciences such as genetics or immunology. It is when asking questions about therapy that we should try to avoid the non-experimental approaches, since these routinely lead to false-positive conclusions about efficacy. Because the randomised trial, and especially the systematic review of several randomised trials, is so much more likely to inform us and so much less likely to mislead us, it has become the "gold standard" for judging whether a treatment does more good than harm. However, some questions about therapy do not require randomised trials (successful interventions for otherwise fatal conditions) or cannot wait for the trials to be conducted. And if no randomised trial has been carried out for our patient's predicament, we follow the trail to the next best external evidence and work from there.
The problem with the vast majority of alternative medicine is that there is either (1) no solid clinical evidence that it works (most alternative medicine); (2) no plausible scientific reason to think that it should work (e.g., homeopathy, Reiki therapy); (3) worst of all, evidence that it doesn't work (e.g., laetrile, chelation therapy for cardiovascular disease); or (4) combinations of #1, 2, and 3 (homeopathy, chelation therapy, high dose vitamin C, etc.). Given this backdrop, it is of interest to know how many of the recommendations in "evidence-based" practice guidelines are actually truly evidence-based. Part of the need for such studies is that what is meant by "evidence-based" may be interpreted differently by those who write such guidelines and those who use them. McAlister et al. note:
There has been a rapid expansion in the number of clinical practice guidelines over the past decade and, as a result, clinicians are frequently faced with several guidelines for treatment of the same condition. Unfortunately, recommendations may differ between guidelines, leaving the clinician with a decision to make about which guideline to follow. While it is easy to say that one should follow only those guidelines that are "evidence based," very few guideline developers declare their documents to be non-evidence based, and there is ambiguity about what "evidence based" really means in the context of guidelines. The term may be interpreted differently depending on who is referring to the guideline--the developer, who creates the guidelines, or the clinician, who uses them. To their developers, "evidence-based guidelines" are defined as those that incorporate a systematic search for evidence, explicitly evaluate the quality of that evidence, and then espouse recommendations based on the best available evidence, even when that evidence is not high quality. However, to clinicians, "evidence based" is frequently misinterpreted as meaning that the recommendations are based solely on high-quality evidence (i.e., randomized clinical trials [RCTs]).
The authors decided to evaluate the most recent guidelines for the management of diabetes mellitus, dyslipidemia, and hypertension, focusing on the evidence base for cardiovascular risk management interventions only and leaving out the evidence for other recommendations in those guidelines. They rated the quality of the evidence behind each recommendation using the CHEP scheme (an online tool based on the work of the GRADE working group and the AGREE instrument), according to a scheme that they outlined here.
Hearteningly, the authors found that two-thirds of the cardiovascular risk management therapeutic recommendations were based on evidence from RCTs. Less hearteningly, they estimated that only one-half of these RCT-based recommendations were of "high quality." What this means is not that the studies used to support these recommendations were of low quality. Rather, half of them were downgraded from "high" quality, when analyzed in the context of the recommendations they support, because of applicability. The most frequent problem was that an RCT designed to answer a particular question was being generalized to justify a recommendation in a different clinical scenario. Alternatively, results of studies that were carried out in narrowly defined populations were being used to support recommendations in a more general population. In other words, although a high quality RCT can be the basis for several recommendations, its evidence will not support all of the recommendations derived from it equally well, and sometimes developers of guidelines are forced to extrapolate beyond what the RCTs say simply because there is no better evidence available.
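To make the arithmetic explicit, the two rounded figures above multiply out to roughly one-third (the multiplication is my back-of-the-envelope step, not a calculation quoted from the paper):

```python
# Rough arithmetic from the two findings above (rounded, illustrative only).
rct_based = 2 / 3               # share of recommendations supported by RCT evidence
high_quality_given_rct = 1 / 2  # share of those rated "high quality" in context

high_quality_overall = rct_based * high_quality_given_rct
print(f"~{high_quality_overall:.0%} rest on high-quality, directly applicable RCT evidence")
# -> ~33%
```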
The bottom line is that, in this one area at least, if you believe this study, only around one-third of the recommendations in a set of consensus guidelines about how to manage cardiovascular risk in three different conditions are based on "high quality" RCT evidence. The study does have significant limitations, such as the small number of guidelines examined and its restriction to therapeutic interventions, but it's probably not all that far from the truth, at least as far as it goes. I'm surprised that I haven't yet seen this study trumpeted on websites like NewsTarget or Whale.to as "evidence" that "evidence-based" medicine is not really evidence-based, the implication being that so-called "alternative" medicine does just as well.
Does this study demonstrate that "evidence-based" medicine is nothing of the sort? Of course not! What it really demonstrates is the difficulty inherent in practicing evidence-based medicine and coming up with truly evidence-based guidelines. There are just too many holes in our clinical evidence ever to obviate the need for "filling in the gaps," and, because of the rapid advances (or, to the more cynical, "changes") in treatments over the years, there will always be such gaps. In the case of evidence-based guidelines, this "filling in" of the gaps usually involves extrapolating studies further than would normally be done, to encompass wider study populations or clinical scenarios that might not exactly match the questions asked in the RCTs from which the evidence supporting the guideline was drawn. It also involves considering the impact of comorbid conditions, something that is generally not handled well in these sorts of guidelines. In the case of individual clinicians, this is where clinical experience and acumen come into play, specifically the ability to synthesize the evidence from "evidence-based medicine" and apply it to the treatment of individual patients.
Now for the second question: Does this study give comfort to advocates of alternative medicine that their modalities are just as "evidence-based" as those of scientific medicine?
In a word, no.
Alties may take some pleasure in pointing a finger at this study and claiming that "only" 33% of "evidence-based" treatments are in fact evidence-based, but that would be a misrepresentation. In fact, all of the treatments in evidence-based guidelines are based on some evidence; it's just that only 33% of them are based on high quality evidence that is precisely applicable to the clinical situation for which the recommendations are being made. Given the biological variability of disease and between patients, that's actually not too bad. Certainly we can and should work to do better, and the authors' suggestion that future guidelines explicitly grade the quality of evidence behind each individual recommendation is a good one. Indeed, I'm starting to see just that in the literature more and more often; for example, in the recent American Cancer Society guidelines for the use of MRI in screening for breast cancer, each recommendation was graded as "based on high quality evidence," "based on expert consensus opinion," or "insufficient evidence for or against."
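(If one wanted to make that kind of per-recommendation grading machine-readable, it might look something like the hypothetical sketch below. The three grade labels are taken from the ACS scheme just described; the recommendations themselves are placeholders, not actual ACS recommendations.)

```python
# Hypothetical sketch: attach an explicit evidence grade to each recommendation.
from enum import Enum

class EvidenceGrade(Enum):
    HIGH_QUALITY = "based on high quality evidence"
    EXPERT_CONSENSUS = "based on expert consensus opinion"
    INSUFFICIENT = "insufficient evidence for or against"

guideline = {
    "Recommendation A (placeholder)": EvidenceGrade.HIGH_QUALITY,
    "Recommendation B (placeholder)": EvidenceGrade.EXPERT_CONSENSUS,
    "Recommendation C (placeholder)": EvidenceGrade.INSUFFICIENT,
}

# With grades attached, reporting how much of a guideline rests on
# high-quality evidence becomes trivial:
high = sum(1 for g in guideline.values() if g is EvidenceGrade.HIGH_QUALITY)
print(f"{high}/{len(guideline)} recommendations graded as high quality")
```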
Contrast this to "alternative" medicine. Indeed, "evidence-based" guidelines in alternative medicine have, until recently, been nonexistent. Those that I've come across that claim to be "evidence-based" guidelines for the use of alternative medicine have virtually all been of extremely poor quality, with little or no attempt to base them on high quality RCT evidence. And, of course, the thought of even applying the term "evidence-based" guidelines to woo such as homeopathy, Reiki, various "detoxification" regimens, or applied kinesiology gives me an intense urge to break out in hysterical laughter, an urge that I sometimes cannot resist given how often I come across articles by alties claiming that the RCT is not an appropriate study design to identify therapeutic effects due to alternative medical modalities. It may be a legitimate criticism of scientific medicine that not enough of its recommendations are based on high quality RCT evidence and that we could do better, but in the alternative medicine world, there is an antiscientific attitude that downplays the value of evidence-based medicine in general and the RCT in particular.
If you think about it, that attitude makes perfect sense, given how poor the evidence base is behind most alternative medicine modalities (particularly the ones not based on herbs from which actual pharmaceutical medicines might be derived), where the larger and higher quality RCTs almost invariably show much smaller or, more commonly, nonexistent effects compared to the usual panoply of poorer quality studies touted by alternative medicine aficionados as evidence for their woo. In other words, I'll put the evidence base and results supporting "evidence-based medicine" up against the evidence base supporting "alternative" medicine any day of the week.
I think it would be great if altie groups came out with practice guidelines for things like homeopathy.
That would give us a written document that would (presumably) indicate when to use which homeopathic "remedy." That could be the basis for an RCT, which I'm confident would prove no effect beyond placebo.
Unfortunately, I think most homeopaths are smart enough not to pin themselves down thus.
Hospital administrators set the air conditioning to cold so that, one, doctors won't sweat, get uncomfortable, or start stinking, and, two, so that the rest of the staff cannot take naps.
Meanwhile the patients in beds get one thin blanket and wake up cold. If the patient causes enough trouble, enough blankets will be brought so that the patient will get warm enough to sleep. An hour later, the extra blankets will be gone.
Evidence based? Yes, indeed. Test for yourself. Run one ward at 65 F and one ward at 85 F and see which doctors complain.
Word of the day: iatrogenic
The good news is that once the alties get a hold of this, I'll see an uptick in my blog traffic; the bad news is that they'll use it as you stated, to try to justify their lack of evidence. I've never been impressed with the argument of "yours sucks, so mine can suck too."
This is an interesting topic. Case in point: recently there was news that tonsil removal can "cure" ADHD. It sounded mainstream and plausible, therefore credible, and that's how I believe most people took it. But I don't buy it. ADHD is a very subjective diagnosis that can lend itself to all sorts of biases. I'd like to see something a lot more rigorous before I can believe it.
In the UK, we have NICE, which provides central state-funded practice guidelines based on a review of the evidence. To a layperson, they look both rigorous and transparent. I would be quite interested to know whether the same is true from a professional point of view, and whether US and UK guidelines for the same medical issue come out with substantially the same answer, and if not why not.
Some doctors in the UK (I think a minority) seem to feel that NICE is a threat to their individual professional judgment. The reason may be that, because of the nature of the UK healthcare system, NICE guidelines are used to decide whether certain interventions should be funded or not.
Orac - just a minor point before the alties do some quote-mining: you might want to change your wording re: chelation therapy "...or (4) combinations of #1, 2, and 3 (homeopathy, chelation therapy, high dose vitamin C, etc.). " and clarify the use of chelation therapy for treatment of diseases other than documented heavy metal poisoning.
Otherwise you might have some of your trolls, like JBJr all over you, and that would be a horrible way to start your week back.
Welcome back, by the way. While I enjoy the reposts, I like the new ones, too.
The alties will make up any argument that suits them, but having quotes to mine does make it easier for them.
I'm certainly not an altie, hanging out with scientists as much as I do...
But I have to say this points up a reason so many people (especially women) I know turn to altie woo -- the stuff their conventional doctors have been feeding them as "reasonable" is unsupported and based on their own biased crap. Your first visit to a gynecologist is enough to turn a thinking woman into an altie ("Did you know that cramps are the psychological result of you resisting your womanhood?") and if not, the first trip through labor and delivery will do it. ("Oh, episiotomies are completely necessary in most of my first time mothers. They prevent painful tears -- by being painful slashes! Lie here, flat on your back, and push against gravity. I'm just going for my scalpel and forceps....")
Compared to that, a bit of woo about waving a flower essence through purified water or wearing magnets on your ankles to balance your chakras seems almost *reasonable*. At least none of them are applying knives to your tiddly bits.
Sara, if your OB/Gyn told you that about cramps, find another one. I've never heard that (though I go to a family doc).
Don't know about episiotomy, because I did rip from stem to stern. If the doc sliced, it wasn't enough... the kid's really big head created a very large rip; if memory serves, a "level 4" rip. The doctor spent lots of time sewing me back up. To add further insult to injury, the tear became infected.
Fortunately, the next two kids took after my family... not the big Dutch head of their dad. They both slipped out very smoothly, and fast (the last one was: arrive, barely get set down on the birthing bed (which is not flat; it has an adjustable incline), pop out baby!).
Young women, if you are looking for a potential mate to have children with... Check for head size!