I've frequently discussed the difference between what has come to be known as "evidence-based medicine" (EBM) and "science-based medicine" (SBM). Basically, SBM is EBM in which prior probability and plausibility of proposed medical and surgical therapies are considered along with clinical trial evidence. I don't plan on getting into that specific issue in detail right here. Rather, I bring it up because the best medicine is based on science and evidence. However, I've also pointed out that medicine, while it should be science-based, is not and can never be, strictly speaking, a science. (Look for a quack to quote mine that; it's coming.) The reason is that we in medicine must take into account other things besides just scientific evidence when we decide what treatments we recommend to our patients. That's why I found quite fascinating a recent survey about patient attitudes toward health care, which I learned about from the Health Affairs blog. The report is entitled Communicating with patients on health care evidence, which is described thusly on the blog:
But as we have learned in the years since, one person’s evidence-based guideline is another person’s cookbook. For some, a sound body of evidence is fundamental to sound medical decisions. After all, as Jack Wennberg and Dartmouth researchers have pointed out for decades, if the practice of medicine varies so widely from place to place in this country, everyone can’t be right. Yet for others, evidence connotes not just “cookie-cutter medicine,” it is only one step shy of a trip to the death panel. This heavy baggage influences the way evidence-based medicine is discussed from the doctor’s office to the clinic to Capitol Hill.
All of which is true, but incomplete. The issue is, of course, that there are a variety of nonscientific issues that impact medical care not only for each individual patient, whose values, other health history, and desires differ from those of every other patient, but for patients in general. Yes, the practice of medicine does vary far more than it should from region to region, but there is no way that there will ever be an ironclad standard that will reduce that variation to a very low level. This is all the more true given that for many medical conditions there are multiple science- and evidence-based treatments that are roughly equally effective and that these are often changing as new scientific evidence comes in. Moreover, these practices can change at different rates in different places depending on local circumstances. One example that I like to use is mastectomy rates for breast cancer. As hard as it is for those of us living in large urban areas to believe, there are huge swaths of this country where a radiation oncology facility could be 50 or more miles away. Doing breast-conserving therapy (i.e., lumpectomy) without an unacceptably high rate of recurrence in the region of the lumpectomy requires radiation therapy. If going to a radiation therapy facility every day for six or seven weeks (which is what typical radiation therapy regimens for breast cancer entail) is not feasible, then choosing a mastectomy becomes more likely. It's an entirely rational decision, given that survival will be the same.
None of this is to say that medicine shouldn't be standardized as much as possible around the treatment modalities that have the highest degree of evidence; rather, it's simply to say that there's more than just science involved in the choice of medical care. That's why we refer to the best medicine as "evidence-based" and "science-based" rather than trying to make medicine into a science. Science informs what we do and arguably should be the dominant determinant of what we do, but other issues can become quite important. That's why this survey is interesting; it tells us what patients believe to be important. The results, not surprisingly, echo an old Queen song that I happen to like, I Want It All, as evidenced by this figure included in the report:
This was accompanied by this conclusion:
The research revealed the importance of seating more specific campaigns about medical evidence within the context of a clinical encounter that takes into account three vital—and equally important—elements: the expertise of the provider, the medical evidence, and the patient’s preferences (goals and concerns). These three aspects—which we depict as three separate but interlocking circles that, when combined, result in an informed medical decision—were posited to be the best framework for raising awareness about the role and importance of medical evidence for future communication and patient-engagement strategies.
And:
Key themes that emerged from the interviews and focus groups included that people want to be involved in treatment decisions, want their options to be clearly communicated, and expect the truth—the whole truth—about their diagnoses and treatments.
All of which is reasonable on the surface, but how do the pieces fit together?
The authors of the study surveyed 1,068 adults who had seen at least one health care provider in the last 12 months; the majority (88%) had seen a physician, and most (68%) were satisfied with their health care provider. Drilling down right to the "meat" of the survey, this is how the respondents rated the importance of the three main legs of the consent "stool":
It's good to learn that patients consider medical evidence to be either important or very important and that they value it at least somewhat more than their own goals and expectations. Of course, what patients perceive as the "best evidence" and what doctors perceive as the best evidence are not necessarily the same thing. Sometimes, as I like to put it, the two might only be related by coincidence. This brings us to the second part of the study, namely what sorts of framing resonate the most with patients. The sorts of phrases that seemed to produce the most confidence in the people surveyed include (the percentage represents the level of confidence respondents expressed in each phrase):
- What is proven to work best (79%)
- The most up-to-date medical evidence, including information about the risks and benefits, about what works best (76%)
- Best practices in the medical field (75%)
- What medical science shows about each option’s benefits and risks (71%)
- What the research shows (68%)
- Guidelines developed by national medical experts about what works best (65%)
These are all useful phrases to keep in my back pocket when describing what scientific evidence supports as the standard of care. On the other hand, most physicians, particularly physicians practicing in academic settings, are wired to know and discuss the evidence with patients, at least in my experience. They probably already use variants of these phrases when describing evidence. What would actually be more useful to me as a physician is knowing what sorts of phrases turn patients off from science-based treatments. I can probably guess to some extent. For instance, I doubt that donning a paternalistic "doctor knows best" attitude would be well received by most patients, but when trying to persuade patients of what the evidence shows they should do, what would really help is knowing how to avoid inadvertently making them less receptive to my message.
Steve Novella makes another point about this study with respect to "complementary and alternative medicine" (CAM):
Patients, for example, frequently ask me what I think about acupuncture, a particular supplement, or some other “alternative” treatment for their condition. I tell them – without waffling or watering down my opinion. I don’t editorialize or express judgmentalism because that is inappropriate in a therapeutic context, but I tell them my understanding of the published scientific evidence and the plausibility of the treatment, and then give them my bottom-line recommendation. I do this as if I were talking about any treatment option. Even when patients are starting from a very different opinion about the treatment, they appreciate the fact that I have taken the time to look into the research and to communicate that to them.
This is exactly why the “shruggie” approach to unconventional treatments is counterproductive and a huge disservice to patients. Saying, “I don’t know” when asked about an implausible and ineffective treatment, or even giving a dismissive response, is not going to be convincing to patients.
Indeed, I do think that this study suggests that the best way to deal with such inquiries is to be completely frank in as nonjudgmental a way as possible. I admit that there are times when I have failed at this, but in my defense, in one such instance the patient was pushing me to tell her whether I thought that a certain "alternative" practitioner I've discussed on this blog before was a quack or not. Finally, I just blurted out that, yes, I think he is a quack. Not particularly "nonjudgmental," but it was what the patient demanded. Be that as it may, my approach is similar to Steve's in that I don't pull any punches, but I try not to use the "q" word or any term or phrase that comes across as dismissive. Being "insolently" dismissive (respectfully or not) is fine for a blog meant for the education and entertainment of both my readers and myself, but it has no place in one-on-one doctor-patient encounters. Like Steve, I find that most patients who ask actually appreciate that I know about such things and can summarize the evidence.
Of course, the problem is that most doctors don't know—and therefore can't summarize—the data for various CAM modalities. For example, in my experience most physicians don't even know what homeopathy actually is. Like most lay people, they tend to think it's just "herbal" or "natural" remedies and are often amazed when I explain to them the principles of homeopathy. The point is, if we are to be effective at communicating what is the best science-based therapy to our patients, we need to know not just the evidence for what does work but the evidence for what doesn't work. More importantly, we need to be able to communicate that evidence.
John Cook (who runs the climate science/climate denial-debunking website Skeptical Science) and Stephan Lewandowsky teamed up to write The Debunking Handbook which, while geared towards debunking climate myths, may be of general use when discussing medical treatments with patients, especially in the case of discussing sCAM.
Regarding CAM:
- Don't tell me that "there haven't been any studies" of a plant that's had ten clinical trials. Don't tell me that a plant that's had nine positive trials and one negative trial has been Proven Worthless. If I know that you've read less of the literature than I have on one issue, I won't believe your claims to expertise on other questionable points.
- Tell me, if that's your real belief, that if acupuncture provides more relief for some condition in direct competition with a drug, it's still Only a Placebo. But don't then tell me that it is therefore wrong for me to use it. If I know that you expect me to accept not only your facts but your values, I will not be able to take at face value your advice even on the merits of conventional treatments.
- Don't try to terrorize me out of doing something you don't like, or into doing something you do, with anecdotes or horror stories. I've seen loved ones severely harmed by fearmongering MDs, so I assume that an attempt to inspire unreasoning fear is motivated by the knowledge that reason isn't on the fearmonger's side.
I still feel like "doctor knows best". He or she has studied medicine, while I'm dependent on a whole bunch of websites with opposing views, many of which contain false information, some of which I might not recognise as false. So I entrust myself to the person who specialises in these things.
Yes, SBM can be miraculous, but there is a dark side, which ORAC seems to be in denial about.
http://www.thedailybeast.com/newsweek/2012/09/16/are-hospitals-less-saf…
You do realize that one large, well-designed, randomized clinical trial trumps nine crappy clinical trials, don't you? Apparently you don't. SBM is more than just numbers of clinical trials on a particular topic.
ken, that editorial shows that those who do real medicine actually research and evaluate their practices, and work to improve. I don't see anything like that out of "alt-med" world.
Re Dr. Oz - but I guess he's not qualified to evaluate any CAM practices.
Ken, you are not making any sense. Please try using full sentences.
I won't tell you that it is therefore wrong for you to use it, but I WILL tell you that it is wrong (and should constitute criminal fraud leading to serious jail time) to take money from people for providing acupuncture based on claims of specific effect.
Jane, you say that like it's rare for a new drug candidate to fail on the basis of a final critical clinical trial, after having successfully navigated previous trials--do you have any idea how many new drug candidates fail in Phase III clinical trials every year?
What sense does it make to run a series of increasingly rigorous trials, find that the potential drug or therapy was not in fact safe or effective, and then ignore that evidence in favor of "Wow! It doesn't work after all. But let's still convince people to take it. After all, we used to think it worked before we had the latest results"?
JGC: Why not? It works for sCAMmers.
ken, I'm sure you can provide some examples of Dr Oz critically evaluating some sCAM procedure, with ample reference to high-quality studies in the published literature.
After all, that's what it would take for him to be qualified. I cannot recall seeing any defender of Dr Oz yet provide such evidence of his competence in evaluating sCAM claims.
Nice Venn Diagram of real medicine.
Odd that "ken" would link to an article appearing on the Daily Beast blog and Newsweek Magazine, which mentions N.Y. State Department of Health tracking patient outcomes for major heart surgeries and percutaneous coronary intervention procedures.
This site is available for cardiac patients to check each hospital in the State and every physician who performs cardiac surgery or cardiac interventional procedures.
This is the report I used to check out the hospitals where my husband had several Percutaneous Coronary Interventions (separate left and right atrial cardiac ablations and one coronary artery stent placement). I was also able to check out the two interventional cardiac physicians who did these PCI procedures:
http://www.health.ny.gov/statistics/diseases/cardiovascular/docs/pci_20…
So ken, wouldn't you agree that access to hospitals' and individual cardiac surgeons' "outcome" statistics for cardiac surgeries and PCIs is a good thing?
O/T Has anyone seen the reports of fungal meningitis/deaths in several states?
http://www.nytimes.com/2012/10/05/health/news-analysis-a-question-of-ov…
This is interesting...
"The nation’s growing outbreak of meningitis, linked to spinal injections for back pain, was a calamity waiting to happen — the result of a lightly regulated type of drug production that had a troubled past colliding with a popular treatment used by millions of Americans a year.
The outbreak, with 5 people dead and 30 ill in six states, is thought to have been caused by a steroid drug contaminated by a fungus. The steroid solution was not made by a major drug company, but was concocted by a pharmacy in Framingham, Mass., called the New England Compounding Center. Compounding pharmacies make their own drug products, which are not approved by the Food and Drug Administration."
I would not like to see the end of compounding pharmacies -- I needed one to make up the appropriate doses of (ironically) a fungicide for my cat. They are very useful in feline medicine, since there are many drugs not commercially produced in cat-appropriate formulations (there are more options for dogs, and even then, of course, far fewer than for humans). However, they certainly do need to be held to strict standards. The FDA needs more funding for oversight!
@ Vasha: I would hope that your veterinarian would not use any injectable medicines that are concocted in compounding pharmacies. This is what happens when pain centers buy unregulated medicines from compounding pharmacies and inject them into humans (or pets):
http://nj1015.com/is-recalled-epidural-steroid-injection-to-blame-for-m…
(Anecdotal) There was a period of time when my son was taken off Dilantin Infatabs (a crushable 50 mg brand-name formulation of diphenylhydantoin). Additional medications were used to try to control his intractable-to-treatment seizures. His pediatric neurologist, who had two dogs with seizure disorders, was happy to get my stash of Dilantin Infatabs for her pets. :-)
People like Renate are why medical quackery keeps happening.
There's something about medical school that seems to wash any sense of scientific knowledge right out of many of the students. I've met scores and scores of MDs who don't seem to remember basic scientific principles: Correlation is not causation; exacerbate is not the same as cause; "In my practice I see..." is not scientific proof, even if your buddy sees it in his practice, too.
When you try to point out "That's not how science works," you're treated like either a fool or an uncooperative patient. Or both.
The best MDs I've ever known all have scientific graduate degrees as well as the MD degree.
Lilady: *Raises hand* Two cases in my state today.