Does thinking make it so? The placebo myth rears its ugly head again.

Blogging is a funny thing. Sometimes the coincidence involved is epic. For instance, as I do on many Mondays, yesterday I crossposted a modified and updated version of a post from a week ago from my not-so-super-secret other blog. This time around, it just so happened to be a post about what I like to refer to as the placebo narrative. As is my wont, I described in the usual ridiculous level of detail why that narrative is so popular among promoters of pseudoscientific medical treatments and, more importantly, why that narrative approaches black hole density bullshit. It’s something that various studies and publications that I encounter every so often require me to revisit, re-explain, and elaborate upon further based on new information. Common threads include having to point out that thinking doesn’t make it so, and that the placebo narrative as recounted by “integrative medicine” partisans has an uncomfortable resemblance to The Secret and its Law of Attraction. Integrative medicine advocates even twist epigenetics to imply that The Secret is real in medicine and that thinking makes it so.

Indeed, there’s a reason why I’ve referred to The Secret as, in essence, the central dogma of alternative medicine. Basically, as more and more rigorous clinical trials fail to find specific effects of the various alternative medicine quack modalities (but I repeat myself) that “integrative medicine” mavens want to “integrate” into real medicine, rather than abandon those methods as ineffective, as we do for drugs and other treatments that fail to produce specific effects greater than placebo, they move the goalposts and switch their rationale. Now, it doesn’t matter if an alternative medicine quack modality “works” or not because, like Rudie, it can’t fail because—voilà!—it always works through placebo effects.

But there’s a problem. Placebo effects depend upon the patient’s having an expectation of what a treatment will do, and there’s no way to convince a patient that an inert sugar pill (or whatever else is being used as a placebo) will do anything useful to relieve their symptoms without lying to the patient. That’s why, lately, it’s been very, very important to add to the placebo myth the myth of “placebo without deception.” Here’s where the coincidence comes in. Yesterday, on the very same day I reposted my article about placebo effects from last week, what to my wondering eyes should appear but the Grand Poobah of “placebos without deception,” Ted Kaptchuk, publishing an op-ed in the Los Angeles Times entitled “‘Honest placebos’ show medicine can work without any actual medicine.”

Groan.

At first, I was tempted just to Tweet a “compare and contrast” between Kaptchuk’s latest spew and my post from yesterday, but then I realized that learning requires repetition. The cliche goes that you tell them what you’re going to tell them, tell them, and then tell them what you told them. I doubt I need three posts this week on the placebo narrative, but Kaptchuk’s article suggested to me that delving a bit more into the “placebo without deception” myth couldn’t hurt and might help. So let’s dig in.

Kaptchuk, predictably, starts with the “placebo without deception” myth:

Placebo effects have a bad reputation in the medical world. Physicians are trained to dismiss them as misleading — as in, “it’s only a placebo effect,” or “it’s no different from a placebo effect.” Placebo is a label that marks a drug as ineffective and disqualifies research subjects who respond to “bogus” treatments.

But what if patients who take “honest placebos” — meaning they are told explicitly that they are swallowing sugar pills — can still experience relief from discomfort and disability? That’s been the result of a number of studies by my research group at Harvard Medical School and other teams around the world over the last few years. While these trials were relatively small and short in duration, they collectively challenge our greatest assumption about placebos: that they require deception in order to be effective.

No. They. Do. Not.

No. No. No. No.

Kaptchuk even contradicts himself in describing the experiments. Basically, his description of the experiments hints at the reason why they in fact demonstrate exactly the opposite of what he claims; namely, that placebo effects do require deception:

In our research group’s experiments, patients with illnesses such as irritable bowel syndrome, chronic low back pain, and episodic migraine attacks were randomly assigned to one of two groups: one got an honest placebo while the other was given no treatment. Participants were generally told that placebo effects are powerful in double-blind clinical trials (in which neither patients nor researchers know what the patient is getting), but that this study would examine whether placebos still work when patients know what they are getting. We also told them they didn’t have to believe it would work.

Many laughed and suggested we were nuts. But they agreed to try it out; most had been ill for years and were desperate for relief. The results upended the conventional wisdom. Many patients treated with an honest placebo felt significantly better. On average, irritable bowel patients reported 60% adequate relief, chronic low back sufferers had 30% improvement in both pain and disability, and migraine pain was 30% lower in two hours.

Notice how Kaptchuk characterizes what the patients were told: that placebo effects are “powerful” in double-blind randomized clinical trials. Not exactly. I’ve discussed all of those studies at one time or another. Here’s what he really told patients, first in the “open label placebo study” in patients with irritable bowel syndrome:

...patients were told that “placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.”

With recruitment fliers saying:

Participants were recruited from advertisements for “a novel mind-body management study of IBS” in newspapers and fliers and from referrals from healthcare professionals. During the telephone screening, potential enrollees were told that participants would receive “either placebo (inert) pills, which were like sugar pills which had been shown to have self-healing properties” or no-treatment.

Telling patients that placebos have powerful or significant “mind-body self-healing” properties is a bit different from saying vaguely that placebos are “powerful” in randomized clinical trials. Then, in the study looking at placebos versus the drug Maxalt for migraines, this is what subjects were told:

Our first goal is to understand why Maxalt makes you pain-free in one attack but not in another. Our second goal is to understand why placebo pills can also make you pain-free. Our third goal is to understand why Maxalt works differently when given in double-blind study vs. real-life experience when you take it at home.

I repeat for emphasis: “Our second goal is to understand why placebo pills can also make you pain-free.” Not to see if placebo pills can make you pain-free, or to understand why placebo pills might be able to make you pain-free or could possibly make you pain-free. “Can make you pain-free.” To be fair, this isn’t quite as blatant as the IBS study, in which subjects were told that placebos could produce “powerful mind-body effects.” It’s still “priming the pump,” though, rather blatantly.

Not as blatantly as Kaptchuk’s most recent study, though, looking at placebos for low back pain:

After informed consent, all participants were asked if they had heard of the “placebo effect” and explained in an approximately 15-minute a priori script, adopted from an earlier OLP study,18 the following “4 discussion points”: (1) the placebo effect can be powerful, (2) the body automatically can respond to taking placebo pills like Pavlov dogs who salivated when they heard a bell, (3) a positive attitude can be helpful but is not necessary, and (4) taking the pills faithfully for the 21 days is critical. All participants were also shown a video clip (1 minute 25 seconds) of a television news report, in which participants in an OLP trial of irritable bowel syndrome were interviewed (excerpted from: http://www.nbcnews.com/video/nightly-news/40787382#40787382).

I know. I know. I just used this one yesterday, but it’s worth repeating. Compare and contrast. Compare and contrast, my friends. And repetition, but hopefully not too much repetition.

Kaptchuk also exaggerates the level of symptom relief experienced. For example, here is how the results were described in the actual IBS paper:

Open-label placebo produced significantly higher mean (±SD) global improvement scores (IBS-GIS) at both 11-day midpoint (5.2±1.0 vs. 4.0±1.1, p< .001) and at 21-day endpoint (5.0±1.5 vs. 3.9±1.3, p = .002). Significant results were also observed at both time points for reduced symptom severity (IBS-SSS, p = .008 and p = .03) and adequate relief (IBS-AR, p = .02 and p = .03); and a trend favoring open-label placebo was observed for quality of life (IBS-QoL) at the 21-day endpoint (p = .08).

I find it rather interesting how differently Kaptchuk chose to frame his results in the actual manuscript compared to how he describes them in this op-ed (and in pretty much every interview in the lay press that I’ve seen where he mentions this study). One wonders whether saying that 60% of subjects taking placebos felt better, compared to 35% of those receiving regular care, sounds more convincing than citing improvement scores as unimpressive as the ones listed above. I very much wonder whether the improvements reported are even clinically significant. For instance, in the main result reported, those in the no-treatment arm reported an average IBS-GIS of 4 (no change), while those in the open-label placebo arm reported an average of 5 (slightly improved). How clinically relevant is this? I don’t know, but I suspect that such a small change skirts the borders of clinical relevance and might not even achieve it.
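
To put some numbers on that, here’s a quick back-of-the-envelope sketch in Python (mine, not anything from the paper) that simply restates the IBS-GIS means quoted above and the between-group difference they imply: roughly one point on a scale where 4 means “no change” and 5 means “slightly improved.”

```python
# Back-of-the-envelope restatement of the IBS-GIS means quoted above.
# These numbers come straight from the passage quoted from the paper;
# the code just makes the size of the between-group difference explicit.

ibs_gis = {
    "11-day midpoint": {"open_label_placebo": 5.2, "no_treatment": 4.0},
    "21-day endpoint": {"open_label_placebo": 5.0, "no_treatment": 3.9},
}

for timepoint, arms in ibs_gis.items():
    diff = arms["open_label_placebo"] - arms["no_treatment"]
    print(f"{timepoint}: open-label placebo {arms['open_label_placebo']:.1f} "
          f"vs. no treatment {arms['no_treatment']:.1f} "
          f"(between-group difference ~{diff:.1f} points)")
```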

As for the migraine study with Maxalt and placebo, let’s go to the paper as well, to look at something Kaptchuk tends not to mention: the second endpoint examined, whether or not the subject was pain-free after 2.5 hours:

Unlike the primary endpoint, the proportion of participants who were pain-free during the no-treatment condition (0.7%) was not statistically different from when participants took open-label placebo (5.7%). As with the primary endpoint, the proportion of participants pain-free after treatment was not statistically different between Maxalt treatment mislabeled as placebo (14.6%) and placebo treatment mislabeled as Maxalt (7.7%). The resulting therapeutic gain (that is, drug-placebo difference) was 8.8 percentage points under “placebo” labeling [odds ratio (OR), 2.80], 26.6 percentage points under “Maxalt or placebo” labeling (OR, 7.19), and 24.6 percentage points under “Maxalt” labeling (OR, 5.70).

As I noted at the time, the critical finding here is that Maxalt beat any sort of placebo effect, and not by a little bit, either. For all the Maxalt groups, the percentage of subjects who were pain free was 25.5% compared to 6.7% for all the placebo groups. That’s nearly a four-fold difference. Also note that the no treatment condition was not statistically different from the open-label placebo condition. The error bars were quite large, as well. Another problem with the study was that the authors made no effort to assess expectancy because they were afraid of causing patients to question the accuracy of the information provided on the envelopes. The lack of assessment of expectancy greatly decreases the utility of this study and the ability to generalize from it. Worse, no assessment of blinding was performed because the investigators were worried that this, too, would provoke suspicions in an in-study design. Quite frankly, I did not find this a convincing excuse.
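
If you want to check the arithmetic yourself, here’s a minimal sketch (again mine, not the authors’ analysis) that reproduces the “nearly four-fold” comparison from the pooled pain-free rates and the odds ratio the paper reports under “placebo” labeling. The tiny discrepancies versus the paper’s 8.8 percentage points and OR of 2.80 come from rounding of the published percentages.

```python
# Sketch of the arithmetic behind the migraine pain-free numbers quoted above.

def odds_ratio(p1: float, p2: float) -> float:
    """Odds ratio comparing two proportions (p1 vs. p2)."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Pooled pain-free rates across all labeling conditions (Orac's comparison)
maxalt_all, placebo_all = 0.255, 0.067
print(f"Fold difference: {maxalt_all / placebo_all:.1f}x")  # ~3.8x, i.e., nearly four-fold

# "Placebo" labeling: Maxalt mislabeled as placebo vs. open-label placebo
maxalt_as_placebo, open_label_placebo = 0.146, 0.057
gain = (maxalt_as_placebo - open_label_placebo) * 100
print(f"Therapeutic gain: {gain:.1f} percentage points")  # ~8.9 (paper: 8.8)
print(f"Odds ratio: {odds_ratio(maxalt_as_placebo, open_label_placebo):.2f}")  # ~2.83 (paper: 2.80)
```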

As for the third study, I didn’t discuss the magnitude of pain relief from “open-label placebo” as much as I should have in my original discussion of the study. I more or less took Kaptchuk’s description at face value, which was a failing on my part that I’m happy to remedy today. First, Kaptchuk used a composite scale that assessed pain intensity by asking participants to rate their pain on three standard Numeric Rating Scales, ranging from 0 (“no pain”) to 10 (“worst pain imaginable”), scoring maximum pain, minimum pain, and usual pain. The mean of the three measures was the primary pain outcome. If you dig into the actual tables, the results are less impressive. The changes in pain on each of the three measures used to construct the composite score ranged from 0.54 to 2.15 on a scale of 10. Even the authors concede that these changes are likely to be barely clinically significant, noting that a 30% reduction has been recommended as an indication of clinical significance and that open-label placebo just barely achieved that. And, of course, there’s still the sticky issue of having to lie to the patient.
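
To make concrete how that composite outcome and the 30% threshold work, here’s a minimal sketch with made-up patient ratings (hypothetical numbers for illustration, not data from the trial):

```python
# Illustration of the composite pain score (mean of three 0-10 NRS items)
# and the 30% reduction threshold described above. Patient ratings are made up.

def composite_pain(maximum: float, minimum: float, usual: float) -> float:
    """Mean of the three 0-10 Numeric Rating Scale items (max, min, usual pain)."""
    return (maximum + minimum + usual) / 3.0

def clinically_significant(baseline: float, follow_up: float, threshold: float = 0.30) -> bool:
    """True if the composite score dropped by at least `threshold` (e.g., 30%)."""
    return (baseline - follow_up) / baseline >= threshold

# Hypothetical patient: baseline vs. three-week follow-up ratings
baseline = composite_pain(maximum=7, minimum=3, usual=5)      # 5.0
follow_up = composite_pain(maximum=5.5, minimum=2, usual=3)   # 3.5

reduction = (baseline - follow_up) / baseline
print(f"Reduction: {reduction:.0%}, clinically significant: "
      f"{clinically_significant(baseline, follow_up)}")
```

With those made-up ratings the reduction lands right at 30%, which is about where the open-label placebo group ended up: just barely at the recommended threshold for clinical significance.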

Kaptchuk concludes with a distillation of the placebo narrative that contains the seeds of its own refutation:

Patients are open to safe self-healing methods such as honest placebos, according to survey research. But are doctors? Even if the evidence for honest placebos continues to grow, physicians may resist despite the obvious advantages: lower cost, lower risk, no side effects. Placebo treatment just goes against their years of training and reliance on medications. Patients likely will have to ask for placebo treatment and get their doctors on board.

That said, prescribing sugar pills is not the only way physicians can harness the power of self-healing. Placebo effects are most pronounced when patients interact with caring and empathetic doctors and nurses; when they feel skilled hands touch them; when they perform time-honored medical rituals and observe tools and symbols of healing; and when they are comforted with reassurance, support and hope.

See? It’s not the patients who are resistant to using placebos! It’s those hidebound, dogmatic doctors who believe that treatments should be science-based and who have the temerity to consider it unethical to lie to patients (or to grossly exaggerate or mischaracterize placebo effects). Now here’s the refutation. Yes! We know placebo effects can be enhanced by empathy and the “human touch.” In other words, good bedside manner matters. That means that we don’t have to lie to patients or exaggerate by calling placebo effects “powerful mind-body self-healing” or other such woo babble (again, like technobabble in Star Trek, only with woo). All we have to do is use empathy and the human touch in concert with real, honest-to-goodness treatments shown through science to be effective against whatever the patient has. There’s no need even for a little shading of the truth, much less lying to patients about sugar pills (or alternative medicine treatments).

Of course, Ted Kaptchuk and his acolytes will never, ever accept that solution, because doing so would require them to admit that the quackery they so badly want to “integrate” into science-based medicine is ineffective, which would basically eliminate the specialty of integrative medicine.

Am I right to infer that these are all subjectively experienced clinical features? Patients giving responses to 'please' researchers in the context of the study rather than actually feeling better would all get rolled up into the claimed 'placebo effect' in Kaptchuk's studies.

He has also, by his recruitment strategy seeded his trial with suggestible/compliant subjects who are prone to give confirmatory responses.

And with all those advantages, the placebo effect looks pretty puny.

By BadlyShavedMonkey (not verified) on 20 Dec 2016 #permalink

It is interesting that these researchers chose to look at GI symptoms, which are strongly influenced by emotional factors, even in patients with actual diseases. They also tend to be quite subjective. I suspect that if they had chosen to look at an objective end point in some other condition, perhaps one based upon a laboratory value, they would have had much more difficulty spinning their results.

Orac or his double should send a long letter to the editor of the LA Times in response. It would be interesting to see if it got published. I suspect that someone there has an agenda, so perhaps it would not.

By Michael Finder, MD (not verified) on 20 Dec 2016 #permalink

And yes, I misspelled my own name in the previous comment. Groan.

By Michael Finfer, MD (not verified) on 20 Dec 2016 #permalink

@ Orac
"There’s no way to convince a patient that an inert sugar pill (or whatever else is being used as a placebo) will do anything useful to relieve their symptoms without lying to the patient. "
- There's a way: the doctor must believe in the power of the sugar pill! What we need is to train physicians stupid enough to believe in the power of inert sugar pills. Maybe one day universities funded by the homeopathy industry will select their MD students according to this criterion.

By Daniel Corcos (not verified) on 20 Dec 2016 #permalink

Part of the problem is that during the consent process, patients hear what they want to hear. See PMID: 27174578. In Kaptchuk's trial, he overloaded the consent with "good news" about the power of treatment, and de-emphasized accurate description of the lack of content of his placebo. So, of course, many patients heard only that they would receive a pill that will make them better. What was missing was a test of patient comprehension. In essence, what he proved is that you can hide a complex true statement in a thicket of confounding and misleading statements.

Michael Finfer, MD @ 4, that smell clunker gets us all from time to time.
Oh! That's spell checker!
Dammit. ;)

@Orac, so a statistically insignificant number of pre-selected participants, selected for suggestibility and psychosomatic complaints, managed to prove a weak effect, barely distinguishable from the noise in this study, and hence that is "significant."
Yes?
One ponders what any peer review would say.
I'm thinking, "Bird cage liner" for the ultimate destination of the paper.

There are placebos and then there are woocebos.

By Lighthorse (not verified) on 20 Dec 2016 #permalink

There’s a way: the doctor must believe in the power of the sugar pill!

Ah yes, the old "it's not lying if you believe it" dodge. I'm not familiar with the laws in your jurisdiction, so maybe it would work there. Here in the US the typical phrasing of such laws is, "... knew, or should have known, ...." A medical doctor should know that a placebo will have no pharmaceutical effect, so just because he believes it will doesn't mean he is justified in making claims that it will.

To illustrate this distinction, consider the contrast between claiming in 1998 that vaccines might cause autism versus making the same claim today. In 1998 the Wakefield et al. Lancet study had just been published, and the fact that it was based on fraudulent data would not be discovered for several years. So in 1998 a reasonable person could have believed that there was evidence of vaccines causing autism. Now that Wakefield et al. has been thoroughly debunked (in addition to being based on fraudulent data, the finding was never replicated by any researcher not connected with Wakefield), it is no longer reasonable to believe that claim.

By Eric Lund (not verified) on 20 Dec 2016 #permalink

Placebo narrative and patents:

"System and method for reducing the placebo effect in controlled clinical trials," Fava et al., US Patent 8,219,419, July 10, 2012

http://www.google.ch/patents/US8219419

The inventors write, "It has been suggested that addressing the placebo response issue is one of the most important challenges facing the future of industry-sponsored psychopharmacologic drug development. "

By Michael J. Dochniak (not verified) on 20 Dec 2016 #permalink

@ Eric
"A medical doctor should know that a placebo will have no pharmaceutical effect". What about homeopathy? If a medical doctor thinks that homeopathy has an effect, what does the US law say? Is there any information that has been concealed to him as in the Wakefield case?

By Daniel Corcos (not verified) on 20 Dec 2016 #permalink

psychopharmacologic

It's difficult to know exactly what you're saying here. It's just that your history suggests that it's shit.

By Rich Woods (not verified) on 20 Dec 2016 #permalink

Daniel@12: Anybody who has had high school chemistry should know that Avogadro's number is finite, and therefore that homeopathy is bunk. I tend to let Hahnemann off the hook because in his day Avogadro's number wasn't known (Avogadro, who was a contemporary, did not know the number, only that it was finite). More recent homeopaths--especially the ones who came after quantum mechanics was shown to explain almost all of chemistry and a big chunk of physics--are not entitled to that charity.

Of course, IANAL, TINLA, and even if it were it would be worth at most what you paid for it.

By Eric Lund (not verified) on 20 Dec 2016 #permalink

@ Eric
Even if they are told that Avogadro's number is finite, they may think that molecules do not explain everything and since they believe there is an effect, it is not a problem for them. I am convinced that most of the doctors specialized in homeopathy really believe it works.

By Daniel Corcos (not verified) on 20 Dec 2016 #permalink

Rich - that goes to mood and behavior effects, whereas neuropsycho would be changes in neural cell function. Not that mjd actually has an understanding, as he's just cutting and pasting.

Way OT here, but a *very* common printer, which is extremely popular with hospital pharmacies and retail pharmacies has a recall on the power supply for the label printer.
Zebra printers manufactured between October 1, 2010 and December 31, 2011 may be affected.
https://www.zebra.com/us/en/power-supply-recall.html

Three fires have been reported, with one fire damaging the workspace and printer.

@ Daniel #15 Which really doesn't make it any better. Being self deluded is not a defense for selling overpriced water.

1) Selling overpriced: all the luxury industry is based on overpricing.
2) If an MD is self-deluded, why wouldn't a judge be deluded too?
3) The initial post was about the fact that the idiot is not lying.
Finally, as I said in RI already, if an MD believes in homeopathy, it is better if he does not prescribe active drugs.
The real problem is to give degrees to students lacking judgement.

By Daniel Corcos (not verified) on 20 Dec 2016 #permalink

BadlyShavedMonkey @1 & 2

IIRC, and I really cannot face rummaging back through Kaptchuk's papers at the moment (it's still dark here and I haven't finished my first coffee), they are all self-report...

Cue rant about use of self-report in any medical research without the back up of even a semi-objective measure...