Back when advocates of “alternative” medicine were busily trying to legitimize their quackery by first renaming it “complementary and alternative medicine” (CAM), long before CAM “evolved” into “integrative medicine,” they really believed that if their favorite woo were studied scientifically it would be shown to be efficacious. Thus was born the Office of Alternative Medicine in the NIH, which later became the National Center for Complementary and Alternative Medicine (NCCAM) and more recently the National Center for Complementary and Integrative Health (NCCIH), thus utterly purging itself of the word “alternative.” Both reflecting and influenced by NCCIH, academic medical centers have, over the last 25 years, increasingly and feverishly studied quackery, often funded by NCCIH, and “integrated” it into real medicine. Thus was born the phenomenon of quackademic medicine, the infiltration of quackery into medical schools and academic medical centers.

There’s just one problem. As actual double-blind, placebo-controlled randomized clinical trials (RCTs) of acupuncture, reiki, homeopathy, and the various other forms of quackery advocated by those associated with NCCIH and by investigators in bastions of quackademic medicine were done, it rapidly became very clear that these treatments produced effects indistinguishable from placebo controls; in other words, they didn’t work. Normally in medicine, treatments that fail randomized clinical trials are abandoned. True, the process of abandoning such treatments is messier and longer than we might like to admit, but eventually they are abandoned.

When it comes to integrating quackery into medicine, however, because it was never science that motivated quackademics to begin with, when RCTs come up negative for acupuncture, homeopathy, reiki, or whatever, they cannot accept that their woo doesn’t work and simply walk away. So instead, for example, they make excuses. They do “pragmatic studies” without placebos, even though such studies are only appropriate after a treatment has been validated as efficacious in RCTs. Alternatively, they basically concede that their treatment doesn’t do anything better than placebo but then start arguing that they “work” by “harnessing the power of placebo” effects to “induce natural healing,” misrepresenting placebo effects as the power to use one’s mind to heal oneself. Never mind that this argument never flies with conventional science-based medicine and represents a flagrant double standard in which CAM is held to a much lower standard of evidence. Again, this is about belief, not evidence. Never mind that lying to patients to invoke placebo effects is the resurrection of medical paternalism.

The need for deception is, of course, a major problem with arguing that physicians should be able to use placebos, telling patients that a sugar pill is an effective medicine. Ethically, this has always been a dubious thing for a doctor to do, but in the era of shared decision-making between doctors and their patients that has emerged from the paternalistic “doctor knows best” era of 50 or 60 years ago, it’s even more problematic than before. To combat this objection, there has emerged the narrative of “placebo without deception.” The foremost advocate of this narrative is our old buddy Ted Kaptchuk, who first promoted it in a big way six years ago with an open-label study of placebo pills plus standard of care versus standard of care alone for irritable bowel syndrome. I discussed this study when it came out.

Unfortunately, Kaptchuk is back in the news saying virtually identical things:

Conventional medical wisdom has long held that placebo effects depend on patients’ belief they are getting pharmacologically active medication. A paper published in the journal Pain is the first to demonstrate that patients who knowingly took a placebo in conjunction with traditional treatment for lower back pain saw more improvement than those given traditional treatment alone.

“These findings turn our understanding of the placebo effect on its head,” said joint senior author Ted Kaptchuk, director of the Program for Placebo Studies and the Therapeutic Encounter at Beth Israel Deaconess Medical Center and an associate professor of medicine at Harvard Medical School. “This new research demonstrates that the placebo effect is not necessarily elicited by patients’ conscious expectation that they are getting an active medicine, as long thought. Taking a pill in the context of a patient-clinician relationship — even if you know it’s a placebo — is a ritual that changes symptoms and probably activates regions of the brain that modulate symptoms.”

I was half tempted to direct you to my discussion of Kaptchuk’s first “placebo without deception” study and leave it at that, because this study suffers from virtually the same flaw as the last one, but that’s not why you come to this blog. You come here because you like your Insolence Insolent and detailed.

One thing I’ve learned when evaluating these studies is that you really, really have to look long and hard at the methodology, because when you do you will almost always find that there is deception involved. For example, as in this new trial, Kaptchuk designed his IBS trial so that the patients knew they were getting placebos, even going so far as to label the pill bottles “Placebo.” Back then, he and his team observed a difference in reported symptoms between those taking the placebo pills plus usual care and those receiving usual care alone and proclaimed that they had successfully invoked placebo effects without deception. There was just one problem, and it was in the script doctors used when discussing placebo effects with patients:

Patients who gave informed consent and fulfilled the inclusion and exclusion criteria were randomized into two groups: 1) placebo pill twice daily or 2) no-treatment. Before randomization and during the screening, the placebo pills were truthfully described as inert or inactive pills, like sugar pills, without any medication in it. Additionally, patients were told that “placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.”

And the study was advertised thusly:

Participants were recruited from advertisements for “a novel mind-body management study of IBS” in newspapers and fliers and from referrals from healthcare professionals. During the telephone screening, potential enrollees were told that participants would receive “either placebo (inert) pills, which were like sugar pills which had been shown to have self-healing properties” or no-treatment.

So basically, right off the bat the investigators recruited patients who were interested in “mind-body” effects and then told them that placebo effects had been shown in rigorous clinical testing to produce powerful “mind-body self-healing processes.” There was the deception, because even the most generous and sympathetic characterization of the rigorous research on placebo effects would not justify such a description. As I said at the time, not only did Kaptchuk et al deceive their subjects to trigger placebo effects, whether or not they realized or would admit that that’s what they did, but they might very well have specifically attracted patients more prone to believing in the power of “mind-body” interactions. Yes, patients were informed that they were receiving a placebo, but that knowledge was tainted by what the investigators told them about what the placebo pills could do.

So what about this study? This time around Kaptchuk is not the first author; Cláudia Carvalho is. He’s also not the senior listed author. Irving Kirsch is. Be that as it may, this study has Kaptchuk’s fingerprints all over it. Entitled Open-label placebo treatment in chronic low back pain: a randomized controlled trial, it is basically that. The verbiage is very similar:

Administrating fake pills to harness placebo effects poses an ethical conundrum for physicians in clinical practice due to the widespread belief that deception is necessary for placebo pills to work (eg, pretending sugar pills are drugs or, more commonly, giving genuine medications that have no known effect on the condition).29 However, 4 studies have directly tested the effect of an open-label placebo (OLP) prescription, and all indicated that patients reported benefits after taking pills presented honestly as placebos. Three were small pilot studies.19,25,28 The fourth was a controlled trial in irritable bowel syndrome and showed significant, clinically meaningful benefits over no-treatment controls.17

The received wisdom is that clinical administration of a placebo requires deception (or double-blind conditions) to be effective. How is it that a placebo treatment is able to produce effects even when the participants know that the pill is inert?

This is the spin. Even though that controlled trial did indeed invoke deception to promote placebo effects by greatly exaggerating what placebo effects were, Carvalho et al blithely ignore that and simply declare the study to be strong evidence that one can invoke placebo effects without deception. They even move on to assume that the point has been demonstrated and ask how such a thing could be true. Then they go on to do the same thing that Kaptchuk did six years ago.

From the methods section:

Participants were recruited from advertisements for “a novel mind–body clinical study of chronic low back pain” in flyers, posters, Facebook posts, magazine advertising, and referrals from health care professionals.


After informed consent, all participants were asked if they had heard of the “placebo effect” and explained in an approximately 15-minute a priori script, adopted from an earlier OLP study,18 the following “4 discussion points”: (1) the placebo effect can be powerful, (2) the body automatically can respond to taking placebo pills like Pavlov dogs who salivated when they heard a bell, (3) a positive attitude can be helpful but is not necessary, and (4) taking the pills faithfully for the 21 days is critical. All participants were also shown a video clip (1 minute 25 seconds) of a television news report, in which participants in an OLP trial of irritable bowel syndrome were interviewed (excerpted from:

So basically, they used the same talking points as in the previous study, plus clips of happy study participants, to prime the patients to expect that placebo effects can be “powerful.” Again, the people patients tend to believe (doctors, nurses, and other health care professionals) told them that sugar pills could invoke powerful healing effects. In other words, this study is just like the IBS study, only with back pain.

Its design was fairly simple. There were two groups: open-label placebo (OLP) plus treatment as usual (TAU) versus TAU alone. Participants were eligible if they were over 18 years old and had persistent back pain of more than three months’ duration. Exclusion criteria included use of opioid pain medications within the prior six months or a history of refusing to take oral medication. Other exclusion criteria included pain due to cancer, fractures, or infections; prior back surgery; disk degeneration due to trauma or aging; conditions that make treatment difficult (e.g., paralysis or psychosis); and other conditions thought to interfere with interpretation, such as fibromyalgia or rheumatoid arthritis. Then there were the pretty standard exclusions: pregnancy, breastfeeding, surgery within the last 30 days, and the like. So basically what the investigators were left with were patients with chronic low back pain, almost certainly musculoskeletal in nature, with no major anatomic abnormalities from trauma, prior surgery, or cancer, who had not taken opioids in the last six months. In other words, this is mild chronic back pain, a condition very likely to be prone to placebo effects, which is probably why this group was chosen. Overall, 97 subjects were screened, of whom 83 were randomized: 42 to the TAU group and 41 to the TAU + OLP group. There were four dropouts in the TAU group and three in the TAU + OLP group, leaving 38 in each group, but an intent-to-treat analysis was used, so that all 83 randomized participants were analyzed. Outcomes were measured at baseline and after 11 and 21 days using questionnaires measuring total pain score and the Roland–Morris Disability Questionnaire.

One thing bothered me looking at Table I. (The paper is open access, so you can look for yourself if you’re curious.) There are no statistics. By the “eyeball” test, the two groups look pretty comparable in all the baseline characteristics measured, but I like to see statistics. I’m funny that way. Be that as it may, the results of this study were utterly predictable, given its design:

Compared to TAU, OLP elicited greater pain reduction on each of the three 0- to 10-point Numeric Rating Scales and on the 0- to 10-point composite pain scale (P < 0.001), with moderate to large effect sizes. Pain reduction on the composite Numeric Rating Scales was 1.5 (95% confidence interval: 1.0 to 2.0) in the OLP group and 0.2 (−0.3 to 0.8) in the TAU group. Open-label placebo treatment also reduced disability compared to TAU (P < 0.001), with a large effect size. Improvement in disability scores was 2.9 (1.7 to 4.0) in the OLP group and 0.0 (−1.1 to 1.2) in the TAU group. After being switched to OLP, the TAU group showed significant reductions in both pain (1.5, 0.8 to 2.3) and disability (3.4, 2.2 to 4.5).


A reduction in pain of 27.9% has been found to correspond to clinical ratings of “much improved” and a 30% reduction has been recommended as an indication of clinical significance.7,9 There was a clinically significant 30% reduction in both usual and maximum pain in the placebo group compared to reductions of 9% and 16% in usual and maximum pain, respectively, in the continued usual treatment group. Open-label placebo reduced minimum pain by 16% compared to an increase in pain of 25% with TAU. There was also a 29% reduction in pain-related disability in the placebo group compared to 0.02% in the TAU arm.
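For what it’s worth, the means and 95% confidence intervals quoted above are enough for a rough sanity check of the between-group comparison on the composite pain scale. Here’s a minimal back-of-the-envelope sketch, assuming the quoted intervals are symmetric, normal-approximation 95% CIs (an assumption on my part; the authors may have computed them differently):

```python
import math

# Composite pain reduction (0- to 10-point scale) with 95% CIs, as quoted above.
olp_mean, olp_lo, olp_hi = 1.5, 1.0, 2.0    # open-label placebo + TAU
tau_mean, tau_lo, tau_hi = 0.2, -0.3, 0.8   # treatment as usual alone

def se_from_ci(lo, hi, z95=1.96):
    """Back out an approximate standard error from a symmetric 95% CI."""
    return (hi - lo) / (2 * z95)

# Standard error of the difference between two independent group means,
# then an approximate two-sample z statistic.
se_diff = math.hypot(se_from_ci(olp_lo, olp_hi), se_from_ci(tau_lo, tau_hi))
z = (olp_mean - tau_mean) / se_diff
print(f"difference = {olp_mean - tau_mean:.1f} points, z ~ {z:.1f}")
```

A z of roughly 3.4 is indeed consistent with the reported P < 0.001. But the issue was never whether the groups differed statistically; it’s why they differed.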

So, basically, both groups improved, but the OLP group improved more, just as one would expect from placebo effects. Yet, none of this stopped Carvalho et al from concluding:

Our data suggest that harnessing placebo effects without deception is possible in the context of a plausible rationale. More research on this possibility is warranted in cLBP and other conditions defined by self-appraisal.

As is frequently the case when investigators, no doubt excited about their results, are interviewed by the press, they went beyond these claims in a non-peer-reviewed source. You saw Kaptchuk regurgitate pretty much the same nonsense he did six years ago with the IBS study. To that he added more, and Carvalho chimed in:

“It’s the benefit of being immersed in treatment: interacting with a physician or nurse, taking pills, all the rituals and symbols of our healthcare system,” Kaptchuk said. “The body responds to that.”

“Our findings demonstrate the placebo effect can be elicited without deception,” said lead author, Claudia Carvalho, PhD, of ISPA. “Patients were interested in what would happen and enjoyed this novel approach to their pain. They felt empowered.” Kaptchuk speculates that other conditions with symptoms and complaints that are based on self-observation (like other kinds of pain, fatigue, depression, common digestive or urinary symptoms) may also be modulated by open-label treatment.

“You’re never going to shrink a tumor or unclog an artery with placebo intervention,” he said. “It’s not a cure-all, but it makes people feel better, for sure. Our lab is saying you can’t throw the placebo into the trash can. It has clinical meaning, it’s statistically significant, and it relieves patients. It’s essential to what medicine means.”

“Taking placebo pills to relieve symptoms without a warm and empathic relationship with a health-care provider probably would not work,” noted Carvalho.

One more time: No, Mr. Kaptchuk and Dr. Carvalho. Your results do not show that the placebo effect can be elicited without deception. They do not. You can keep saying that, keep spinning your results that way, but the exact same problem applies to this study as to the IBS study in 2010. You had to hype up what placebo effects were and tell participants, without justification, that they produced “powerful” mind-body effects, priming them with advertisements touting the same thing. The “placebo without deception” narrative was not supported by evidence in 2010, and this new study doesn’t support the narrative either.

Finally, we already know that interacting with a kind physician or nurse makes people think they feel better. That’s something that can be done honestly, without deception. Using placebos can’t.


  1. #1 Christine Rose
    Ann Arbor
    October 18, 2016

    I see another deception, which CAM advocates rarely talk about.
    People lied to the doctors because they felt they “should have” had better results and wanted to make all those nice people happy.
    Frequently they even lie to themselves. People who are frustrated and in pain will do that.

  2. #2 tgobbi
    October 18, 2016

    Christine writes: “I see another deception, which CAM advocates rarely talk about.
    People lied to the doctors because they felt they “should have” had better results and wanted to make all those nice people happy.”

    Isn’t that what’s known as confirmation bias?

  3. #3 Todd W.
    October 18, 2016

    Now, I may be wrong about this, but I see a lot of confidence intervals that include 1 (e.g., the Numeric Rating Scale comparison). Doesn’t that suggest that there was no difference between groups? Also, what’s with listing the CI low-high for the OLP group and high-low for the TAU group?

    Finally, as with last time, you’re right that they actually showed that you can elicit placebo effects with deception, which we all already know. Why do they find it so difficult to not use deception in these studies?

  4. #4 Todd W.
    October 18, 2016

    Oh, and while we’re on the subject of Kaptchuk and his team’s penchant for deceiving study patients, let’s not forget the Wechsler et al. study of acupuncture vs. albuterol for asthma study, where subjects were told they might get one of the following treatments: albuterol, a placebo inhaler, “real” acupuncture, sham acupuncture. The only problem is that “real” acupuncture was never going to be given to any study subjects, and there was no indication that any of the subjects were ever told of this deception.

  5. #5 Christine Rose
    October 18, 2016


    “Isn’t that what’s known as confirmation bias?”

    I’d say no, it isn’t. Confirmation bias is the tendency to attribute the real improvements to the treatment and the real deteriorations to bad luck or missing a pill. This is a tendency to willfully overestimate the results, whether consciously or not.

    Those may not be textbook definitions, but I don’t think the two are the same thing.

  6. #6 Helianthus
    October 18, 2016

    @ Christine, tgobbi

    “Isn’t that what’s known as confirmation bias?”

    I think that what Christine described is more akin to “being agreeable” (voluntary mis-spelling)
    i.e. saying “yes” to the nice person in front of you because anything else would be unpolite/antagonistic.
    Saying “yes I feel better” may also seem the fastest way to get out of the room and back in control of your life.

  7. #7 FB
    October 18, 2016

    I think the fact that double-blind RCTs of acupuncture, reiki, etc. can make it past an Ethics Review Board is kind of an admission that these treatments have no real effect outside of placebo.

  8. #8 Peebs
    October 18, 2016

    Okay lads and lasses, I’m a bit lost on one part of this. How can a trial on Reiki be Double Blinded?

    Surely the practitioner will be aware that it’s either bollocks; or genuinely believes it. Which makes it a Single Blind.

    Or have I got this completely arse upwards?

  9. #9 Panacea
    October 18, 2016

    @Todd #4: I’m curious; WHY wouldn’t they use real acupuncture in their study? Were they afraid it would fail?

  10. #10 JustaTech
    October 18, 2016

    tgobbi @2: No, it’s not confirmation bias, it’s something else (which is of course escaping me at the moment). It’s a real issue in survey design, particularly when you have an in-person interviewer. The study subject will (intentionally or not) tell the interviewer what they think the interviewer wants to hear (or what is socially acceptable).

    It’s really hard to work around, which is why you might have to use a phone or paper survey for the kind of stuff people don’t want to admit in person (how much do you drink, do you hit your kids, how many sexual partners have you had).

    Socialization bias, maybe?

  11. #11 Robert L Bell
    October 19, 2016

    One obvious lesson of the whole NCCIH experience is that it’s important to stick to the Science when determining which treatments are safe and effective, and that the use of political muscle to ram approval through does not generally lead to good outcomes.

  12. #12 Todd W.
    October 19, 2016


    I’d have to go back and check, but I don’t think they explained that anywhere in the protocol.

  13. #13 Amber Brown Skylar
    United States
    October 19, 2016

    Wow. I was so distracted by all the “quackery” language that I got bored trying to extract the point of the article. I am passionate about science, many fields of science and I don’t project my ignorance, fear, or judgment on those that use alternative methods for healing. That’s because I am all grown up now, and my mind is free and open to – possibility, curiosity and imagination. Poorly written article and I lost all respect for what you are trying to say. Cocky and arrogant.

    • #14 Orac
      October 19, 2016

      So let’s see. I used the word “quack,” “quackademic,” or “quackery” a few times in the first three paragraphs. After that, I didn’t use the term once and soberly discussed the study. So ask yourself this: If I were to remove the first three paragraphs, would this post have made you think or change your mind? I rather suspect that the answer is no, and you just latched on to the language as an excuse to reject it.

      Alternatively, how about this? Try ignoring the first three paragraphs and then telling me what, in the rest of the post, I got wrong and why. I’m sure you can base your arguments on science and evidence, right?

  14. #15 Dangerous Bacon
    October 19, 2016

    Ironic that someone purportedly concerned about tone so readily engages in name-calling.

    Too bad. I was interested in hearing what Amber had to say about alternative methods for healing.

  15. #16 Joel
    October 19, 2016

    I found it very disappointing to read the obvious bias expressed in this narrow-minded piece. I have to admit that there is some ‘quackery’ associated with so-called ‘complimentary medicine’, especially in some corners of TCM and other traditional approaches.

  16. #17 Joel
    October 19, 2016

    *Continued from above*

    However, to lump a manual therapy like acupuncture in with reiki and the like is a mistake. The sensations and therapeutic effect from acupuncture is profound, and comparable to that of the rarely criticized approaches of massage, chiropractic and physiotherapy. To say that the insertion of a fine, sterile needle into appropriately sensitive areas carries no more than illegitimized placebo, or ‘quackery’ as you so eagerly put it, only highlights your own preconceived notions of acupuncture. In actuality, there are distinctly measurable physiological, neurological and endocrine effects that are consistent with current neuroscience and pain science coming out of Australia, notably from the NOI group and affiliates.

    If you were to pay attention to the details, you would understand that placebo cannot apply when studying any manual therapy like acupuncture, massage, chiropractic, or PT, because it is impossible to pretend to stick a needle, massage, adjust, etc., let alone convince the patient they are receiving sham, or placebo, approaches.

    I understand the need to unblind the patient and practitioner in growing our understanding of what the limitations and benefits of placebo are, but to use this as a podium to deride ‘quackery’ is far beside the point.

    You may be correct in pointing out biases in the above studies, but you miss the forest for the trees.

    Sad, narrow minded, biased, and overall negative stance on an essential practice.

  17. #19 herr doktor bimler
    October 19, 2016

    so-called. ‘complimentary medicine’

    You’re looking good! Your liver is in amazing health for a man of your habits! Your kidneys are the greatest! That will be $50, please.

  18. #20 herr doktor bimler
    October 19, 2016

    Amber Brown Skylar
    Wow. I was so distracted by all the “quackery” language that I got bored trying to extract the point of the article.

    I sympathise with Amber’s distractability. With me it’s squirrels. And shiny objects.

  19. #21 Denice Walter
    October 19, 2016

    re ” Poorly written article”


  20. #22 Dangerous Bacon
    October 19, 2016

    “If you were to pay attention to the details, you would understand that placebo cannot apply when studying any manual therapy like acupuncture, massage, chiropractic, or PT, because it is impossible to pretend to stick an needle, massage, adjust, etc, let alone convince the patient they are receiving sham, or placebo approaches.”


    Placebo-controlled trials of acupuncture have been done, and have served to demonstrate that perceptions of improvement occur whether or not the needle punctures the skin (or is inserted in locations that have nothing to do with sites that supposedly improve the flow of “qi”).

    Placebo intervention is also possible with manual techniques like chiropractic.

  21. #23 Narad
    October 19, 2016

    The sensations and therapeutic effect from acupuncture is profound [sic]

    Do go on about these sensations.


  23. #25 Leigh Jackson
    October 20, 2016

    All hail the awesome miraculous power of mind/faith. Belief can cure. Or if not cure, make one feel better. Not actually be better but feel as if one is better. And that’s an amazing grace in itself, Kaptchuk thinks. It’s okay for doctors to tell their patients that this pill is a placebo, it’s powerless in itself, but if you put your faith in it – and in me – then you will feel better. Your faith will heal you. Trust me, it’s been scientifically proven.
    Only it hasn’t been. It’s just smoke and mirrors dressed up in white coats. It’s duplicitous hocus-pocus. It’s a lie.

  24. #26 Magpie
    October 20, 2016

    Additionally, patients were told that “placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.”

    How did anyone think that was ok to tell patients? What the hell?

  25. #27 Ed Dwulet
    October 21, 2016

    ” … when RCTs come up negative for acupuncture, homeopathy, reiki, or whatever, they cannot accept that their woo doesn’t work and simply walk away. So instead, for example, they make excuses … ”

    It all sounds very familiar … for example, in the above statement substitute PSA screening or mammography for the woo and the “they” would be a whole lot of unethical real doctors with M.D.s after their names.

    It’s not about science … and it’s definitely not about “do no harm” … it’s about money.

  26. #28 Narad
    October 21, 2016

    for example in the above statement substitute PSA screening or mammography for the woo

    Do go back to the thread you started in. Randomly interjecting the PSA monomania because you’re not getting enough attention is thoroughly irritating.

  27. #29 Daniel Corcos
    October 22, 2016

    @ Ed Dwulet
    RCTs came up positive for mammography.

  28. #30 Ed Dwulet
    October 22, 2016

    Daniel @#29

    You could have gone all the way and said I was completely wrong by claiming there were positive RCT’s for PSA screening too.

    My response would be: barely “positive” … and had they all been performed BEFOREHAND — neither would or could ever be rationally used to justify implementation of widespread screening because the barely positive came at the cost of tremendous harm.

    The RCT’s were all performed long ‘after the fact” — not before — by a “system” with a financial interest in “proving” they were right all along — and to justify continuing the status quo.

    So every few years we get to read another “Study Says Mammograms Saves Lives” headline … OK, you’re right … after the statistical numbers games are played to generate an abstract artifact called “significance” — meanwhile lots of real lives are being ruined.


    I just read a news item that said someone was finally organizing a RCT for DCIS — about 40 years late, wouldn’t you say? A famous epidemiologist once advised “Randomize the FIRST patient.” I doubt that they’ll be able to recruit any patients from the USA after the 40 years of breast scare mongering that’s gone on here.

  29. #31 Daniel Corcos
    October 23, 2016

    @ Ed Dwulet
    Tabar et al. (Lancet, 1985) found a 31% reduction in breast cancer mortality, 7 years after the beginning of the trial. Nationwide screening began later in most countries.

  30. #32 Ed Dwulet
    October 24, 2016

    Daniel @#31

    Wow! 31% Sounds impressive! It isn’t! 31% of what?
    And at what cost in harm? Not to mention that the 31% completely disappears when the stats game is played another way.

    “Why cancer screening has never been shown to ‘save lives’”

  31. #33 Daniel Corcos
    October 24, 2016

    @ Ed Dwulet
    31% of breast cancer mortality, as I said, and after 7 years of screening. Lumpectomy causes harm, but not death. And the benefit in breast cancer mortality vanishes in longer studies, because cancer incidence increases.
    The BMJ paper does not answer the question for breast cancer.

  32. #34 Eric Levine
    October 25, 2016

    The New York Times reported on the study in today’s paper and seems to have swallowed the conclusion hook, line, and sinker. No critical analysis in it.

  33. #35 Richard Morgan
    December 12, 2016

    ““It’s the benefit of being immersed in treatment: interacting with a physician or nurse, taking pills, all the rituals and symbols of our healthcare system,” Kaptchuk said. “The body responds to that.”
    Well, of course. That has already been well-documented. So why are there no studies where the “rituals and symbols” are also removed? A triple or quadruple blind RCT? Nobody in the role of patient or practitioner, with the scientists remaining deeply hidden beneath several layers of protocol. A candid camera kind of thing. “Move along please. No test going on here. Nothing to see.”
    “Surprise, surprise, Mr IBS. You’ve been wondering why your pooping is less painful? You’ve been secretly administered a sugar pill over the last 21 days and a hidden camera has been following you in the rest room…”
    I conducted an as yet unpublished study in 2008: “VooDoo, suggestibility and the placebo effect: real arsenic kills 100%, whereas only 14% of subjects who were told they had been administered arsenic after swallowing sugar pills actually died within two months.”
    Telling the patient that they were receiving sugar pills when being administered arsenic killed with the same frequency as those who were told it was arsenic AND the control group who were unaware of receiving a fatal dose.
    Our conclusion was that real arsenic is significantly more effective in killing people than VooDoo, suggestion or a hate-filled relationship.

  34. #36 Richard Morgan
    December 14, 2016

    “Finally, we already know that interacting with a kind physician or nurse makes people think they feel better.”
    This sentence would be more accurate with two edits:
    “Finally, we already know that interacting with a (*) physician or nurse makes people (**) feel better.”
    I demonstrate this in my book, “Help! OK.”

