Neuroskeptic savages the "placebo gene"

I posted last week on a paper purporting to identify a genetic variant influencing the placebo response. The main message of my post was that given the terrible history of small candidate gene association studies, a paper describing an association with a sample size of just 25 individuals should be simply ignored - and certainly not described in the popular science press as "a milestone".
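To get a feel for just how underpowered a study of that size is, here's a rough back-of-the-envelope simulation. The numbers are purely illustrative choices of my own (an allele frequency of 0.3 and a variant explaining 5% of the variance in the outcome, which would be a generous effect for a single common variant); nothing here comes from the paper itself:

```python
# Rough power simulation for a candidate gene study with n = 25.
# Assumes an additive SNP (allele frequency 0.3) explaining ~5% of the
# variance in a continuous outcome -- illustrative numbers, not taken
# from the Furmark et al. paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, maf, var_explained, n_sims = 25, 0.3, 0.05, 10_000

hits = 0
for _ in range(n_sims):
    genotype = rng.binomial(2, maf, size=n)           # 0/1/2 allele counts
    g_std = (genotype - genotype.mean()) / (genotype.std() + 1e-12)
    beta = np.sqrt(var_explained)
    outcome = beta * g_std + rng.normal(0, np.sqrt(1 - var_explained), size=n)
    r, p = stats.pearsonr(genotype, outcome)
    hits += p < 0.05

print(f"Power at alpha = 0.05: {hits / n_sims:.2f}")  # roughly 0.2 in my runs
```

In other words, even granting the variant a real and fairly hefty effect, a sample of 25 would miss it most of the time - and when such a small study does report a "significant" hit, chance and bias are at least as plausible an explanation as biology.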

Now Neuroskeptic has a detailed critique of the paper up in which he argues that the problems in the study go much deeper than inadequate sample size - in fact, the study wasn't measuring the placebo effect at all:


So, calling the change from baseline to 8 weeks a "placebo response", and calling the people who got better "placebo responders", is misleading (at least it misled every commentator on this study so far). The only way to measure the true placebo effect is to compare placebo-treated people with people who get no treatment at all. This wasn't done in this study. It rarely is.

(For what it's worth, I plead guilty to being among the misled).
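To make Neuroskeptic's point concrete, here's a toy simulation with made-up numbers (again, nothing from the study): the change from baseline in a placebo arm mixes the true placebo effect with whatever improvement would have happened anyway, and only a no-treatment arm lets you separate the two.

```python
# Toy illustration: baseline-to-endpoint change in a placebo arm overstates
# the placebo effect, because symptoms also improve on their own over time.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
natural_improvement = 5.0    # symptoms ease over 8 weeks regardless of treatment
true_placebo_effect = 2.0    # the quantity we'd actually like to measure

baseline = rng.normal(50, 10, size=n)
no_treatment = baseline - natural_improvement + rng.normal(0, 5, size=n)
placebo_arm = (baseline - natural_improvement - true_placebo_effect
               + rng.normal(0, 5, size=n))

print("Mean change, placebo arm:  ", round((baseline - placebo_arm).mean(), 1))
print("Mean change, no treatment: ", round((baseline - no_treatment).mean(), 1))
# The placebo-arm change (~7) minus the no-treatment change (~5) recovers the
# true placebo effect (~2); the placebo-arm change alone does not.
```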

Neuroskeptic also has concerns about the likely post hoc nature of the study, which greatly increase the probability of false positive findings. I couldn't agree more, and indeed raised the same issue in the comments to my post:

I'll grant you that it's possible that the association is genuine, but here's what I see as the default explanation. We know that the authors took these samples from two somewhat larger studies looking at genotype, PET and clinical response to drugs in patients with seasonal affective disorder [should have been social anxiety disorder] performed back in 2003-2005. They presumably tested a whole bunch of different hypotheses: whether genotype affects PET activity in a whole bunch of different areas of the brain, whether genotype affects response to drugs, whether PET activity correlates with drug response, whether males differ from females, young from old, etc. With each of these hypotheses the probability that one of them would provide a statistically significant result purely by chance increased. In this case, the correlation between one particular genetic polymorphism and the placebo response was the one that came up (through a combination of chance, error and bias), and that's the association that got published. Hey presto: headlines, glory, and groupies for the authors.

Post-hoccery is rampant in the candidate gene association field; combine that with a widespread failure to correct for multiple testing and a publication system that only rewards positive findings, and you've got a great big factory for manufacturing false positives. This study is by no means an isolated example.
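To put rough numbers on the multiple-testing problem: if each of m independent analyses carries a 5% chance of a false positive, the chance that at least one spurious "significant" result turns up somewhere climbs very quickly with m. A quick sketch:

```python
# Family-wise false positive probability for m independent tests at alpha = 0.05.
alpha = 0.05
for m in (1, 5, 10, 20, 50):
    family_wise = 1 - (1 - alpha) ** m
    print(f"{m:3d} tests -> P(at least one false positive) = {family_wise:.2f}")
# 20 tests already gives ~0.64; 50 gives ~0.92.
```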

Finally, Neuroskeptic discusses the problems with science journalism that resulted in such a lame, under-powered study receiving such over-hyped publicity:

It's not [the science reporter's] fault, it's not even New Scientist's fault, it's the fault of the whole idea of science journalism, which involves getting non-experts to write, very fast, about complicated issues and make them comprehensible and interesting to the laymen even if they're manifestly not. I used to want to be a science journalist, until I realised that that was the job description.

Subscribe to Genetic Future.

T. Furmark, L. Appel, S. Henningsson, F. Ahs, V. Faria, C. Linnman, A. Pissiota, O. Frans, M. Bani, P. Bettica, E. M. Pich, E. Jacobsson, K. Wahlstedt, L. Oreland, B. Langstrom, E. Eriksson, M. Fredrikson (2008). A Link between Serotonin-Related Gene Polymorphisms, Amygdala Activity, and Placebo-Induced Relief from Social Anxiety. Journal of Neuroscience, 28(49), 13066-13074. DOI: 10.1523/JNEUROSCI.2534-08.2008

Of course the study has to be replicated; that's the whole basis of science. However, I find it plausible that a serotonergic polymorphism, which has previously been shown to influence amygdala activity, plays a role in anxiety disorders. The design using a waiting list control group is of course interesting, but nonetheless a repeated measurement would be required in order to measure amygdala activity in anxiety. Thus, the critique is somewhat misleading. This was not "just an association study", but a possible linkage between a polymorphism, amygdala blood flow and placebo response within the context of a real clinical trial.

Further, Neuroskeptic got another major point wrong: it was not seasonal affective disorder, but social anxiety disorder.
Makes me wonder how closely the paper really was read.

Ah, sorry, Neuroskeptic got the diagnosis right; you missed it.

Daniel: Cheers for the link & the kind comments. I don't want to go on record as thinking the study is lame, I should point out - I don't think that the analysis they performed measured the placebo effect, but that doesn't mean the whole experiment was without merit. In particular (hint to the authors if you're reading this) I'd be very interested to see the results of the trial, including the medication groups. Even if they are negative. Especially if they're negative, in fact.

c: Sure it's plausible. I find it plausible. However, the methodological problems with the study mean that it's not evidence for it.

I agree that some kind of repeated measure would be necessary, but if you had two groups (placebo vs. no treatment) then it wouldn't be a problem, because it would no longer be a confound, just noise.

Hi again c,

Your comment actually neatly illustrates a major part of the problem with the field of association studies (and science in general, IMO): "plausibility" is given far more weight than it should be in evaluating the validity of a study. The fact that the authors manage to string together a cohesive narrative around their findings makes for a more entertaining read, but it doesn't make their data any more convincing.

Given the suggestive evidence that this is purely a post hoc analysis, its tiny sample size and Neuroskeptic's substantive criticisms, it simply doesn't matter how pretty the authors' story is.

(And yes, I mis-stated the diagnosis - I've added the correct diagnosis to both my original comment and the excerpt in this post. Thanks for pointing out the error.)

Only, it's not just an association study or a narrative, but a "path analysis that supported that the genetic effect on symptomatic improvement with placebo is mediated by its effect on amygdala activity."

So the association was tested; it's not just a good story.

The authors also previously published a small study including a waiting list control: http://www.ncbi.nlm.nih.gov/pubmed/11982446?ordinalpos=14&itool=EntrezS…
There, they observed little effect in the waiting group, suggesting (yes, not testing) that the response in the current study is a "real" placebo effect.

Nonetheless, I think you should write a letter to the Journal of Neuroscience summarizing your critique. It would be interesting to see the authors' reply.

If the amygdala and serotonergic alleles are not a priori candidates in anxiety disorders, I don't know what are.
So how did your original speculation about post hoc analysis turn into "suggestive evidence"?

Best
C

c: Ah, that link is very interesting. A waiting-list control is just what you need. Now what they should do is to genotype the people in that study (assuming they got DNA) and see if TPH2 correlates with improvement in the active treatment groups. If so, it's a gene for the "tractability" of SAD, if not it might be specific to the placebo effect.

In fact, their data set is probably too small for that, but you see what I mean.

Yes, I agree.

The problem with imaging genetics is false positives and money. On average, neuroimaging studies have 10-12 subjects, which is probably too low. Furmark's study had 25 subjects, which is a lot (but maybe not enough) in PET imaging.

I think the only way to proceed is to publish and be prepared to modify conclusions once alternative evidence accumulates. Hopefully, some big pharma company will now release their data on placebo response and genotype in large phase III studies.