I posted last week on a paper purporting to identify a genetic variant influencing the placebo response. The main message of my post was that given the terrible history of small candidate gene association studies, a paper describing an association with a sample size of just 25 individuals should be simply ignored – and certainly not described in the popular science press as “a milestone”.
Now Neuroskeptic has a detailed critique of the paper up in which he argues that the problems in the study go much deeper than inadequate sample size – in fact, the study wasn’t measuring the placebo effect at all:
So, calling the change from baseline to 8 weeks a “placebo response”, and calling the people who got better “placebo responders”, is misleading (at least it misled every commentator on this study so far). The only way to measure the true placebo effect is to compare placebo-treated people with people who get no treatment at all. This wasn’t done in this study. It rarely is.
(For what it’s worth, I plead guilty to being among the misled).
Neuroskeptic also has concerns about the likely post hoc nature of the study, which greatly increases the probability of false positive findings. I couldn’t agree more, and indeed raised the same issue in the comments to my post:
I’ll grant you that it’s possible that the association is genuine, but here’s what I see as the default explanation. We know that the authors took these samples from two somewhat larger studies looking at genotype, PET and clinical response to drugs in patients with
seasonal affective disorder [should have been social anxiety disorder] performed back in 2003-2005. They presumably tested a whole bunch of different hypotheses: whether genotype affects PET activity in a whole bunch of different areas of the brain, whether genotype affects response to drugs, whether PET activity correlates with drug response, whether males differ from females, young from old, etc. With each additional hypothesis, the probability that at least one of them would provide a statistically significant result purely by chance increased. In this case, the correlation between one particular genetic polymorphism and the placebo response was the one that came up (through a combination of chance, error and bias), and that’s the association that got published. Hey presto: headlines, glory, and groupies for the authors.
Post-hoccery is rampant in the candidate gene association field; combine that with a widespread failure to correct for multiple testing and a publication system that only rewards positive findings, and you’ve got a great big factory for manufacturing false positives. This study is by no means an isolated example.
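To see how quickly uncorrected multiple testing manufactures false positives, here’s a quick back-of-the-envelope sketch (my own illustration, not an analysis from the paper): if every null hypothesis is actually true and each test is run at the conventional 5% significance threshold, the chance of at least one spurious “hit” climbs steeply with the number of independent tests.

```python
# Family-wise error rate when testing k independent hypotheses
# at alpha = 0.05, assuming every null hypothesis is true:
# P(at least one false positive) = 1 - (1 - alpha)^k
alpha = 0.05
for k in (1, 5, 20, 50):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:3d} tests -> P(at least one chance 'hit') = {fwer:.2f}")
```

With 20 tests the odds of a purely chance finding are already better than even, which is why corrections like Bonferroni (dividing alpha by the number of tests) exist, and why their absence, combined with publication bias toward positive results, is so corrosive.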
Finally, Neuroskeptic discusses the problems with science journalism that resulted in such a lame, under-powered study receiving such over-hyped publicity:
It’s not [the science reporter's] fault, it’s not even New Scientist’s fault, it’s the fault of the whole idea of science journalism, which involves getting non-experts to write, very fast, about complicated issues and make them comprehensible and interesting to the laymen even if they’re manifestly not. I used to want to be a science journalist, until I realised that that was the job description.
T. Furmark, L. Appel, S. Henningsson, F. Ahs, V. Faria, C. Linnman, A. Pissiota, O. Frans, M. Bani, P. Bettica, E. M. Pich, E. Jacobsson, K. Wahlstedt, L. Oreland, B. Langstrom, E. Eriksson, M. Fredrikson (2008). A Link between Serotonin-Related Gene Polymorphisms, Amygdala Activity, and Placebo-Induced Relief from Social Anxiety. Journal of Neuroscience, 28 (49), 13066–13074. DOI: 10.1523/JNEUROSCI.2534-08.2008