Prozac and Placebos

Late last month, I put up a quick post, New-generation antidepressants do not produce clinically significant improvements in depression, that addressed a meta-analysis of interest published in PLoS.

I was careful to use the phrasing from the paper as the title of my post, and to provide only the authors’ summary, because I knew this was a tricky issue. I could have read the paper carefully and reported my opinion on it alongside the information from the paper (a practice known as “blogging on peer reviewed research”), but I did not have the time or interest to do so, yet I knew many of you would want to know about this.

This is the beauty of PLoS, by the way. Regular people can read the original paper because it is an Open Access journal.

Anyway, it turns out that this study was misinterpreted by the press more than most, and this has led to the production of a commentary by Andrew Hyde on the PLoS site:

[The paper] appeared on the front page of four UK national newspapers (the Guardian, the Telegraph, the Independent and the Times), was the leading item on the BBC News and prompted stories in Time, the Wall Street Journal and the Economist.

The paper not only posed questions about the benefits of antidepressants, it revealed how many clinical trial results do not see the light of day. But whilst the issues relating to it continue to be debated … some of the headlines in the media maelstrom misrepresented the study.

The headlines started appearing at 1am GMT on Tuesday 26th, as soon as the embargo ended. The front page of the Independent (“Antidepressant drugs don’t work – official study”), the Guardian (“Prozac, used by 40 million people, does not work”) and the Times (“Depression drugs don’t work, finds data review”) all opted for an outright statement that antidepressants don’t work. But the study does not show that antidepressants do not work. Rather, the evidence reviewed in this analysis did not show these antidepressants to produce enough of a beneficial effect over the placebo to be termed clinically significant. Language Log, a linguistics blog, gives an account of how some journalists got tangled up in their own sentences when trying to describe the results.

Time Magazine opted for a slightly different presentation (“Antidepressants Hardly Help”), pointing out that there is a difference between “statistical significance” and “clinical significance.” …

So some of the headlines were off the mark, but one good thing that can come out of the coverage is a re-ignition of the debate in the media about the importance of having access to all clinical trials data – not just the positive results pushed for publication by pharmaceutical companies.

Hopefully some informed debate about drugs and depression in general can come out of all of this coverage. Read some of the reader responses that we’ve received – in particular the compelling response by Jeanne Lenzer and Shannon Brownlee, which does a good job of summing up the most important implications of the study. …

The commentary quoted above can be read in full here. Therein you will find links to the paper and a number of other interesting sources.