At The NY Times Mag, is it really "bad science" or is it bad communication?

In the cover story of this week's NY Times Magazine, Gary Taubes digs deep into the world of epidemiological research on diet and health. It's an important topic to call attention to, but the article is framed in disastrous and irresponsible ways.

Instead of telling a detective story hung around just how amazingly complex it is to figure out the linkages between diet, drug therapies, and human health, Taubes and his editors go the unfortunate route of defining the article in terms of conflict, drama, and public accountability.

They readily translate their preferred interpretation by way of titles, headers, and callouts such as "bad science," "science vs. public health," "the flip flop rhythm of science," and "why we can't trust" science. Perhaps more powerfully, the artwork featuring human miniatures in petri dishes and test tubes instantly relays a message of human experimentation and "big science."

To be clear, Taubes' article offers valuable insight into the uncertainty and social side of science, but the misleading moral lesson is that science can't be trusted.

This is not unexpected from the NY Times Magazine, a space at the newspaper where policy issues are often over-dramatized to stir attention and drive discussion (see this Public Editor column). There is nothing wrong with a magazine striving to set the public or policy agenda, but to do so in ways that unfairly trigger reader (and policymaker) distrust is irresponsible.

In fact, in holding the institution of science accountable, Taubes and his editors fail to turn a critical eye on their own profession. One reason health studies might appear so confusing is because of what fellow NY Times reporter Andrew Revkin describes as the "tyranny of the news peg."

As Revkin details in his excellent chapter in A Field Guide for Science Writers, one problem in the communication of uncertain science is that university research officers and journalists overwhelmingly define what's news in science as the release of a new scientific study. Everyone benefits from this negotiation of newsworthiness: universities compete for prestige and future funding dollars, while journalists file dramatic narratives on deadline with less effort than a more thematic backgrounder would require.

Yet often lost is context for what previous research might say or where the body of evidence might be leading. Everyone's a cognitive miser, including the lay public, policymakers, and editors at magazines like the NY Times' weekly. Constant exposure to "true today, not true tomorrow" reporting on new health studies invites the easy interpretation that something is wrong with the institution of science, rather than pointing to a systematic bias in how research is communicated and then reported by journalists.

UPDATE:

Curtis Brainard at Columbia Journalism Review weighs in on Taubes' article and my critique.

Charlie Petit at the blog for MIT's science journalism program sides with my criticism.

"End of Science" author John Horgan has a different take.

COMMENTS:

I argued (repeatedly!) that this problem is rooted in part in the misunderstanding widely afoot among the general public, and often among journalists, regarding the difference between textbook knowledge and science out at the frontiers where folks are learning new things.

We learn our science from textbooks (Kuhn discusses this at some useful length), which among other flaws offer up a fixed body of permanent knowledge. Journalists often treat The Latest Study (published this week in Science!) as a new piece of fixed knowledge, ready for the textbook. In this regard, the journalists themselves are clearly being cognitive misers too.

Ouff! I actually made it through the whole article, and it was a worthy and informative read, if a little dry and technical.

Taubes even seems to know about the "tyranny of the news peg" risk when, right in the conclusions (are conclusions more likely to be read by an impatient or time-starved reader? I would guess so, but I don't know), he notes:

--
«One is to assume that the first report of an association is incorrect or meaningless, no matter how big that association might be. After all, it's the first claim in any scientific endeavor that is most likely to be wrong. Only after that report is made public will the authors have the opportunity to be informed by their peers of all the many ways that they might have simply misinterpreted what they saw. The regrettable reality, of course, is that it's this first report that is most newsworthy. So be skeptical.»
--

Nowhere in the article do I see implied bad faith or hidden interests on the part of the scientists, only that their work is complex and they may and do fail, which seems fair to me.

But it's true that the frame (that is, the headings and the images that catch the skimmer's eye) may leave those not willing to engage with the whole text with an unwarranted fishy aftertaste.

Not to sound glib, but another reason that journalists choose to write about what's new in research is that our readership demands it. While some folks will go for the more considered approach -- Scientific American does have a readership, after all -- many others conflate interesting with new. I'm not sure this is an entirely avoidable human trait.

"conflict, drama, and public accountability."

These are often editorial demands, quite at odds with the detective story the science journalist would like to tell, absent all the relentless editorial interference which surrounds any writer working in a newspaper setting.

To esteemed Matthew C. Nisbet, John Fleck:
P-a-leeeeeease,
Do not shoot the messenger.

In regard to the "bad faith or hidden interests on the part of the scientists":

That is exactly the point. It's not that scientists are bad or dishonest; it's that the "science" itself is flawed. Epidemiological studies are, by definition, conducted on a limited and often unrepresentative subset of case and control participants. No wonder that, driven by notorious academic pressures to publish at any cost, researchers produce conclusions that can easily be overturned by the next study with a group of participants that is not better and not worse, just different.
Public scrutiny and a fiscal audit of the millions spent (wasted) on this "science" are long overdue.

Mark,
I hear you, and definitely! In my talks and articles, I am the first to defend science writers as normally doing a great job, and to emphasize to scientists that they are often not the problem when it comes to communication.

But with the tyranny of the news peg, it's a twofold problem: science institutions focus too narrowly on press-release advocacy of single studies, and reporters too readily rely on such news-peg-driven journalism.

How do we break the tyranny of the news peg? In an article with Chris Mooney, we take a page from recommendations by the Royal Society of the UK.

Science journals and editors can break the tyranny by publishing, alongside new research, companion articles authored by policy or ethics experts that contextualize the implications of the study. See this link:

http://www.csicop.org/scienceandmedia/hurricanes/

Taubes' piece had a lot of valid criticisms of both the science and its reporting. In particular, he does a good job of describing in detail some of the potential confounding variables in many studies, which may explain how so many observational studies can be so wrong. What I found unfortunate was that the piece seemed to be out to paint "epidemiology" as a villain.

My understanding is that epidemiology encompasses both the observational studies that are prone to these confounding influences and also the randomized controlled trials that are the gold standard. The public, and especially journalists, should certainly attend to the differences between various types of studies. Taubes' recommendation to ignore everything but randomized controlled trials and the studies with huge effects might not be bad advice. But that doesn't make epidemiology a pseudoscience.
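
To make the confounding point concrete, here is a toy simulation (my own illustrative sketch, not anything from Taubes' article; every probability in it is made up). A hidden "healthy lifestyle" trait drives both who takes a treatment and who has a good outcome, while the treatment itself does nothing. The observational version shows a large spurious benefit; the randomized version correctly shows none.

--
import random

random.seed(42)

def simulate(randomized, n=100_000):
    # Hypothetical numbers, chosen only to illustrate confounding.
    treated, untreated = [], []
    for _ in range(n):
        healthy = random.random() < 0.5              # hidden confounder
        if randomized:
            takes = random.random() < 0.5            # coin-flip assignment
        else:
            # healthier people self-select into the treatment
            takes = random.random() < (0.8 if healthy else 0.2)
        # the outcome depends only on the confounder, never on the treatment
        good = random.random() < (0.9 if healthy else 0.5)
        (treated if takes else untreated).append(good)
    return sum(treated) / len(treated), sum(untreated) / len(untreated)

for label, flag in [("observational", False), ("randomized", True)]:
    t, u = simulate(flag)
    print(f"{label:>13}: treated {t:.2f} vs untreated {u:.2f}")

# Prints roughly:
#   observational: treated 0.82 vs untreated 0.58   (spurious benefit)
#      randomized: treated 0.70 vs untreated 0.70   (no effect, correctly)
--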

Matt, I wish I lived on your planet, where Taubes' article is below average. Here on this one, it's more nuanced and balanced than 95% of the general-audience science writing I see. On a skimming re-read, I found at least five explicit acknowledgements of "bias in how research is communicated and then reported by journalists." You seem annoyed that his main point -- essentially an epistemological one, that we are all prone to convert probabilistic and correlative situations into cause and effect -- isn't the one you'd have put front and center.

By Monte Davis (not verified) on 19 Sep 2007 #permalink

Scientific findings are always provisional. They always require a complete understanding of their context and they are always subject to new findings which may elaborate or even contradict interpretations of earlier findings.

Reporting of scientific findings almost always fails to point this out well. It tends to err on the (spectacular) side of suggesting that the "latest finding" is somehow "final" and can be interpreted without any significant context. Alternatively it may err on the (equally spectacular) side of bashing science, suggesting that the former "truth" was a "lie" or "failure". Scientific statements, by their very nature, are at best "partial truths".

It is also the nature of the Journalistic beast to want to be spectacular (as it is the nature of the Scientific beast to want to find positive rather than negative results in its experiments) and to promote "new" News over "old" News.

Public understanding and reception of science writing is perhaps most skewed around medicine and health. The public wants science to produce "immortality" and is consistently grumpy when faced with the possibility that immortality is, at the least, not going to be achieved in their lifetime. We deify our doctors and scientists in this, only to tear them down when they fail us in our pursuit of avoiding discomfort and seeking immortality.

The politics and economics of (big) science also contribute to these problems. Scientists' voices are often adjusted to avoid offending their patrons, who often literally do not understand the science they have commissioned, nor the processes and philosophy of science. Having paid for the research, the patrons imagine they get to present and interpret the results in a way which supports their agenda. The scientists who did the work, accepted the money, and told the most truth they could uncover often fail to thwart this effectively.

Go figure.

By steve smith (not verified) on 23 Sep 2007 #permalink
