On cheerleaders and watchdogs - the role of science journalism

The recent World Conference of Science Journalists included a wide variety of delegates, from professional hacks who cover science beats, to enthusiastic freelancers and bloggers like me, to people working in press offices and communications departments. And with such diversity, it was perhaps inevitable that people should ask which of these gathered attendees actually counted as journalists, and what exactly science journalism entails.

Certainly, the idea that journalism equated to talking or writing about science in any form was unpopular. In the opening plenary, Fiona Fox drew a fine line between science communication and journalism, the latter characterised foremost by a process of scrutiny. Gavin MacFadyen, Director of the Centre for Investigative Journalism, described most science news as "stenography" or a "cheerleading operation". Over Twitter, Ivan Oransky explained to me, "For many, including me despite my MD background, you can only be a journalist because of conflicts".

These comments espouse a division between science journalism as either a cheerleader or a watchdog, as noted in a recent Nature editorial. To many, journalists only succeed in blowing the trumpet of science, regurgitating content from published papers and press releases when they ought to be subjecting it to further scrutiny by questioning statistics and hype and exposing dodgy data, fraudulent practices, and conflicts of interest (some scientists would add that they don't even do the first job particularly well).

Tied into this idea is the concept of "investigative journalism", which I mentioned in an earlier post on the embargo process. That prompted other bloggers to ask what "investigative journalism" means in the context of science. Physioprof asked, "What kind of 'investigative journalism' could science journalists possibly engage in that relate to scientific discoveries per se, and not things like corruption or fraud?"

What we have here is a conflict about the very meaning of the word 'journalism' (and to a lesser extent, the phrase 'investigative journalism'). It's a semantic tête-à-tête fit for a place among even the nerdiest of scientific disputes. I am not even going to attempt to solve it here, but I do want to discuss the distinction between cheerleader and watchdog, between uncritically waving the banner for science and actually digging into it. And to me, there are two ways of watching the dog.

The first is to look behind the scenes, into the world of funding, the lives of scientists and the wider narrative behind the published papers, analysing and exposing as you go. These are the activities that I suspect most journalists would describe as "investigative journalism". Indeed, in the WCSJ's session on investigative science reporting, all the case studies presented fell into this category. Brian Deer spoke about his incredible investigation that exposed the fraudulent "data" included in Andrew Wakefield's Lancet paper on MMR vaccines and autism, while Luc Hermann talked about Pfizer's attempts to spin claims about its drug Zoloft.

To these tales of "corruption and fraud", I would add stories about the process of funding science as Mike the Mad Biologist describes, and the narrative features that Kim Hannula mentions. In her words, these include "stories that incorporate the history of ideas, that pull together a long series of studies into some kind of coherent tale that gives me a sense of what a community of scientists think, or have thought in the past, or currently disagree about". All well and good, and these are particularly the kinds of stories that the best journalists, with their "news noses" and ability to draw out narrative threads, should be best-placed to uncover.

The second aspect of the watchdog role comes into play when actually reporting on scientific discoveries. Some would argue that this function is only fulfilled by critical and sceptical analysis (see Ben Goldacre as an example), while simply reporting on new research (as I do on this blog) counts as mere cheerleading. But I think that stems from an assumption that a piece which doesn't criticise hasn't involved a process of critical analysis. It all depends on the point at which this scrutiny happens.

When I decide on which papers to write about, I consider not only how interesting they are to me and my readers, but crucially whether they seem decent or not. I read through every single primary paper that I write about before I put a single word down. Are the conclusions solid? Are the numbers big enough to make for statistically meaningful conclusions?  Are they actually asking any interesting questions in the first place? Then, having ditched all the evolutionary psychology and happy that I'm left with some half-decent papers, I actually get on with writing. If during the writing process, something seems askew, I abandon it. I don't always succeed in my selections, but when I fail, my commenters eagerly let me know, and I listen intently.

In this model, critical analysis takes place as a sort of quality control before the point of writing and it manifests not in the form of written diatribes, but in the very presence or absence of stories. Even if your goal is to simply communicate science, you can merge the cheerleading and watchdog functions, and fulfil the latter before a single word is written.

Part of the failure of science journalism is in cheerleading without this initial watchdog step, particularly through regurgitating PR material. That is churnalism and it's the mechanism through which crap reports about equations of happiness, superfood panaceas or evolutionary just-so stories proliferate in the press - bad news (pun intended) for both science and journalism. If the dogs were truly watchful, crap stories would either be covered critically or they wouldn't get any airtime or column inches at all. The price of failure is that the sheer mass of undeserving material squeezes other stories out of the competition for space and reader attention, be they investigative exposés or more straightforward descriptions of genuinely interesting work.

To sum up, we have two possible ways of acting as a watchdog - uncovering the hidden stories behind scientific discoveries, and casting a critical eye on those discoveries either visibly through the actual text or invisibly through the process of selecting what to cover.

Writing skills are essential for both routes. As I've said already, the first, investigative route seems especially suited to good hardcore journalists, people who know a thing or two about snooping out stories, cultivating sources and getting the most out of interviews. The second, analytic route requires some form of expertise - enough scientific background and interest to tell the wheat from the chaff.

The problem, then, is two-fold - many journalists lack the expertise to go down the second route and most of them lack the time to devote to either one, as I discussed at greater length in my earlier post on Flat Earth News. Perhaps there are ways to fix both problems.

It's not just professional journalists who cover new discoveries any more. They are joined by an army of enthusiastic and passionate amateurs - bloggers - who not only have a degree of scientific nous necessary for critical analysis, but (through comments) a mechanism for receiving feedback on what they've written. These people (*waves hand*) are already covering new, breaking research and their numbers include many writers who are excellent at explaining science to the general public. All they lack are the audience figures of mainstream media and the same prior access to embargoed publications that working journalists have.

In terms of daily news, journalists could restrict their coverage of papers-of-the-day to brief summaries of key information, and link across to the best bloggers, who have the knowledge and space to unpack the science in more detail. The bloggers benefit from having mainstream audiences diverted to their sites, while the journalists benefit from having both more time and more column inches for unearthing the deeper stories that their profession should be known for.

(Incidentally, regular readers will know that I think that the worlds of bloggers and journalists aren't quite so clear cut as the somewhat simplified model above would suggest. There is a growing number of excellent journalists who are also bloggers, and I hope to see such hybrids become ever more prominent).


I find myself much freer to pursue both watchdog roles you write about - explain science in context and dig for conflicts - now that I don't write for a newspaper any longer. The appetite to explore science in a meaningful way is waning among U.S. newspaper brass. Junk gets printed, because it's faster and easier to rewrite a press release and you can have a warm body without training do it.

Considering how little science news gets covered at all, it's too bad science writers are fighting over who has more legitimacy, those who dig for dirt or those who dig for context. There should be room for both.

That's fantastic - couldn't have said it better myself. I'm totally going to quote that in the future.

I agree with your categorization - the content at Ars Technica falls almost entirely into the skeptical review of publications that you described partaking in. I'd just elaborate on your description in two ways:

You describe the skeptical review as a purely culling process, while we sometimes go ahead and write up the studies with limitations or questionable conclusions, but note what the problems are. For a public that's not well versed in scientific reasoning and process, I think that can be a valuable lesson in recognizing when something is indicative but not definitive.

The other thing is that I'd add a third category of science journalism, which is taking an investigative approach to science journalism itself. How does a story that's unsupported by the actual data wind up getting headlines everywhere? Where are the failures, how could they be prevented? What does the original publication actually say?

So far, that latter process has largely been the domain of the bloggers (including yourself), but I'd argue it's as valuable as the investigation of the scientific community.

Great points John.

I initially started writing about the process of skeptically reviewing papers instead of just pushing them aside. Clearly a worthy function, and one that other blogs do very well. I think the reason I avoid doing so here is that (a) I only have so much time and I just enjoy talking about good studies more, and (b) I often have enough expertise to set my spider-sense tingling in response to a dodgy paper, but not enough to take it apart in a way I could be sure about. This is another area where scientist bloggers can bring something valuable to the table.

As to investigating journalism (heh), I totally agree. One of the downsides of the WCSJ in my opinion was that, for all this talk about cheerleaders and watchdogs, too many speakers seemed far too happy to act as cheerleaders for their own profession without turning that vaunted scrutiny inwards.

I think John is spot-on in highlighting investigations of unwarranted hype as an area where the blogosphere has generally taken the lead - partly as a response to the reluctance of mainstream science journalists to turn on their own, but also (I suspect) partly because it provides an opportunity for yet more "careful bloggers vs sensationalist journalists" barracking.

The way science is being reported among professional scientists - online publishing, fast epub, and other trends - is changing the rate at which science is being disseminated to other scientists AND the general public. In many ways, these trends are great, bringing the latest research to a larger community of scientists more quickly, but in many ways this fast publication subverts the slow, back-and-forth evaluation of scientific research. As a consequence some studies are being "picked up" by journalistic sources and reported to the general public before the study can be thoroughly vetted by the scientific community. The headline hype every time a "hot" individual study is published in a scholarly journal is particularly dangerous to the incremental process of science and to the public's understanding of how science works. No one study should ever be reported out of context of the other related work in a scientific field. Scientists and professional science journalists have a responsibility to be true to the incremental process of science; unfortunately the "bottom line" of news organizations (ratings and readership) seems to win out. I'm not sure what the solution is (a thinking public, perhaps?), but these are important conversations to have.

Excellent article, I find myself agreeing wholeheartedly. However, there is one point you make which, although a good idea, makes certain assumptions about people interested in science:

In terms of daily news, journalists could restrict their coverage of papers-of-the-day to brief summaries of key information, and link across to the best bloggers, who have the knowledge and space to unpack the science in more detail.

Whilst this makes perfect sense for an ICT savvy generation of bloggers and tweeters, it may mean even less science coverage for the scientifically interested Luddites (perhaps an oxymoron, but a valid description nonetheless) of our society; and there are many more out there than you might expect.

There is still a huge amount of cultural change that is required before traditional newsprint can devolve a significant proportion of its content to a different media. Response to market forces will favour the demands of the purchasers of newsprint - these are the people who don't rely on a computer to get their news, they pay daily for a wad of paper and I have a strong feeling that they are unlikely to respond well to being told to look at a blog if they want the information they thought they would get in hard copy.

In 30 years or so the idea will probably be viable, but by then I'm uncertain that newsprint will still have the mass presence it has now. Still, if I actually bought newspapers rather than reading them online I would find the links to blogs useful, but not as useful as a citation of the original research - and that's something that newspapers should be including as a matter of course.

I kind of see what you're saying Ed, but wouldn't even the level of analysis at ScienceNOW or the Science Channel be described by its writers as investigative-journalism-by-omission?

Good post, and interesting to hear you spell out your own methods. I believe science blogging is still in its infancy, and out of the 1000s of blogs out there only a tiny percentage actually do consistently good science reporting AND are well-written at the same time (but yours is one of the few). It will be interesting to watch this whole blogger/journalist/science communication tussle evolve over time, but blogs still have a long way to go.

Nice picture at the top. I like it that the cheerleader is blonde and stupid and the watchdog is brunette and smart.

Good post, mind!

To sum up, we have two possible ways of acting as a watchdog - uncovering the hidden stories behind scientific discoveries, and casting a critical eye on those discoveries either visibly through the actual text or invisibly through the process of selecting what to cover.

Importantly, neither of these modes has anyfuckingthing whatsoever to do with being the first into print with a regurgitation of university press releases 1 fucktillionth of a motherfucking microsecond after some embargo ends.

Nice post, Ed. Let me offer one answer (not the only one) to the question of what watchdogging/investigative journalism in science would look like: My story on the overexpansion of the PTSD diagnosis. Here's a story in which empirical principles are ridden over roughshod because of political and cultural agendas, assumptions, and forces. It's science gone awry, with profound and lasting consequences, most especially for the soldiers the diagnosis was originally formed to help.

The links at http://is.gd/1qfES will take you to the story at Scientific American as well as a bunch of related blog posts and resources.

PaoloV - It's a good point about the disparities in internet use which would see that model particularly failing the lowest socioeconomic groups (and the eldest) of society. It's a trap that we who live and breathe the Internet fall into, so well put.

Gillt - I don't mean to say that what I do is in any way investigative. It's not, and I have no problem with that. But I challenge the assumption that reporting and translating science automatically puts you in the cheerleader category or that there's somehow something amiss with that. My point was that a fair amount of critical analysis goes into the process of talking about papers before the point of writing. If the writers of ScienceNOW etc. do the same, then all power to them.

Physioprof - fair point. However, I go back to what I said in the post on embargoes that my primary reason for favouring them is that they increase the quantity of science coverage in the mainstream media. In an ideal world, journalists wouldn't care about scooping or being first into print, and editors wouldn't exact a crushing time pressure onto them, but until those larger structural problems are addressed, in the current journalistic climate, I think the embargo process has benefits that can't be ignored. And that's a separate issue to whether people regurgitate from press releases, a practice that we stand united in loathing.

David - great example. Exactly the type of thing I was describing.

Well, I personally have a bad relationship with "science journalism" because everywhere I look, the writers can't get the most basic facts straight about paleontology, or at the very least, they think "Jurassic Park" is the only touchstone the public is aware of. Everything gets compared to that damn movie and the dinosaurs in it. It's as though the Associated Press believes that the public will simply not understand anything prehistoric if they don't throw Jurassic Park in there. And like Matt Wedel said over at SV-POW, every carnivorous dinosaur has to be compared to either Tyrannosaurus (T. rex) or Velociraptor.

But then you get really horrible things like calling pterosaurs "flying dinosaurs" or Albertonykus a "missing link" between dinosaurs and birds. The lack of basic knowledge, not just in terms of paleontology but SCIENCE is astounding.

And then there was the Ida disaster. It's not just a lack of expertise in "science journalism," but responsibility, too. I don't get paid to blog about ANYTHING, and I have no degree in a field relevant to science or journalism, yet my blog posts are, line by line, more accurate than any Associated Press paleo news story. And that really makes me angry.

Great post and great blog in general! Even more pleasing because your name is almost the same as mine ...

I found the Nature essay fascinating and it's good to see the additional context you provide. The interesting thing for me was how uncritical science journalism was in the mid-20th century, and how it actively covered up things like the DDT scandal. Given that history, the practice of science scare reporting in newspapers seems understandable, though of course this makes a more exciting story.

Maybe science news stories need *longer* embargo periods in order to get facts double checked and for background information to be trawled through. At the moment, anything resembling ethical debate on science issues in the public sphere is basically impossible because there is so little understanding of the basic points.

Good post - glad to see the term 'cheerleader' being used by others about certain (I would say most) types of science journalism; it's a word I've used for years about it.

But what are the 'basic points' the public are supposed to understand to have something resembling an ethical debate? The laws of thermodynamics? The parts of a cell? That there aren't little green men on Mars? That Evolution is the True Theory? People generally understand that money corrupts and power corrupts, and that while science is supposed to be above all that somehow, scientists are human. If science journalists are too lazy to report about science as a human enterprise which can only aim at the Platonic ideal, scientists have been equally happy to play god.

My first question about any new discovery reported in the press these days is 'who paid for it?' This might seem a bit cynical, but it isn't a bad place to start for the non-specialist. Out of this question comes a host of others, like why was this important to know, what assumptions were made, did the experiment challenge or confirm them - but you can save a lot of time examining the statistical pool by answering the first one.

In terms of grasping the basic points about science, apart from money and egos, the other thing anyone can understand is that any theory or experiment exists within the concerns and capabilities of the people who make them. This is not to say that there is no scientific truth, only that the understanding - and interest in - that truth will be necessarily limited by circumstance and technical skill. These boundaries can be clearly explained to anyone with common sense.

I've just finished reading a beautiful pair of science history books from the early 60s - 'The Fabric of the Heavens' and 'The Architecture of Matter' by Stephen Toulmin and June Goodfield, which do this very well. Of course some of the details have been superseded by later discoveries. But I came away from their general story about the interplay of craft, theory, religion and politics since the Greeks feeling not only that I understood better 'how science works' in general but also individual theories and experiments which have eluded my grasp these twenty years. The key here was the authors' deep respect for and understanding of what people could know and what they were interested in asking. Failed theories became not merely 'wrong' but part of the process of increasing general understanding - and good ways to help explain those which 'won'. If you don't know these already, have a look.

Anyway, congrats on the prize and keep up the good work.