Why is some coverage of scientific news in the media very poor?

Ever have one of those times when you have a cool new blog post all ready in your head, just needing to be typed in and published? Only to realize that you already published it months ago? Brains are funny things, playing tricks on us like this. I had one of those experiences today, then realized that I had already posted it, almost word-for-word, a few months ago. It's this post. But something strange happened in the meantime: that post, in my head, got twice as long and changed direction - I started focusing on an aspect that I barely glossed over last time around. So perhaps I need to write this one anyway, with this second emphasis, and instead of retyping the first half all over again, just ask you to read the old post first, as it provides the background necessary for understanding this "Part II" post. I'll wait for you right here, so go read it and come back....

Are you back?

OK, so, to reiterate: about four months ago, I started monitoring media coverage of PLoS ONE papers and posting weekly summaries and linkfests on the everyONE blog. In the previous post (the one you just read before coming back here) I focused on the importance of linking to the papers, and on why bloggers tend to provide links while traditional media do not. There is also a good discussion in the comment thread there. Here, I'll shift focus to the quality of coverage instead.

Expectations before I started:

Before I was given this task, I was already, of course, aware of a lot of coverage, just not in a systematic way. After all, I needed to read the coverage on science blogs in order to make my monthly pick.

Reading science blogs of repute - those approved by the editors of ResearchBlogging.org, or those chosen to join networks like Scienceblogs.com or Discover - I assumed that I was predominantly seeing the best of the best of bloggy coverage of science stories, and that there was probably some lesser stuff out there that I just did not pay attention to. If I noted some really bad coverage, I usually discovered it via science blog posts that debunked it - such bad coverage tended to come from specialized anti-science or pseudoscience blogs whose political agenda is to misrepresent scientific findings in a particular area of science (e.g., blogs of Creationists or Global Warming Denialists). I assumed that there was also a bunch of stuff in the middle, OK but not brilliant.

Likewise, without my specifically looking for coverage in traditional media, such coverage would often find me, either when a blogger linked to it or via the TwitterFriendfeedFacebookverse. Again, it was either excellent coverage of a big story of the week, or such an egregiously wrong explanation that it was the duty of science bloggers to do the fact-checking and correcting in a public place - their blogs - so that an audience googling the topic would hopefully see those corrections. Again, I assumed I saw only the best and the worst, and that there must be a lot of middlin' stuff in between.

So, if I designed a scale to measure the quality of reporting, ranging from Amazing to Excellent to Very Good to Good to Average to Meh to Poor to Atrocious, I expected to see both blog posts and MSM articles spanning the entire scale in pretty proportional distribution, more of a flat line than a bell curve.

That's not what I found.

What I found surprised me....and depressed me.

So, let me classify coverage by quality:

1) Anti-coverage.

Anti-science and pseudoscience blogs actually rarely post about specific scientific papers. They are essentially political blogs, and thus most of their posts are broad, opinionated rants. When they do target a new paper, they tend not to link to it, which makes it difficult for me to find the post in the first place (my previous post describes the detailed methodology I use to find coverage).

There is plenty of anti-science and pseudoscience ranting in what goes under the heading of traditional media as well. Just like blogs, they tend to make broad opinionated rants and not focus on specific papers. HuffPo is notorious for pushing medical quackery and pseudoscientific NewAge-style woo. The things that look like media but are just well-funded AgitProp fronts for RightWing organizations sometimes focus on science they hate, especially Global Warming which they deny. Interestingly, unlike their cable counterpart which is pure ideological propaganda, the FoxNews website has relatively decent science coverage as far as traditional media goes.

It does not matter to me that many of those outlets are indexed in Google News - PLoS is a serious scientific organization and I will not reward anti-science forces with a link from our blog or legitimize them by mentioning them.

2) Non-coverage.

Some blogs are personal RSS-feed aggregators, automatically importing feeds from various sources or on specific keywords. If those posts include links to the original, they are legitimate; if there are no links, they are splogs (spam blogs). Either way they are useless - that is not coverage of our papers, so I ignore it. In the case of a very big story, I sometimes see a blogger who is not a science blogger post something about it - usually just a copy-and-paste of some text from ScienceDaily, rarely with any editorializing (e.g., a funny title, or a LOL-cat-ized picture). That is also not original coverage, so I safely ignore it.

Likewise, dozens or hundreds of newspapers often copy and paste (sometimes abbreviated) text coming from AP or Reuters or AFP or TASS. They are essentially equivalent to feed-blogs (since they usually say that the text is from Reuters, etc.) or even splogs (since they never link to the original scientific paper), and only a technicality saves them from being considered outright plagiarism. Thus, I link to the Reuters original (which recently started adding links to papers - Yay!!!) and safely ignore all the others - they are not considered coverage.

Sites like EurekAlert! and ScienceDaily collect barely modified press releases. If I can find the original press release on the University site I may link to it in my weekly post, but I do not link to these secondary sites - they are just aggregators, not sources of original coverage.

3) Poor-to-average coverage.

This comes 100% from the traditional media. Bloggers are either experts - scientists themselves (or science teachers or science writers) - and thus do a good job, or they are not experts, in which case they are not interested in science, do not blog about it, and certainly have nothing to say even if they copy-and-paste something sciencey from the MSM - so they don't even try to write original blog posts with their own opinions. The middlin' gray area just does not seem to appear on blogs. But there is tons of it in the MSM - the range of my quality scale from Average down to Atrocious is filled with MSM articles. They are bad, but they are "original reporting" at least to some extent (though probably warmed-over press releases, rewritten to use different words and phrases), so I include the links in my posts anyway.

4) Good to excellent coverage.

This is interesting. Many PLoS ONE papers get covered on blogs, and are usually covered wonderfully well. On the other hand, good MSM coverage happens only for the Big Papers, those that are covered everywhere (like Nigersaurus, Maiacetus, Green Sahara, Darwinius....). And those excellent MSM articles are written by well-known science writers in top media outlets (e.g., the Guardian, the London Times, the NY Times....). Traditional media pull out their Big Guns only for a rare Super-Paper, while bloggers cover everything well, big or small - whatever their interest or area of expertise. Finally, I should point out that the websites of magazines and of public radio (NPR or PRI) tend to cover science better than the websites of newspapers and TV outlets. At the latter, science is covered better by their resident bloggers than by the main news site.

Why does PLoS treat bloggers as journalists, keep bloggers on the press list, and highlight blog coverage? You could say we are nimble and 'get it', or that we want to be perceived as cutting-edge, or that we'll take whatever coverage we can get. But really, the most important reason we do this is that the coverage of our papers by bloggers is just plain better than the MSM's, and significantly so.

So, when the British Council suggests only the science pages of newspapers for science coverage, or when the KSJ Tracker puts together linkfests of coverage of science stories composed entirely of media organizations that existed 20 years ago, they are not just outdated (and thus look like dinosaurs) - they miss the very best coverage out there.

What is the difference between Good and Poor coverage?

I have been looking and looking and looking....and I think I finally figured it out. Scientific expertise or experience in covering science by the journalist (or blogger) is a relatively small factor in determining the quality of the article. Much more important is the availability of space! Bad articles are short; good articles and blog posts are long.

A couple of inches is just not enough to cover a new scientific paper properly.

Let's dissect this in more detail....piece by piece.

Lede - an MSM article will always start with it. But to a blogger's sensibility, the lede is weird. Strange. Artificial. Unnatural - nobody really talks like that! It is also superfluous - a fantastic waste of limited space. And it feels so pretentious: if you are reporting on a scientific story, why start with a paragraph hinting that you think you are some kind of Tolstoyevsky? Bloggers either jump straight into the story with a declarative introductory sentence, or start by providing context (including copious use of links), or occasionally start with a joke, a personal story or a funny picture, then segue into the serious coverage (but bloggers have endless space, so they can afford to waste some of it).

Human Interest paragraph - an MSM article always has it. Bloggers will have it if there is a strong human interest aspect to the story (though they never interview Average Joe on the street to get a useless quote). Science stories are either "cool" or "relevant" or "fishy". The latter two often have a human interest aspect, which a good article or blog post will explore. But many 'cool' stories do not - the human interest starts in the reader's mind while reading the story; the human interest is in reading about cool animal behavior or some wonder of the cosmos. There is no need to artificially invent a human interest aspect for such stories - those inventions are often misleading, and always a waste of space.

Main conclusion of the paper paragraph - of course bad articles, good articles and good blog posts will all have the main conclusion clearly spelled out. But good articles and blog posts have sufficient space to explain those conclusions - from the methodology (is it trustworthy, novel, creative?) to the authors' conclusions (do they follow from the data, miss some important alternative explanation, or over-speculate?). They have enough space to explain how those conclusions differ from similar conclusions reached by previous studies, etc. A brief article has no space for this, so the summary conclusion is either too blunt and short to be accurate, or too similar to conclusions the reader has already encountered many times before.

Context - there is no space for context in a short article. Yet it is the context that is the most important part of science coverage, and of science itself - remember the "shoulders of giants"? Placing a new study within its historical, philosophical, theoretical and methodological context is the key to understanding what the paper is about and why it is important, especially for the lay audience. Even scientific papers provide plenty of context in the Introduction (and often in the Discussion as well), which is sprinkled with references to earlier studies.

Quotes - even the shortest article will have a brief quote from one of the authors and/or another scientist in the field, and sometimes from another scientist who is a naysayer or skeptical about the results. The names of the people quoted are usually completely unfamiliar to the lay reader, so invoking them adds no heft to their claims. This is pure HeSaidSheSaid journalism and, again, a colossal waste of space. Not to mention that there are no links to the homepages or Wikipedia pages of these quoted scientists for the audience to see who they are. And we know that a cherry-picked quote that does not link to the entire transcript or file of the interview is a huge red flag that sharply diminishes the reputation and trustworthiness of the reporter and the media outlet.

Why don't science bloggers quote other scientists? Why should they? A science blogger is simultaneously a reporter and a source. If there is a new circadian paper that I find interesting enough to blog about, I am both reporting on what other scientists did AND serving as a source of expertise in evaluating that work. Why quote someone else when my entire post is essentially an interview with myself, the expert - not just a quote but the entire transcript? The chances that I will get something wrong about a paper in my own field are tiny, and if it happens, other people in the field read my blog and are quick to correct me in the comments (or via e-mail - yes, it has happened a couple of times, and I made corrections to the posts). Why add redundancy by asking yet another expert on top of myself?

So, a brief article contains a lot of unnecessary stuff while leaving out the most important pieces: the details of the methodology and the context. Those most important pieces are also the most interesting, even to a lay reader - they situate the new study within a bigger whole and will often prompt the reader to search for more information (for which links would be really useful).

If you are a journalist whose editor gave you plenty of space to cover a Big Story, you can have all of the above in your article, which makes it good. If you are writing for a magazine, it is to be expected you will have plenty of space to give the study sufficiently complete coverage. If you are a blogger, space is not an issue so you just write until you are done. When the story is told, you just end the post and that is it.

But if you are a beat reporter with gazillions of stories to file each week, under tight deadlines, with a couple of inches for each story, then at least try to think about how to best use the space you have - is that lede really necessary? The quotes? Can you squeeze in more context instead? Can you end with a URL to "more information on our website"? Then write (or have someone else write) a longer version on the website, with more multimedia and with plenty of links to external sources, explainers, other scientific papers, and bloggers who explained it better.

How to rethink the Space Restrictions

Once upon a time, buying a newspaper or magazine was an act of getting informed. What was printed in it was what information you got for the day.

Today, a newspaper is a collection of invitations to the paper's website, a collection of "hooks" that are supposed to motivate the readers to come to the website. Different stories will hook different readers, but they will all end up online, on the site (where they may start clicking/looking around). Why? To see more!!!!

What a disappointment when they come to the website to see more...only to find exactly the same two inches they just read on paper! Where is the "more"? Where is the detailed explainer, the context, the useful links? Not there? The reader is still interested, though, and will Google the keywords, leave your site, and end up on a wonderfully rich and informative science blog instead, never to come back to you and your poor offering.

Public radio folks - both NPR and PRI - realized this long ago. Have you noticed how every program, and often every story, ends with an invitation to the listeners to come to the website to see more - images, videos, documents, interactive games, discussion forums, even places where the audience can ask questions of (and get answers from) the people they just heard as guests on the radio? Radio understands that its "space" (time) is limited and heavily rationed (seconds instead of inches), and it uses its programming as a collection of hooks produced specifically to pique interest, as lures for the audience to go to the website to 'see more'. And there is a lot of that "more" on their sites to keep people coming back.

Very few newspapers have realized this yet - some have, but their online offerings are still not rich enough to be truly effective. Let's hope they start doing more of that if they want to retain the trust and reputation of their brand names, and the audiences loyal to them.



Funny you mention links and finish with loyalty to brands. I put up a post (see the link on my name) a few minutes before yours, about the lack of links in MSM posts, that included a rumination that they perhaps need to demonstrate that they are pulling together all the "bits", and that links might "demonstrate" that to readers and thence encourage loyalty.

More than that is needed, as your article says, but in my defence I was only looking at the one aspect!

Scientific expertise or experience in covering science by the journalist (or blogger) is a relatively small factor in determining the quality of the article.

I differ on this in a subtle way that I've been meaning to blog about. (You're partly right IMO.) Must get around to it sometime...

A great analysis. Mainstream journalists can obviously learn a lot from bloggers, but I think it also works the other way around. The quotes you mentioned are a good example. A great quote can highlight controversies and different opinions in a single sentence. And since you rightly mentioned that bloggers have all the space in the world for their story, I think bloggers could make some more effort to find that sometimes-valuable third perspective.
I have to admit I'm guilty of not using quotes myself, mainly because I want the blog post to be published as soon as possible after it is finished. I don't want to wait for the responses of authors or experts... This is not a valid excuse of course, but I suspect that it holds true for most science bloggers.

Small suggestion: you might want to explain the abbreviation MSM in the beginning of your post. I only figured out halfway through that it stands for main stream media (right?). Or maybe I'm just being stupid ;).

As per usual, we agree on many things, including the importance of context, the need for good blog coverage to be given more prominence than it currently gets (although I note that KSJ have kindly linked to several of my pieces in the past), the irritating need to have a quote in every piece, and the overreliance on a fixed story structure.

But there was a lot that really frustrated me about this post, which in some places reads like a tribute to confirmation bias.

You rightly slam "anti-coverage" blogs but then you just put them aside. You can't do that - they still count towards the sum total of science coverage in the blogosphere and weaken its overall quality. It doesn't matter that you classify them as "political blogs" - a lot of the worst science reporting in MSM is done by "political journalists" who are straying outside their field, but you wouldn't make the distinction there.

We agree on the fact that simply aggregating press releases counts as "non-coverage" but again, you attribute this to bloggers who are not science bloggers and again, I think this is untrue. I have seen many examples of well-known science blogs cutting and pasting large tracts from press releases, aggregator sites etc, or articles written based on such material. Churnalism is both easy and prevalent online.

You say that "poor-to-average coverage" comes 100% from the traditional media. And here is where you lose me completely. You're taking a ludicrously narrow view, Bora, if you define quality solely in terms of accuracy. I've now judged two editions of OpenLab and in my opinion, a huge proportion of blog posts fall into this "poor-to-average" category. Sure, they may have the technical details correct but many are unintelligible, boring or painful to read. We agree that specialist knowledge is necessary for good science reporting, but you seem to be suggesting that it is sufficient. That's patently untrue.

Bottom line: many nice ideas, but the first analysis seems to be "MSM are rubbish except for the good ones who are in the minority and can be excluded, while bloggers are awesome except for the ones who are not, but they're not real science blogs so they don't count". As you might expect, I'm all for giving more prominence to science blogs and respecting their strengths. But hubristic partisanship just sets us all backwards.

In every category in which I excluded blogs, I also excluded the equivalent in the MSM. Then I compared only what was left. I took pains to point that out. As for science blog posts that are dull - I noted those in Part I: they are a different part of the ecosystem, with peers as the intended audience, and thus not really appropriate to analyze in a discussion of coverage for the lay audience.

@Grant: Expertise plays a role. I assume that a journo with no knowledge would seriously botch an article even if given sufficient space. But when space is so limited, expertise almost does not matter - even the best expert journalist will have a hard time covering a study the way it should be covered.

@Lucas: In the previous post, http://scienceblogs.com/clock/2009/06/the_ethics_of_the_quote.php I noted that a quote can serve as a hook if done well.

Speed is important in blogging, but also the premium on personality: it is about "what I think about this" coupled with "this is where I come from", much more than "this is a summary of news" coupled with the "View From Nowhere".

Oh, sorry, I thought everyone knew what MSM stands for ;-)

@Ed: I noted that there are feed aggregators on blogs AND their equivalents in MSM. There are splogs on blogs AND their equivalents in MSM. There are anti-science op-eds on blogs AND in the MSM. If I ignore all of those, looking only for coverage, looking at what is left, blogs win hands down.

As for serious ("dull, drab, boring, over-detailed") blogs that target peers as their audience, their MSM equivalent is the "News And Views"-style front matter in scientific journals, rather than popular media. And, given more space, they usually do quite well in that realm as well. Though I wonder why their posts get submitted to OpenLab, as they are clearly not targeting the lay audience.

Also, I almost certainly have a sampling bias here, focusing only on the coverage of PLoS ONE papers. It would be interesting to compare it to the coverage of Nature, Cell, Science, JAMA, Lancet, PLoS Biology and PLoS Medicine, for example, to see if the pattern holds.

The sampling bias is a fair point, and I can better understand your conclusions based on that data set.


As for serious ("dull, drab, boring, over-detailed") blogs that target peers as audience,

No, that's not what I was talking about. When I mentioned posts that are "unintelligible, boring or painful to read", that wasn't synonymous with "serious" posts that "target peers as audience". I would describe many technically minded blogs (indeed, many News and Views and some papers) as a joy to read even if they aren't aimed at a lay audience, just as I have seen many posts written for a general audience that are just not very good, accurate though they may be. Even adjusting for the confounder of technical language, I think it's ludicrous to claim that blogs aren't represented in the "poor-to-average" category. That is massive confirmation bias coming to the fore.

I suspect that this is something we're just going to have to agree to disagree on because I think we have very different opinions on the importance of writing style. And also this is very difficult to discuss without providing case studies, and I really don't want to go there.

I think Ed's criticisms are spot on. Blogs can be great, but as Carl Zimmer said (I am going to have to start paying him if I use this quote one more time), blogs are software, and how they are used varies greatly from person to person. Case in point: like Ed, I have twice been a judge of OpenLab, and one of the first major tasks is weeding out all the crap that gets submitted. There are some fantastic science blogs out there that deserve wider attention than they receive, but, at the same time, there are many science blogs that fall into the "poor to average" category (and that's to say nothing of blogs that regularly post paper abstracts without commentary or recycle press releases like the churnalism we all can't stand).

Nor do I agree that 100% of "traditional media" science coverage falls into the poor-to-average category, especially since I have been working hard to pitch articles in newspapers and magazines. (I should hate to think that my work declines in quality based upon where it is published!) Yes, there is plenty of poor coverage out there, but there are also some great sci writers (Tom Levenson, Carl Zimmer, Rebecca Skloot, David Dobbs, Deborah Blum, Ed, etc. - and notice how they all blog, too), so the idea that all traditional science coverage sucks is a howler.

If anything, I am glad to see more crossover between blogs and traditional outlets. Good sci journalists are blogging and science bloggers are starting to break into journalism. That can only be a good thing - if we can foster the best in both maybe we can start to change things.

Yes, there is plenty of poor coverage out there, but there are also some great sci writers (Tom Levenson, Carl Zimmer, Rebecca Skloot, David Dobbs, Deborah Blum, Ed, etc. - and notice how they all blog, too), so the idea that all traditional science coverage sucks is a howler.

I don't think that point is seriously in contention. Referring back to the original post:

good MSM coverage happens only for the Big Papers, those that are covered everywhere (like Nigersaurus, Maiacetus, Green Sahara, Darwinius....). And those excellent MSM articles are written by well-known science writers in top media outlets (e.g., the Guardian, the London Times, the NY Times....). Traditional media pull out their Big Guns only for a rare Super-Paper, while bloggers cover everything well, big or small - whatever their interest or area of expertise. Finally, I should point out that the websites of magazines and of public radio (NPR or PRI) tend to cover science better than the websites of newspapers and TV outlets.

"Traditional media" can provide "traditional science coverage"; they just don't have the space or the people to cover as large a fraction of the scientific literature as we'd like. And, I expect, their attention is probably skewed towards the most visible stories, which might work against a relatively new journal group like PLoS (compared to Science or Nature or The Lancet).

Correct, Blake. As I stated in the post (that paragraph in particular), when it is a Huge Story from one of the most popular journals, then both MSM and bloggers do a great job. But when I look at dozens of smaller stories each week, bloggers win hands-down, because they keep doing it well while the MSM drops the ball. And I think the reasons the MSM drops the ball are:

- they do not pick Big Guns to cover smaller stories
- they do not give sufficient space to cover the story correctly
- the writers waste precious space on journalistic gimmicks (like the inverted pyramid, the lede, and quotes).

Of course, when given lots of space, go ahead and use quotes or whatever you like, but with only two inches at your disposal, a quote is a huge waste of space.

Anyway, the entire post is really a setup for the last section which is the Important Point, which nobody's commenting on yet, instead arguing small points from the early parts of the post....ah well.

Apropos the final section of the post: I had a comment brewing about The Daily Show making full-length interview videos available on its website, but now I've forgotten the point I was going to make.

Another factor - one that got me most recently a couple of days ago, when I was trying to write up a paper on a development in quantum computing - is that it's extremely hard to follow references when they're all behind paywalls. It is impossible for most organisations to afford subscriptions to even a tiny fraction of the journals, and far too expensive to buy papers ad hoc.

For academic bloggers, MSM journalists who work for organisations with well-stocked libraries, or those who can physically get to (and have the time to visit) large physical libraries, it may be possible to do the sort of background reading that's required.

For the rest of us, it's impossible.

By Rupert Goodwins (not verified) on 08 Jun 2010 #permalink

A minor clarification for the last paragraph of #2 "non-coverage":

EurekAlert and ScienceDaily are worlds apart, actually. EA is a subscription service for its members, who are allowed to put press releases and multimedia out to registered science reporters under embargo, often a week before ScienceDaily's unthinking RSS-bots get their hooks into it. (This, in turn, leads to the fight over who is a registered science reporter and why can't bloggers see EurekAlert under embargo, which I will hereby avoid.)

When the embargo lifts, EA publishes to the public, which in turn feeds ScienceDaily, RedOrbit, ScienceBlog and myriad other aggregators. (Here again, the futurity.org kerfuffle raises its head, which I'll also avoid.)

I think that calling EurekAlert "non-coverage" is confusing distribution with reporting. Or possibly you're calling university-written stories that have been fact-checked by their subjects useless crap, and I hope you're not doing that.

As for your disappointment, expressed in comment 10, that nobody's addressing the meat of your analysis, I'll take the challenge, Bora.

I think you're mostly wrong, especially about the importance of a good lede and quotes. Journalistic style has been honed by hundreds of years of experience and the discipline of limited space and time. Yes, news writing may seem hackneyed in its kabuki stylings at times, but it does things that way because the writers and editors aren't taking for granted that every reader is motivated to plow through and carefully parse all 3,000 words of every piece, an assumption that many bloggers seem to labor under. ("But I've got 650 uniques a day!" they'll protest.)

As we move toward mobile reading, smaller screens, and an ever-noisier, aggregator-driven environment for science news, a good lede and a crisp organization (yes, even an inverted pyramid if need be) become even MORE important than they used to be. A science blogger would be smart to learn how to use them.