How to report scientific research to a general audience

Today I'm going to be working with some students in Greta's course "Psychology Goes to the Movies" to help them write CogDaily-style reports on scholarly research. With any luck, you'll see their reports here this summer! I thought CogDaily readers might be interested in some of the principles I'll be sharing with Greta's students, so I'm reprinting them below. If you have any other suggestions for them or other science writers, feel free to add them in the comments section.

1. Find interesting research
This may seem like an obvious step, but there are a couple of problems with the way scientific research is often reported today. First of all, interesting doesn't necessarily mean new. The major science journals like to make a big splash when their latest issue comes out. After all, the more people hear of them, the more likely they are to subscribe (or ask their library to subscribe). But the general public doesn't spend every day poring over press releases to find the most up-to-the-second research. What's interesting to the public is research that's relevant to their lives -- or research that's just too cool to ignore. This might be a study that was done months or even years ago. If they haven't heard of it, it's news to them. The most popular article ever on Cognitive Daily reported on research that was two years old.

2. Show why it's interesting first
A big mistake is to assume the only reason a study is interesting is its practical applications -- there are lots of reasons a research study can be interesting. For example, it might offer insight into a place or group that's very foreign to readers. It might show why an everyday problem is more complicated than most people believe. It might question a common assumption. That's not to say that research with practical applications isn't interesting as well, just that there's no formula for deciding what's interesting about an article. Often your first instinct about an article is right. Whatever you decide on, that's where you should begin. Even though journal articles follow a predictable pattern, with an introduction, methods, results, and discussion, that doesn't mean you can't start with methods in your write-up. Your goal isn't to pass peer review; it's to hook readers and show them why science is exciting.

But don't become obsessed with making research "seem" interesting:

3. Let the research speak for itself
In my two years of writing for Cognitive Daily, I've always been impressed with our readers' hunger for details about the research process. People don't just want to know the researchers' conclusions; they want to understand for themselves how the research was done. When you write about a scientific study, avoid the temptation to gloss over details about methods, to omit the data the study produced, or to ignore other relevant studies the authors mention in their introduction. For example, take a look at this post, where readers asked for even more details than I originally presented.

On the other hand:

4. Don't include details that are only relevant to scientists
A journal article is written primarily for a scientific audience. It's designed to be complete enough that other researchers can repeat the identical study and get similar results. Needless to say, most readers won't be doing that, so you can omit details from your report such as how much the volunteer participants were paid, what brand of computer was used to present stimuli, or the precise percentage of the visual field a stimulus occupied, unless these things are directly related to the point of the study.

This is closely related to the next point:

5. Don't use scientific jargon
I can't emphasize this point enough. Most of your readers are not scientists, and although scientists do like to read popular accounts of scientific work, they are a secondary audience -- and they can always read the original article if they want clarification. I try to stay away from even the most basic scientific terms, like hypothesis, confidence interval, stimuli, ANOVA, and the abbreviations that researchers like to pepper through their work: p, r, F, t, and so on. Researchers also like to create their own abbreviations for use in just one paper! Avoid these at all costs; instead, explain the concept behind the abbreviation (or just spell out the word!). If you must use a bit of jargon (generally this is only necessary if you'll be talking about it repeatedly within an article), explain what you're talking about, and if possible, include a link with more information.

6. Tell a story
Although many researchers are excellent writers, they are bound by a formulaic approach to reporting on their research. As I mentioned before, in nearly every scientific field, journal articles start with a literature review, then provide methods, results, and a discussion (not necessarily in that order). If a study contains four separate experiments, these sections are repeated four times. Scientists often are required to state a hypothesis at the beginning of their report, then restate it at the end and indicate whether the data support the hypothesis. You are not subject to those limitations. What you want to do is grab your readers' interest, then lead them through the research in a way that satisfies their curiosity. You might start off with an anecdote, or you might just show a picture of one of the stimuli from the study, or even do as researchers must do and review some of the relevant research. The trick is to start off with something interesting but leave a few questions unanswered, so that the reader wants to follow along with you all the way to the end. Sometimes that may mean omitting some of the research reported in the article you're writing about. You might skip whole experiments, or report only part of the results. As long as you're reporting accurately, there's no rule saying you have to summarize the entire article you're discussing.

7. Visuals need the same treatment as words
Visuals can go a long way toward making a complex subject understandable, but you need to be careful with them: remember, just like the words in a scientific article, the images were created for other scientists. Consider this graph from a journal article we reported on in CogDaily:

[Figure: the original graph from the journal article, with bars grouped by "portrayed" vs. "non-portrayed" emotions]

Seems straightforward enough (I cropped off a second graph which included the vertical axis label, "observer ratings"). Observers are rating emotions portrayed by actors. The question the graph addresses is whether viewers can recognize the intended emotion when the image is displayed upside-down. But there are some problems with this figure. First of all, how can an "emotion portrayed" be either "portrayed" or "non-portrayed"? The text in the article makes it clear that observers rate each video for each emotion, whether or not that's what the actor intended. But the x-axis label here confuses that. I fixed it in the version I used in the article:

[Figure: the redrawn graph used in the CogDaily article, with a clarified x-axis label, error bars removed, and color added]

I also removed the error bars (this is controversial even among CogDaily readers) to simplify the graph even more, and added color in case I need to discuss the graph in detail ("red" and "yellow" are clearer in a discussion than "dark hashmarks" and "dotted hashmarks"). Then I explain the figure in the text:

This time, the results were less consistent: While viewers were still able to recognize anger, joy, sadness, and love, the two other emotions -- fear and disgust -- were rated as highly for other emotions as they were for the intended emotions.

Explaining your figures is crucial -- you have to not only show the image, but show readers why it's important.

Equally important -- don't use images that don't help tell your story. There's no need to ornament your story with pictures just to make it "look pretty." Don't include a picture of a rat just because the study discusses rat behavior. But a photo of a rat using the experimental apparatus might be helpful.
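
If you're curious how such a redraw might be put together, here's a minimal sketch using Python and matplotlib. This is just one way to do it, and every number below is an invented stand-in for illustration -- these are not the study's actual ratings.

```python
# Sketch of a general-audience redraw of a grouped bar chart: no error
# bars, named colors, plain-language labels. All numbers are invented
# stand-ins for illustration, not the study's actual data.
import matplotlib.pyplot as plt
import numpy as np

emotions = ["anger", "joy", "sadness", "love", "fear", "disgust"]
portrayed = [4.1, 4.3, 3.9, 3.8, 2.6, 2.4]       # hypothetical mean ratings
not_portrayed = [1.5, 1.7, 1.6, 1.8, 2.3, 2.2]   # hypothetical mean ratings

x = np.arange(len(emotions))
width = 0.38

fig, ax = plt.subplots()
ax.bar(x - width / 2, portrayed, width, color="red",
       label="emotion portrayed")
ax.bar(x + width / 2, not_portrayed, width, color="yellow",
       label="emotion not portrayed")

ax.set_xticks(x)
ax.set_xticklabels(emotions)
ax.set_ylabel("observer ratings")
ax.legend()
plt.show()
```

The particular tool doesn't matter -- any charting package will do. The point is to re-plot the data in your readers' vocabulary rather than the journal's.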

8. Keep it concise
Your report on scientific research should be significantly shorter than the original. I like to keep CogDaily articles under 1,000 words. If a journal article is especially complex, I'll split it into two parts or omit some of the results. Equally important is keeping your language concise. Don't use a bigger word when a smaller one will do. The research itself is complicated enough without making your language complicated, too.

9. Cite your sources
Give a full citation of the original source at the end of your report (I can't tell you how aggravating it is that the mainstream media doesn't do this). Link to any other resources you include in your post. If you borrow an image, link to the source for that. And obviously, don't plagiarize. If you borrow someone's words, put them in quotes, and let readers know where they came from.

10. Don't overstate your case
I shouldn't have to say this, but I see it so often in science reporting that it bears repeating: Make sure you get the science right. Don't mistake correlation for causation. Don't overgeneralize results. You can take your cue from the way the researchers themselves qualify their own work. That said, don't make it boring. You can express more controversial possibilities by posing them as questions, or by qualifying them with "might" or "may." Even so, be careful. An article titled "Researchers discover a new cure for cancer?" is still misleading.

Also, make sure you're reporting on work that has been peer-reviewed. Conference presentations usually aren't reviewed, and sometimes researchers will post unreviewed research on their web sites. If you're not sure, check the policies of the journal or site where the research was published.

11. Have fun!
Tell a joke or two. If a study reminds you of a funny anecdote, tell that. Readers like to see your personality conveyed through your writing. But don't overdo it -- remember, the point is to tell a story about science, not to perform a stand-up routine.


Thanks! A useful guide, which I'm sure I'll refer to in the future. Most of the points you make are probably common sense, but that doesn't mean they are common knowledge - and a refresher is always handy in any case.

This is a must-read for all science bloggers. I try to follow most of this advice in my posts, but it is not easy! I think I'll print this one out and stick it somewhere close to the computer as a reminder and a guide for every time I try to write a blog post on actual research.

Following up on the debate at the 2007 North Carolina Science Blogging Conference, I'd like to share my top three bits of advice for students and science writers.

1. Get the science right.
2. Get the science right.
3. Get the science right.

In other words, accuracy, accuracy, accuracy. All the rest come after this. If the report isn't accurate then the other things don't matter.

Thanks for posting reasons 4-13! Now if only you had moved #10 up to the top. :-)

Larry,

Good point -- I'm only really explaining half the process; the other half involves your steps 1, 2, and 3. And a lifetime of science education! After two years working on Cognitive Daily, I still rely on Greta to make sure I've got the science right.

Great, useful information.

I'd like to emphasize one point: find out which catchphrases are used and abused in the area you're reporting on, and DON'T use them.

For example, there are no missing links in paleontology. Or if there are in some sense, the use of the phrase is usually very misleading, yet science reporters seem to use it whenever possible.

And so on.

Great post, thanks.

By writerdddd (not verified) on 01 Feb 2007

Seven hours later I come back to report that I am paralyzed. I meant to write about a study today. And I keep thinking about all this and am not sure any more if the study is worth reporting (although it is in the media) or if I can do it right...

Thanks for the tips! I am just starting out with science writing, and I've found the hardest part for me is walking the line between keeping things so complicated that people can't understand them and simplifying them so much that it's condescending.

Above is an unfortunately typical recommendation: that data graphics intended as feed-grain for talks presented to nonscientists be stripped of (among other things) error bars.

The better option is, of course, showing raw data along with the means (or other markers of central tendency). This would not needlessly complicate the presentation, and it would allow even the untutored to evaluate consistency or variability in the data - presumably, key parameters in a study of human behavioral responses, the example shown.

If the study is a good one, the raw data will underscore that the conclusions presented sit atop a substantial observational foundation. As Dr. Tufte and many others have pointed out, any audience that can understand a typical sports or business page is going to be underwhelmed by the data-paucity of a typical data-summary bar graph. A dozen dinky little earthtone bars? You fellas got grant support for that?

In short, graphs of raw data clarify how observations are accrued and interpreted: the very mechanics of science. Scientific process - every bit as much as the conclusions - should be the central goal of communicating science to a lay audience. Otherwise, the growing fears and suspicions that science is merely an empty belief system are reinforced.
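
For concreteness, here's a rough sketch of the kind of plot I mean, in Python with matplotlib; the ratings are randomly generated stand-ins, not data from any actual study:

```python
# Sketch: plot every raw observation with the group mean overlaid,
# instead of a bar of means alone. Ratings are randomly generated
# stand-ins on a 1-5 scale, purely for illustration.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
emotions = ["anger", "joy", "sadness", "love", "fear", "disgust"]

fig, ax = plt.subplots()
for i, emotion in enumerate(emotions):
    ratings = rng.normal(loc=3.5, scale=0.8, size=20).clip(1, 5)
    jitter = rng.uniform(-0.15, 0.15, size=ratings.size)  # spread points sideways
    ax.plot(i + jitter, ratings, "o", color="gray", alpha=0.5, markersize=4)
    ax.plot([i - 0.25, i + 0.25], [ratings.mean()] * 2,
            color="red", linewidth=2)  # group mean

ax.set_xticks(range(len(emotions)))
ax.set_xticklabels(emotions)
ax.set_ylabel("observer ratings")
plt.show()
```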

Note: This comment was crossposted to a thread at Edward Tufte's web site:
http://scienceblogs.com/cognitivedaily/2007/02/how_to_report_scientific…

Alexey:

I responded over there, but I assume moderation takes a while, so I'll post what I said there as a response here too.

I'm fully in support of including raw data where possible. I'd submit that the example I give here does present raw data. How does it get more raw than that? In fact, error bars are a step removed from raw data, which is another reason not to include them.

The most important reason was covered in a comment on the "17" thread: This blog is intended for a general audience, and error bars can be problematic. Are we talking about confidence intervals, standard errors, or what? Lay readers do not understand the difference, and the end result is that they make incorrect assumptions based on an incomplete understanding of the underlying statistics.

Take a look at this post for a good explanation of the difficulties in presenting error bars.

Dave, I understand the difference between different types of error bars*, and I think it's easy enough to explain: for a 95% CI, you tell the audience that the bars are a measure of confidence, similar to the margin of error in political polls: there's a 1 in 20 chance that the bars do not include the answer you'd get if you did the experiment on everybody. For estimators of population variance, you tell the audience that the bars represent the estimated variability in the population. Certainly there's no need to explain more than that, and of course it would be a mistake to mix different kinds of error bars in one presentation. I'm assuming a typical, reasonably educated "science talk" audience here: the sorts of folks who might read the science page in the local newspaper once a week, and the sports and/or business stats daily.

*It's great that you linked to the really excellent GraphPad documentation. Last week we bought three Prism licenses for members of my lab, mainly because of the superb manuals; I tend to use Igor for my own work, but the learning curve for that package is pretty steep, and its documentation assumes statistical sophistication that is often not present among our students. Prism is the best general stat graphics package that I've ever seen for a training environment.

"it would be a mistake to mix different kinds of error bars in one presentation."

The problem is, sometimes the articles we report on use standard error, and sometimes they use confidence intervals. So regular readers of CogDaily would see both types; hence the confusion. If we explained what they were every time we used them, that would get tiring for readers. So we've settled on this compromise, where we do not use error bars on graphs, and we indicate significance in the text of our reports. If a scientist would like to see the error bars, he or she can look up the original article.
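
For anyone who wants to see why mixing the two conventions is confusing, here's a quick sketch with simulated, made-up data: the same three means wear noticeably different bars depending on whether you plot standard errors or 95% confidence intervals.

```python
# Sketch: identical simulated means plotted twice, once with standard-error
# bars and once with ~95% confidence-interval bars, to show how different
# the two conventions look. Made-up data, purely for illustration.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
samples = [rng.normal(loc=m, scale=1.0, size=15) for m in (3.0, 3.6, 4.2)]

means = np.array([s.mean() for s in samples])
sems = np.array([s.std(ddof=1) / np.sqrt(s.size) for s in samples])
cis = 1.96 * sems  # rough 95% CI half-width (normal approximation)

x = np.arange(len(samples))
fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.bar(x, means, yerr=sems, capsize=4)
ax1.set_title("standard error bars")
ax2.bar(x, means, yerr=cis, capsize=4)
ax2.set_title("95% confidence interval bars")
plt.show()
```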

Very good post; I am about to undertake my first independent piece of research and found these comments very useful.

Thanks

Joe

By Joe Collins (not verified) on 10 Nov 2008