John Mashey made a comment over at Deltoid that was so incisive, Tim Lambert decided to turn it into a post of its own. In the comment/post, Mashey outlines several steps scientists can take to pressure reporters to do a better job reporting science. Indeed, the list is a little daunting. Among other things, he recommends that you
Pick a modest handful of reporters with which to build up rapport, even if it takes a couple years, and half a dozen emails. In particular, try to take good care of any reporter who actually replies (non-negatively) to an email.
That's a pretty big commitment. A more common response to bad reporting is something like this one from T. Ryan Gregory:
This is not science, it is not news, and it is not something I want cluttering up my aggregator feed. So long, LiveScience, it's been a slice.
It pains me to say it, but in light of other complaints from scientists in the blogosphere (e.g., here and here), I am actually beginning to wonder if, despite the efforts of some excellent writers, science reporting on the whole does more harm than good. I despise the ivory tower approach to academia (hence this blog), but in my opinion misinformation is worse than missing information.
You might note that the article that sparked Gregory's tirade is one I blogged about a couple days ago.
Gregory dismissed the entire report as unfounded speculation. My approach was to consider which aspects of the report were plausible, and focus on them. But I agree in principle -- any article that causes thoughtful readers to throw up their hands in disbelief is probably not reported very well.
I also agree with Gregory that it's probably not a good idea to subscribe to publications that regularly skew or misreport science. But I don't agree that science reporting overall does more harm than good. I do agree that it can do a lot of harm, though. This "report," for example, offers no sources for its bizarre speculation that looking at a spinning nude silhouette can tell you something about your personality. What's more, the article doesn't list an author, so we can't even follow Mashey's guidelines for encouraging better science reporting.
I think a lot of learning on Cognitive Daily occurs on our Casual Fridays posts. The research isn't scientifically rigorous, but readers regularly engage in discussions of what would be required to make a particular argument. They learn that science isn't simple or easy, that answering complex questions can't be done in a single step.
Science doesn't have to be perfect to be informative; to me, the worst science reporting is the kind that suggests that science is infallible, a set of facts. The next-worst is the type that assumes there are "two sides" to everything (what's the "other side" to the claim that the Earth revolves around the Sun? Or to the Stroop Effect, for that matter?).
There are those who suggest that scientists need to do the reporting on science. That may be part of the solution, but do we really want to toss out great reporters like Carl Zimmer in order to get rid of the bad ones?
To me, a more important reform in science reporting would be to stop treating science like news. Does the general public really need to know what's going on at the messy, bleeding edge of science? The major journals put out splashy press releases hyping the "latest" discoveries, many of which won't hold up under the scrutiny of additional study, or turn out to be much more modest than their promoters suggest. This PR is easy for lazy editors to plug into "science" sections without further thought. And it's precisely what we're likely to get from scientists promoting their own research, which is why I don't think putting scientists in charge of science reporting will necessarily solve things.
It would be better if science journalists could give readers a sense of the current state of a whole line of research, instead of focusing on the latest splashy result. The larger question remains, though: How do we get from here to there?
Mashey's suggestions are a good start. I happen to think a site like BPR3 will also help. Any other ideas?
What if scientists who are interested in this cause offered access to journal articles on the pertinent subjects, so that the general public could read what is really going on without having to spend a fortune on access to peer-reviewed research?
The silhouette illusion is doubly impertinent, since they also cropped the original source credit from the images... And I commented there that the claimed relation to the left/right hemispheres is BS (well, I was more polite ;-)), but that comment disappeared.
There are also dangerous claims that how you see the snake illusion reveals "stress." Well, sometimes I just feel like giving up...
Actually I don't think it's that big of a commitment -- more of a guide to working with the media. I would add, find out if your institution has a public affairs office. They should be able to make it easier to work with media (especially if they are former reporters themselves, rather than development/marketing types).
As for TRG's fit about LiveScience, don't bother reading everything in your RSS feeds... mine is down to just 718 unread articles.
Full-time science writers at newspapers are becoming an endangered species, so part-timers, amateurs, and blogs are going to fill the gap. Some of what they write is going to be crap. Most likely, a lot of what people write about politics, sports, or Halloween cookie recipes is also crap. Get over it; some of it is good, and scientists can help make it better if they are willing to engage.
You can pull up the ladder to the ivory tower, but that won't stop people writing about you.
Science reporting is geared towards entertainment.
What will make the general audience say "wow"?
This is a good thing, not a bad thing, as it lets scientists (and funders) know which directions are more useful/fascinating.
It is not necessary for scientists to defer to public opinion, but it is not a terrible thing to keep an ear to the rest of the population either.
I agree that science reporting is geared towards entertainment, but it is NOT a good thing, because many science articles have a lot of influence over popular notions of health and science, and yet so many of them misinform the public. Reporters do a brief phone interview and then make the rest up to appeal to public interest.
Take, for instance, the appalling BBC article about the effects of socialization on exercise. The study reported that rats housed together reaped the benefits of exercise faster than rats housed alone. Simple, right? The news article, however, claimed that *humans* should run with partners to get *any* benefit from exercise at all, accompanied by a picture of two women running through the park.
Nowhere did the BBC article mention that it was the lack of normal daily social interaction that was increasing glucocorticoid levels and thus delaying the positive neurological benefits of daily exercise (that's delaying, not eliminating), nor did they really even mention stress hormones. Their entire article was based on the assumption that the rats had to exercise in pairs to do well. That was not stated in the scientific publication.
When I e-mailed the author of the article pointing out these mistakes and explaining why it was so harmful to their readers to misrepresent the science and mislead the public into thinking running solo is unhealthy, I received a brief reply that they had reported the information correctly and that I should go read the original publication (and thanks for taking an interest in the BBC). I had already read the paper, and reiterated in more detail what I had said before in a second e-mail. Their reply to that was a curt "well of COURSE the rats were housed together. All rats are housed." [slap palm on forehead here]
If that doesn't tell you just how ignorant, clueless, and careless these publications are in choosing their science journalists, then I don't know what will. I doubt that most, if any, actually read the scientific publications they report on.
You seem to have accidentally truncated the last paragraph of my post, thereby giving the impression that I have little confidence in science writers. Please allow me to correct this oversight by quoting the entire section.
"It pains me to say it, but in light of other complaints from scientists in the blogosphere (e.g., here and here), I am actually beginning to wonder if, despite the efforts of some excellent writers, science reporting on the whole does more harm than good. I despise the ivory tower approach to academia (hence this blog), but in my opinion misinformation is worse than missing information. PZ suggests that scientists are just going to have to handle much of the reporting themselves -- maybe, but we also have other things to do. Science writers -- I mean, the good ones (and I know there are lots of you out there and that you are just as frustrated as I am) -- what can be done about all this?"
The truncation was deliberate, but it wasn't my intention to suggest you have little confidence in science writers, so if people were getting that impression based on the excerpt I chose, I apologize.
That said, even your complete quote still seems to suggest that while you do have confidence in many science writers, you're not very optimistic about science writers in the aggregate.
I'd say I'm slightly more optimistic, but not overwhelmingly so.
One of the points I was trying to make was that turning over science reporting to scientists (and I don't think I ever implied this is what you were recommending) has drawbacks as well.
Breaking the assumption that publication of a new study is always newsworthy may be hard, but it would be a promising first step. The problem is that bad science reporting is sometimes very successful popular journalism.
USC Annenberg recently hosted a panel on the topic and participants made one suggestion similar to John Mashey's: scientists should develop relationships with reporters.
When I blogged this at Science Progress, I also emphasized their point that science bloggers, and particularly scientists who are bloggers, can play a pivotal role in developing pipelines of quality information for the mainstream media.
I know nothing about the inner workings of science writing, or whether the following process has already been implemented and simply isn't working efficiently, but it seems that a peer-review process between scientists and science writers should be in place, just like the one scientists go through when they submit articles to journals.
I assume this problem of inaccurate science reporting is the very reason students are warned not to cite newspapers, magazines, and other non-peer-reviewed sources when writing academic papers.
It occurs to me that one could come up with a "rating system" for science writing: something like a category code followed by a number. For example (and I'm not advocating this as the system, just giving an example of what I'm envisaging):
'U' entertainment
'S' speculative
'R' report of results
'D' discussion of data
'T' discussion of theory
The numbers would indicate the sophistication of the level of presentation and would have different meanings based upon the different categories.
As an example: an article labeled U0 could mention something vaguely related to a paper (or some research) without citing sources, but it couldn't be classified U1 unless the paper/sources were mentioned, and it couldn't be U2 unless source was in a peer-reviewed journal and some of the context of the study was discussed.
At the upper end of the scale would be the primary sources themselves. (At the highest end would be primary sources AFTER a large number of corroborating studies.)
The idea has some pretty obvious drawbacks too, but I think something along those lines could be helpful...
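To make the proposal concrete, here is a minimal sketch (in Python, with hypothetical names and criteria) of how a category-plus-level rating might be represented, following the U0/U1/U2 example above:

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    """The proposed category codes (the letters are the commenter's)."""
    ENTERTAINMENT = "U"
    SPECULATIVE = "S"
    RESULTS_REPORT = "R"
    DATA_DISCUSSION = "D"
    THEORY_DISCUSSION = "T"

@dataclass
class Rating:
    category: Category
    level: int  # sophistication of presentation; meaning depends on category

    def __str__(self) -> str:
        return f"{self.category.value}{self.level}"

def rate_entertainment(cites_sources: bool,
                       peer_reviewed_source: bool,
                       discusses_context: bool) -> Rating:
    """Assign a level within the 'U' category, per the U0/U1/U2 example
    above. The exact criteria here are hypothetical."""
    if cites_sources and peer_reviewed_source and discusses_context:
        return Rating(Category.ENTERTAINMENT, 2)
    if cites_sources:
        return Rating(Category.ENTERTAINMENT, 1)
    return Rating(Category.ENTERTAINMENT, 0)

# The spinning-silhouette piece discussed above cites no sources at all:
print(rate_entertainment(False, False, False))  # prints "U0"
```

Encoding it this way would at least turn each level into a checklist of concrete, verifiable requirements rather than a subjective impression.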
What gets my respect is admitting mistakes promptly and correcting them. That would be a very short list, if anyone's keeping one.
I recall when the AP Wire sent out a misstatement about sea level rise. Our local SF Chronicle science writer picked it up, and his article had the mistaken number in the headline and several times in the text. I emailed him; he had to contact his editors and webmaster repeatedly, because their first response was to fix just the headline, not the text, and then just one place in the text. Foot-dragging at best.
I recall something similar with New Scientist some years ago. I got a reply saying they were not a science magazine, they were an entertainment magazine, and no correction on whatever they'd gotten wrong.
Dunno.
Nitpicking is one of the fundamental primate social behaviors. We ought to all be better at doing it and receiving it gracefully.
As a recipient of the benefits of useful science, I have to remind the paid scientists who's footing the bill for them to pursue their dreams: All of us who didn't get our PhDs in ...-ology.
We need you to stay out there on the cutting edge, finding new and better ways of looking at the universe, but we also need you to tell us what you're finding. It is part of your "social contract" to make sure that the work you do gets reported in the most accurate, yet digestible, way possible. One of the most effective ways to do that is to poke your heads out of the towers every now and then and tell us what you're doing and what you plan on doing. If you can't translate your work into terms the non-expert can understand, then you need to find someone who can. You need to; we can't do it, because we don't know what the result should be.
Unfortunately, there cannot be a one-to-one mapping between what you have learned and what gets reported. Even if what's reported is highly accurate, when it gets retold, it will be altered. Oh, well.
But scientists do not get a free ride on the backs of the masses, any more than any other profession does. If you don't think enough of the "Muggles" to want to make some effort to ensure we understand what you're doing and why you should be doing it, bye-bye grant money.
How funny -- I was just thinking about this question of how science gets reported in the media, after seeing yet another "health" story on the evening news, and I wondered what Scienceblogs would have on this subject. One of the really bad consequences of the "press release" method of science reporting, in an environment where most people know very little about the scientific method at all, is to promote positivism and agnosticism among many people. By that, I mean that the constant promotion of the isolated conclusion (which, as other people have pointed out, is usually overhyped and may not be replicated) trains people in the idea that science is just a collection of isolated packets of information. No larger patterns, no overall trends or theories (much less laws) -- nope, it's all a bunch of disconnected pieces about red wine and colon cancer, or rats running solo or in packs. And, at the same time, people get trained in making huge conclusions based off of way too little data.
And when these news stories reveal "conclusions" that contradict the shoddy reporting that had previously held sway (see the Chicago Tribune this week: "red wine causes colon cancer; but wait, I thought it was good for me!"), it leads to agnosticism: people who don't have training in science tend to conclude that you can't ever know anything, or reach even relative certainty, since all of these "authoritative studies" reported on TV seem to contradict each other all the time.
We could come up with a Top 10 (or more?) of the things that drive us batty about the way science gets reported. One of mine would have to be the way that differences between men and women always get chalked up to some supposed "hard wiring": "Men are just more visual than women." "Men are hard-wired to be drawn to bigger breasts." Etc., etc.
Others?
This is a bit of a tangent, but if you're looking for *good* science writing, the Knight Science Journalism Fellowship Program has an online "tracker" that highlights good science reporting:
http://ksjtracker.mit.edu/
Also (somewhat more related to the discussion), according to their website, they launched the tracker because "We believe that if science reporters and editors have convenient and timely access to the work of peers across the country, they can better evaluate and improve their own performance."
See! Efforts to improve science writing!
Oh, and they welcome reader comments, too.
Linda, what an insightful comment.
Must be because you are a woman :-)
You mention another Top 10 problem: the fragmentation of information disseminated, making it frequently confusing and contradictory.
Re: top-10 lists.
T. Ryan Gregory has an excellent one:
Anatomy of a bad science story
Alvaro,
yeah, women are just naturally more insightful and intuitive ;o)
And of course we're clueless and flummoxed at technical stuff, like how to use HTML italics codes properly.
And I thought I remembered that "bad science story" post -- it's still pretty sharp.
Why can't the original authors of these studies issue press releases? Corporations and universities do this all the time. Hopefully they can put their conclusions into context so that those of us who don't have access to all the journals in the world can make sense of their findings.
Lonestarslp,
I don't know about other disciplines, but in psychology, the APA does have a website that contains what are essentially press releases concerning research in psychology that are readable by the general public:
http://www.apa.org/releases/
The problem with science-by-press-release is that scientists can abuse the system. It's in their self-interest to get noticed by the MSM, so some scientists overhype their results. I've seen plenty of scientists put out press releases on unreviewed or lightly reviewed conference presentations, or before their results are published. See, for example, my article on Seedmagazine.com.
Beyond the problems of the initial report is what happens when it goes out on a newswire. Small-city newspapers pick up science stories and cut them. If the story is written as an inverted pyramid and the editors lop off the end, all that's lost is the detail.
However, if the paper shortens the story by editing out sentences or replacing words, they can make utter mishmash of a story that was decent when the science reporter wrote it.
"John Mashey made a comment over at Deltoid that was so incisive, Tim Lambert decided to turn it into a post of its own. In the comment/post, Mashey outlines several steps scientists can take to pressure reporters to do a better job reporting science."
Thanks for the kind words, but a couple of minor tweaks:
1) None of that advice was limited to *scientists*, just people who like to see good science reporting (or, actually, good reporting in general). To some extent, scientifically literate people who are not (or not currently) research scientists *need* to help out, because we need scientists to spend lots of time doing science, not battling every dumb OpEd that comes along. If the efforts are spread around, it's not so much of a problem, i.e., as in computing, parallel processing works.
2) I think saying "pressure" doesn't quite capture the flavor; I'd say it's more like trying to help reporters & editors be more informed and do a better job, knowing they have a difficult problem being experts at everything, have deadlines, and have abilities that follow the normal distribution.