Over at National Geographic’s other blog network, Ed Yong offers a guide for scientists talking to journalists. Like everything Ed writes about scientists and journalists, this was immediately re-tweeted by 5000 people calling it a must-read. Also like nearly everything Ed writes about scientists and journalists, some of it kind of rubbed me the wrong way.
Given our respective areas of interest, there’s approximately zero chance that Ed will ever contact me to ask my opinion of a paper, but I want to push back on a few of these, anyway. Because, in the end, scientists aren’t responding in what Ed considers a sub-optimal manner because we’re clueless idiots; there are good reasons why we do the things we do, just as there are good reasons why he does the things he does.
My main gripe is with the list of things Ed doesn’t find useful, particularly:
1) A summary of what the paper showed. Around half of comments start with this. I don’t need it. I already know what the paper showed, or will have talked to someone else who explained it.
This bugs me, because I’m a person who will start off any comments I make with a capsule summary of the paper. Not because I think journalists are clueless– quite the contrary. If I thought the person who contacted me for a comment was an idiot, I would pretend not to have received the message. Life is too short to talk to idiots when I don’t have to.
I start comments on a research paper with a summary because I think that’s useful information. For that matter, I start off my written reviews of NSF proposals with a short summary, and those are going to people who I know damn well can and will read the proposals themselves. But I think it’s important, because what I see as the crucial elements of a particular proposal or paper may not be the same as what somebody else sees as the crucial elements. And that can lead to confusion unless both parties in the conversation know exactly what the other thinks they’re talking about. The best way to avoid that confusion is to tell the other person what you’re talking about, even if skimming it takes a few extra seconds of their time.
And if there is a difference between my summary and the summary from whoever else explained it, that’s useful information. It tells you that either there’s some disagreement about the real importance of a paper, or at worst that one of the people you’ve talked to is wrong, and their comments should be discounted or at least de-emphasized.
I also find listing summaries as “not useful” kind of annoying when paired with these items from the “useful” list:
4) The past. The paper will probably have a paragraph that crushes decades of earlier work. You will know all of that; I won’t have had time to read it. So tell me: How does this new discovery fit with what has come before? Is this based on a radical new approach? A long slog? Something that people in the field have been anticipating? Is it just reinventing the wheel?
5) The present. Have other people found similar things? Contradictory things? Is this one of many such studies, or something truly original? If this is, say, a new approach to fighting malaria, how does it compare to all the other approaches people are investigating?
So, I need to explain the past and present context of the work, which journalists won’t have time to read up on themselves, without summarizing what the paper under discussion did. That’s more or less impossible.
Indeed, most of the summary information I would provide would be to answer exactly these points. When I say “The authors of this paper did A, B, and C,” it’s usually in order to make a contrast between this paper and that one, whose authors did X, Y, and Z.
I’m also a little dubious about the push for really strongly worded comments, especially when combined with “I’m not here to present people with the totality of your views, so what you say will almost certainly end up getting cut and distilled.” Yes, I understand that you’re not going to deliberately misrepresent anything, but knowing that anything I send will be “cut and distilled” makes it seem more important, not less, that I include qualifications and equivocations, so as to reduce the possibility of accidental misrepresentation.
And there’s also the fact that most research isn’t all that superlative, even the stuff that gets picked up by journalists. The vast majority of research papers are… interesting. There’s very little utter crap published, and there are very few world-changing breakthroughs. Basic statistics, if nothing else, ought to tell you that much. If you’re getting wishy-washy quotes from scientists, full of “boilerplate adjectives,” it’s probably because their actual opinion of the work in question is kind of equivocal. Particularly if they’re not the ones who did it.
There’s only so much you can inflate your comments about a fairly cool but not utterly amazing new paper for media consumption. At some point, it becomes deceptive. If you actually want the real opinions of experts on new research that’s being published, you need to understand that a lot of the time, their real opinion is kind of boring. If you only want strong adjectives, call the PI’s university press office.
Ultimately, though, what rubs me the wrong way about this is a sense that the ways scientists talk to journalists are wasting the journalists’ time, which they would otherwise be using to do Important Journalism. Which bugs me because, ultimately, each party in one of these conversations is doing the other a favor by having the conversation at all. Yes, journalists are helping to boost the profile of scientists and science in general, but they’re also taking up time that the scientists could be using to do Important Science. And really, the concrete benefit of being quoted in the newspaper is pretty minimal for a scientist, almost certainly not worth the time it takes to provide the material for the quote. And the loss to a journalist of having to skim the occasional summary paragraph or complaint about citations seems pretty minimal, especially since they’re getting these comments for nothing.
Now, this is not to say that I’m totally against everything Ed has to say– a bunch of the advice is good, though I’m a little boggled that some of it is necessary. And on those occasions when I’ve been contacted by journalists looking for comment, I give my opinion as clearly and forthrightly as I can. And I’ll keep a few of these points in mind in the future, particularly the idea of being more specific about what future research is needed.
But as usual in these sorts of discussions, I think there needs to be a little more recognition that both sides have reasons for doing what they do the way they do. Far too many online discussions of scientists and journalists overemphasize the things that scientists “need” to do differently to accommodate journalists, with little reciprocity. This is just another lecture about what one side needs to do, when what we really need is a more mutual dialogue.