Should scientists wait for peer review before releasing their results?

In case you're reading Cognitive Daily on RSS, or don't always check out the links to the (generally very good) seedmagazine.com articles in the column just to the right of this blog, I wanted to point you to an article I wrote for them about peer review.

One of the things we like to do on Cognitive Daily is take a closer look at psychology articles in the mainstream media, to see if the media reporting on research matches up with the actual data. But we've been frustrated recently on several occasions because the actual data hadn't been published yet.

The answer isn't as simple as you might think: just saying "scientists shouldn't release data until it's been peer-reviewed" could mean that important, timely data gets ignored. But if the data hasn't been published in a scholarly form, other scientists have no way of informing the public when claims have been overblown. The article explores the issue in depth, asking whether there are occasions when it makes sense to skip peer review and go directly to the press.

Read the whole thing.

This wouldn't be as much of an issue if the peer review process wasn't so dang slow.

I don't see any way around that or anything, but still...it takes forever to go from writing up a study to seeing it in print. That's one of the things I like about this web site...you have a mini-study every week and get results right away. Nice.

The media needs to be more responsible in this area too. As long as it's clear that findings are preliminary, there's little danger in reporting them. But if an unpublished finding is presented as fact, bad science can easily slip through the cracks and mislead the public.

You're absolutely right, Phronk -- it's an excruciatingly slow process, often slower than even the publishers want to admit. They'll claim they have an 8-week review cycle, for example, but it will be six months or more before an author gets even the first round of reviewer feedback.

One thing we try to do on every Casual Friday is to make it clear that the study is nonscientific, so people understand that this is fun "research," but it shouldn't be taken too seriously. Sometimes I find that mainstream media reports aren't so careful, and don't place new results in the proper context. It's very rare indeed that they even mention the peer review process (and then, typically only in the context of its failings) -- or make distinctions between research that has or has not been reviewed.

And, as Drew Westen says in the article, he can be very cautious in how he reports his results in his peer-reviewed report, but that doesn't prevent the press from trumping up his findings.

Check out arXiv.
Most results in most sub-fields in physics, astronomy and parts of mathematics and computer science go straight on the web. Some fraction of authors wait for peer review (but not publication), but a lot of papers go straight on the web, especially if it is a hot result in a hot sub-field.
Been working well for over a decade.

I've been working on a post about the reverse situation, something I see happening more and more often these days: publishing a hypothesis before generating a single data point to test it. Any thoughts?

The arXiv is great. I generally publish preprints of all my papers there before they hit a conference proceeding or journal. I especially like the arXiv because it takes down some of the barriers for researchers. If I believe in a result I can publish it even if a journal might be hesitant to. This is especially important in mathematics, because often what determines acceptance to a journal is the reviewers' belief that your results are significant, but since mathematicians can be pretty juvenile, that often depends on whether they think your field is capable of producing significant results or if it's all just 'trivial' compared to some other field.

Personally, I like the idea of publishing and then reviewing, something made possible by the Internet. You publish an article, at which point people can review it and leave comments for all to read. The author can then choose how to respond to those comments. Making peer review non-anonymous would also help: although anonymity seems intended to encourage honest review, in practice it shields reviewers from responsibility, since they aren't held accountable for bad reviews.

Couldn't peer review and "going to the press" happen simultaneously? For example, have papers published in public journals and magazines that are read by scientists and enthusiasts alike. One would only need to print a disclaimer saying that the study has yet to be peer-reviewed.

Then the discussions emerging from the ensuing review could also be made public, and people could get some more insight into the subject of the research by parsing conflicting viewpoints.

(But I'm not a scientist, and I don't write research papers - so take my two cents with a grain of salt.)

Steinn,

Yes, I've seen arXiv. I wonder if part of the reason it works so well in physics and math is that media-types don't believe they have a shot at understanding the research, so it mostly stays under the radar. I think if psychologists attempted such a thing, all sorts of marginal results would be hyped all over the place -- including things that would never make it past peer review.

Eitan,

Your idea is intriguing, but I think in practice it wouldn't work very well. It's really a different task, making an article accessible to the lay public, compared to including enough data and analysis to impress scholars. Plus, I'm not sure how much patience the public -- or even scholars reading a bit out of field -- would have for reading unreviewed results that didn't pan out. Nature is popular because scientists can get a glimpse of what's going on in other fields, and interested laypeople can see some real research. But still it's a lot of work reading stuff like that, and so we have publications like SEED and Scientific American (and Cognitive Daily!).

From our perspective at CogDaily, it doesn't make sense for us to go through the work of making a technical article accessible if there's a chance it won't make it through peer review.

Perhaps it would be effective simply to require writers to identify their work as not having been peer reviewed, along with a brief explanation of why they feel it's necessary to release the data without such review? If people are asked to think twice before publishing this way, it might reduce the number of cases in which the released information turns out to be faulty.

But then I'm not a scientist, just a budding psychologist. I am young, hear me peep.

It certainly is the case that peer review is much slower than the field (and media hype) needs it to be, but another feature of peer review makes it even more of an obstacle to releasing results quickly: reviewers often think that while a result is very interesting, "wouldn't it be cool if you also did X analysis and published it with this result?" That is, "The paper you've submitted is well-done and makes a contribution, but I'd like it better if you also did a bunch more work on the data to show something of particular interest to my research (or cited these three papers more often)." Then they hold back the paper for that analysis, instead of publishing it for what it is and waiting for part 2. On the other hand, there are so many submitted manuscripts out there that there has to be some way of filtering them, and editors seem happy to kick a paper back and let the authors make it stronger. By the time the data in my field (cognitive neuroimaging) is published, it is usually a few years old, and you can tell this from the methods section, based on the technology that was used to collect it.

It seems to me that today's climate (and I catch myself doing this when I write reviews) is to judge the work on what it could have shown rather than what it was designed to show. Perhaps pre-releasing the punchline to the media would be a good thing in this case, but there's a lot of sub-par, confounded work out there too that I would hate to see get hyped before its time. Regardless of the media, peer review is the main way that junior researchers are evaluated for their promise in faculty positions and as principal investigators on grants, at least in psychology, so it is unlikely to go away. You can have a slew of publications on the internet, in conference proceedings, invited reviews, etc., but if you don't have at least a handful of peer-reviewed articles, you aren't likely to make the shortlist. And because most journals will not publish a paper without confirmation that it is not available anywhere else, that pretty much perpetuates the peer-review-before-general-release scenario.

By Corby Dale (not verified) on 13 Mar 2006 #permalink

I would add to what Steinn said that you should also check out what Paul Ginsparg (arXiv founder) has to say about this:
http://people.ccmr.cornell.edu/~ginsparg/blurb/pg96unesco.html#who

It doesn't really address the issue of going to the press with results, but I think he is dead on about peer review more generally.

Also, you said in the comments:
"I wonder if part of the reason it works so well in physics and math is that media-types don't believe they have a shot at understanding the research, so it mostly stays under the radar."

I don't think that's right mostly because media-types _do_ report on physics and the arXiv doesn't seem to have a huge impact on it either way. I would like to think it's because science reporters know better and are waiting for the results to be peer reviewed, but I think it's more likely that they simply rely much more heavily on university press releases and on big journals like Science and Nature for their material. My guess is that that wouldn't change drastically if psychologists also started taking advantage of a service like the arXiv since journalists aren't getting much from the original articles - peer reviewed or not.