Is Peer Review Really This Problematic?

So there's an article in The Scientist, which a fair number of people have gotten all het up about, that criticizes peer review. I'll state for the record that I agree with the article that the review process needs to be much faster, and that more people need to be reviewing (the burden is too great for some people). But I'm puzzled by this (italics mine):

Twenty years ago, David Kaplan of the Case Western Reserve University had a manuscript rejected, and with it came what he calls a "ridiculous" comment. "The comment was essentially that I should do an x-ray crystallography of the molecule before my study could be published," he recalls, but the study was not about structure. The x-ray crystallography results, therefore, "had nothing to do with that," he says. To him, the reviewer was making a completely unreasonable request to find an excuse to reject the paper.

Kaplan says these sorts of manuscript criticisms are a major problem with the current peer review system, particularly as it's employed by higher-impact journals. Theoretically, peer review should "help [authors] make their manuscript better," he says, but in reality, the cutthroat attitude that pervades the system results in ludicrous rejections for personal reasons--if the reviewer feels that the paper threatens his or her own research or contradicts his or her beliefs, for example--or simply for convenience, since top journals get too many submissions and it's easier to just reject a paper than spend the time to improve it. Regardless of the motivation, the result is the same, and it's a "problem," Kaplan says, "that can very quickly become censorship."

"It's become adversarial," agrees molecular biologist Keith Yamamoto of the University of California, San Francisco, who co-chaired the National Institutes of Health 2008 working group to revamp peer review at the agency. With the competition for shrinking funds and the ever-pervasive "publish or perish" mindset of science, "peer review has slipped into a situation in which reviewers seem to take the attitude that they are police, and if they find [a flaw in the paper], they can reject it from publication."

In my experience, both as an author being reviewed and as a reviewer, I haven't encountered this attitude (the only times I have, the reviewers signed their names to the review, so anonymous review can't be blamed).

Is it really that bad? Because I don't know anyone who is that petty. And neither I nor my colleagues are saints, not by a long shot. I can see someone getting hung up on a method, technique, or analysis, and appearing unreasonable, especially when he uses that approach in his own work: if he didn't feel strongly about it, he probably wouldn't use it. But that's not tanking an article for competitive advantage. That's offering an honest criticism (if perhaps an unreasonable and fucking stupid one). I also won't pretend to claim that reviewers are objective; of course we're not. But that isn't necessarily 'competitive', just human.

It seems to me that this could be either an urban legend or an isolated incident blown out of proportion. Alternatively, this is kinda like watching porn: lots of people have done so, but no one will actually admit it. There are issues with peer review, but I don't think spiking papers is one of them.

I've only had one really terrible review (in terms of the review's quality, not what it said about my work), and I fully attribute that to the paper being in an *extremely* small field (fewer than 10 working individuals, even counting grad students). As a result, the reviewer made several errors that were beyond stupid if you're familiar with either the field or the species - "Why don't you sequence the bacteria's mitochondrial genome?"-level stupid - but were somewhat understandable if the paper had been handed to someone who had never read a paper on, well, animals or organismal biology.

Honestly, though, I like peer review. It gives me a chance to yell at much more senior people for not doing their statistics correctly.

Spiking a paper with a completely unfair referee comment does occasionally happen; I've been a victim once. In that case I laid out my idea, which was based on six or seven references from several different groups, and the referee argued from authority (no references cited) that "everybody knows" the idea is bunk. (I later resubmitted that paper, and it did get published.) But in the circles where I work, it's a rare thing. I have recommended rejection of papers where I could point to a significant error in methodology, or when the data presented do not support the conclusions.

However, very few in my field ever consider submitting to GlamourMags. The potential for abuse, and rewards for the abuser who gets away with it, are much higher there than for the society publications I usually submit to. I have no evidence on either side of the question where GlamourMags are concerned.

There also is a problem with review of proposals, where unlike journal articles the author often does not have an opportunity to rebut the referees. Part of the problem is that your proposal often competes with proposals from precisely the people who would be most knowledgeable about your proposal, so availability of non-conflicted knowledgeable reviewers is not assured. NASA has been known to use mail-in reviewers who have submitted proposals to the same program (although they are careful to keep such people off the panels), and they are certainly aware of the appearance of a conflict of interest because they go to great lengths to insist that it is not a conflict of interest. Another part is that some agencies (NASA in particular) only show you summaries of the reviews, not the actual reviews (by contrast, NSF lets you see the actual reviews, which you can respond to when you submit your next proposal).

By Eric Lund (not verified) on 25 Aug 2010

My question is: why didn't the researcher just address the X-ray crystallography issue in the resubmission cover letter and explain why it was not appropriate? I've done that multiple times for entomological manuscripts. In the cover letter, you explain your response to each reviewer comment: accept it and explain how you addressed the problem, or explain why you disagree with the comment.

By the bug guy (not verified) on 25 Aug 2010

"peer review has slipped into a situation in which reviewers seem to take the attitude that they are police, and if they find [a flaw in the paper], they can reject it from publication."

Which is, of course, utter bollocks. Referees don't reject papers; editors do.

I feel a rant coming on. Unfortunately, I have to clean the parrots out tonight.

I have had this happen once. After Reviewer #1 gave a mildly positive review, Reviewer #2 stated in a one-sentence review that he would never believe our findings because they ran counter to what he was taught in medical school. Which was very surprising, given that we 1) were confirming the findings of several other laboratories and 2) were studying the physiological phenomenon in rats, where it may very well differ from humans (making his objection irrelevant to the findings). Basically, he said we were lying. After I spoke with the section editor, he apologized and said he would no longer be using that reviewer.

It's hard to track because a devious reviewer can find a way to spike a manuscript while sounding reasonable. But ultimately it's the editor who makes the decision to reject or accept a paper. If the journal has good ones, they should know which reviewers' comments to ignore, which ones need attention, and when to send a paper out for another opinion.

I think one positive addition to the process is the tandem publication of the manuscript and the reviewers' comments. It does force reviewers to take the job more seriously.

Is it really that bad, because I don't know anyone who is that petty.
From the article:

if the reviewer feels that the paper threatens his or her own research

A Federal judge just blocked Obama's executive order allowing embryonic stem cell lines to be used in federally funded research. The suit came about because two researchers working with adult stem cells claimed the availability of embryonic stem cell funding would redirect money away from their adult stem cell research.

Blocking entire new areas of research with the potential for curing many types of disease so you don't lose some Federal funding? That's pretty damn petty. I don't think researchers like these would hesitate for a second to spike a competitor's paper using anything they could think of.

By lynxreign (not verified) on 25 Aug 2010

In my experience, there are often review comments that are just plain wrong. I don't hesitate to rebut them in a resubmission letter. There are often comments that betray the reviewer's inexperience with the field. Those you can also deal with in a response.

But one of the most frequent review errors that I see comes from a review saying "you should also do X, Y, and/or Z." Where X, Y, and Z are things that one could indeed do, but that don't have any relation to the question being addressed.

I suspect that this stems partly from a neglected item in the training of grad students: the proper grounds for critical examination of a paper. In seminar classes where students are asked to critically evaluate a paper, the criticisms often start off with a list of things that the author didn't do. When you ask the students how the lack of those things affects the conclusions of the study, it often turns out that they haven't asked themselves that question. X, Y, or Z would be relevant to a *different* paper, one that asked a related but different question.

As journals make page limits tighter, it is more important than ever to write papers that are tightly focused: ask a question, figure out how to answer it, present exactly what is needed to answer it, and get the heck off the stage. But that only works if editors hold reviewers to the same standard. (For example, I wonder what reviewers would say if they were told that for everything they recommend adding, they must identify something they would not mind seeing removed.)

By ecologist (not verified) on 25 Aug 2010

My favorite review, which the editor passed on to me, read something like this: "I am sorry to say I have lost the manuscript you sent me for review. However, it was very well written, came to well-supported conclusions, is appropriate for the Journal, and is an important contribution to the field. I recommend publication without modification."

By Jim Thomerson (not verified) on 25 Aug 2010

Reviewers spiking papers and then rushing out a brief communication with the same findings absolutely happens; it happened to me during my PhD. You might like to think that scientists can't be petty or mean-spirited, but they're only human.

Eric @2,
I feel your pain. I also had a paper rejected because, according to the reviewer, "everybody knows" that the biological phenomenon occurred by a certain mechanism. Actually, everybody had assumed it worked by that mechanism; nobody had studied it. My submission describing the novel mechanism involved was rejected from the trophy journal and subsequently published in a (very nice but not as flashy) trade journal.

My post-doc mentor had a long-running feud with another researcher in the field, and on several occasions that other researcher asked for a lot more data for my papers than was reasonable. Sometimes the cover letter and rebuttal will convince the editor you don't need all that stuff. Other times you have to do it or publish elsewhere. So yes, sometimes peer-review is that problematic.

I think the tell here is probably the reference to "top journals". There is a big difference between someone keeping you out of Science or Nature and someone keeping you from publishing at all in a peer-reviewed journal. Sorry, but I have zero sympathy for anyone complaining about the practices of GlamourMags. You sign up for that game, and you are playing by those rules.

The whole "publish or perish" system plus peer review is like forcing businessmen or companies to submit their trade secrets to an agency where gatekeepers snatch the innovative ideas and use them before anybody else can. How can junior authors trace reviewers and hold them accountable for idea piracy under the current system? After multiple events like this happened to me, I changed my career.

By X academic (not verified) on 26 Aug 2010

From Humanities-world...

Commenters here who encourage discussions with editors are making one point I would have made. On my very first peer-reviewed manuscript, reviewers asked for an extended explanation of one point that I just didn't want to elaborate on. So I talked to the editor, and we agreed that the solution was simply to take out the sentence that prompted the request.

Most of us who review in my field (Composition, Rhetoric, Writing) sign our reviews. I review for two journals and always give authors my name and contact information. More than once, I've had useful and interesting conversations with authors that were gratifying to me and useful for their work. Seems like that's what's supposed to happen, doesn't it?

Of course it's different for us in many ways, not the least of which is that timeliness isn't at a premium in the humanities in the same way it is for science.

When I teach graduate seminars, the first thing I do on the first day of class is ban the word "should" from the classroom. That way, students don't waste time talking about what authors should or shouldn't have done. If there's something they'd like to see researched, they can research it. If somebody said something they don't like, they can answer it.

if they find [a flaw in the paper], they can reject it from publication

It's the freaking job of reviewers to point out flaws, and if severe enough, to recommend rejection.

It's the freaking job of editors to think about the reviews they obtain and to distinguish real flaws from trivial, irrelevant, or clueless reviewers' comments.

There will always be bad, lazy, and stupid peer reviews. It's the editors who just swallow the crap along with the cream who cheese me off.

By Sven DiMilo (not verified) on 26 Aug 2010

I have recently ranted on a related topic here: http://ittakes30.wordpress.com/2010/07/08/raising-the-standard-high/ I believe that graduate students are taught to nitpick a paper, but not taught to ask themselves whether the flaws (there are always flaws!) are actually important to the core message of the paper. It takes a while to get over this initial training; it's easy to ask someone else to do more work, and hey, a crystal structure will always improve a paper, right?

I agree with a couple of the previous commenters that it's the editor's job not to let the reviewer get away with this kind of kneejerk reaction. In my experience editors usually try hard to apply good judgement to this question. If someone really wants to kill your paper (in the high-profile journals at least), the best method is to say that it's technically excellent but just really not that interesting. This is much harder for an editor to defend against without help from the other reviewers.

If you're reviewing a paper and you like it, please make sure you explain why. Give just as much detail about why you like it and why it's important as you would if you didn't like it and were arguing for rejection. This is the only way for an editor to balance your enthusiasm against another reviewer's negative comments.

Accepted papers are the problem - not rejected ones

In this discussion I read a lot about problems with rejected papers. The real problem is the papers that are accepted for publication. Let me give you some examples from Nature Methods:

1) Nature Methods publishes that it is possible to measure expression profiles from small quantities of cells (Nature Methods 7, 619–621, 2010) using the Helicos sequencer. The authors did not compare their results to any of the published methods that work with comparable amounts of RNA. Therefore, what did we learn from this paper? Helicos can do what other companies have been doing for years. Good for the company, useless for science.

2) Nature Methods publishes that it is possible to measure expression profiles from individual embryos (Nature Methods 6, 377–382, 2009) using the SOLiD sequencer. The authors did not compare their results to any of the published methods that work with comparable amounts of RNA; they did not even test how reproducible their method is, or whether it gives results comparable to standard expression-profiling methods; and they tried several data-analysis methods, finally picking the one that gave the results best fitting their hypothesis. Therefore, what did we learn from this paper? SOLiD (Applied Biosystems, Life Technologies) can do what other companies have been doing for years. Good for the company, useless for science.

3) Nature Methods publishes that transcription factor binding sites can be characterized by sequencing (ChIP-Seq) from small quantities of cells (Nature Methods 7, 47–49, 2010) using the Helicos sequencer. The authors did not compare their results to any of the published methods that work with comparable amounts of ChIPed DNA (after, e.g., WTA amplification). Therefore, what did we learn from this paper? Helicos can do what other companies have been doing for years. Good for the company, useless for science.

Okay, one could say that we do not have to care about these publications; let's just forget them. Here comes the real problem:

Each of these methods provides poor-quality data, as you will see once you re-analyze their raw data. Imagine somebody who has developed a method that provides accurate data to answer any of the questions dealt with in the publications mentioned above. Editors will not care to publish the new method anymore, since the headlines are already taken by the flawed papers.

The true problem of peer review is that reviewers accept too many flawed papers. These papers are often published by authors working at commercial companies. Authors and companies have strong financial interests, because peer-reviewed publications are outstanding selling points.

We need reviewers and editors who are capable and willing to scrutinize publications carefully for their scientific relevance.

By Otto Blau (not verified) on 29 Aug 2010