What's the point of peer review?

Once again, I'm going to "get meta" on the recent paper on blogs as a channel of scientific communication that I mentioned in my last post. Here, the larger question I'd like to consider is how peer review -- the back and forth between authors and reviewers, mediated (and perhaps even refereed) by journal editors -- does, could, and perhaps should play out.

Prefacing his post about the paper, Bora writes:

First, let me get the Conflict Of Interest out of the way. I am on the Editorial Board of the Journal of Science Communication. I helped the journal find reviewers for this particular manuscript. And I have reviewed it myself. Wanting to see this journal be the best it can be, I was somewhat dismayed that the paper was published despite not being revised in any way that reflects a response to any of my criticisms I voiced in my review.

Bora's post, in other words, drew heavily on comments he wrote for the author of the paper to consider (and, presumably, to take into account in her revision of the manuscript) before it was published.

Since, as it turns out, the published version of the paper contains no revisions addressing Bora's criticisms, Bora went ahead and made those criticisms part of the (now public) discussion of the published paper. He still endorses those criticisms, so he chooses to share them with the larger audience the paper has now that it has been published.

Later, in a comment on his own post, he adds:

Nobody can publish my comments without my consent, but I can do it myself.

I was not the editor of the paper in the sense people usually think of. I did not communicate with other reviewers, I did not see other reviewers' notes, I do not know if they suggested acceptance, acceptance with minor/major revisions, or rejection, and I was not a part of the decision to accept or not. I suggested major revisions, then a second round of review. That did not happen, so I am free to post my comments - both those I made at the time and additional comments.

I also have a little bit of gray area here. This being the first manuscript I did for JCOM I did not fully understand my role. I was asked to suggest some names for reviewers (I have no idea who reviewed in the end). I did not understand that this is where my job ends, so I sent in my review as well. It is, thus, an unofficial review. They (editors) read it, but they did not need to use my comments if they did not want to do that. This was more like sharing my thoughts with them. But the comments being unofficial makes them even more 'mine' and free to post online.

Since Bora was the "editor" of the paper rather than an official referee, it's not clear whether the journal editors overseeing the fate of this submission actually forwarded Bora's critiques to the author, or whether they forwarded the critiques but indicated that they wouldn't count. Myself, if I were the author of the manuscript, I think I'd want more prepublication feedback, not less, on the theory that it would help me produce a stronger paper.

So maybe the official referees, as a group, didn't raise the points Bora did. Maybe the majority of the reviewers, however many there were, gave effusively positive reviews saying, "This is fabulous as it stands, and should be published without further ado!"

I am in a position to tell you that not all of the substantive critiques raised by people asked to peer review the manuscript were adequately addressed (at least for values of "adequate" that the referees would endorse) before the paper was published. I was one of the referees in question.

I doubt that any of this is terribly unusual, but it brings into focus some interesting questions about the role of peer review. Is it intended merely as a filter against the most atrocious methodological crimes, the most blatant crackpottery? As a gatekeeper for the literature, keeping out submissions that do not meet the very highest standards (or, alternatively, that do not satisfy the reviewers' biases)? As a preview of the kind of engagement from other scholars the author can expect the submission to elicit if published as-is?

Do the journal editor, the peer reviewers, and the author in a case like this have a shared understanding of what peer review is supposed to accomplish, or of how reviewer comments ought to be used, whether by the author or by the journal editor?

In cases where the outcome suggests that there wasn't agreement among journal editor, peer reviewers, and author about how to use referee reports, how ought the parties to deal with this disagreement? For example, if one has labored over comments only to have them ignored, how should one respond? Indulge me in a quick poll:

If we treat referee reports as a preview of coming attractions, a microcosm of the kinds of criticisms -- and disagreements -- other scholars are likely to have, then being the "hard referee" whose judgment is overruled may be all in a day's work. Indeed, if peer review is meant primarily to alert the author of a manuscript to the sorts of battles she is likely to have to wage to defend her work, maybe it isn't even so much of a problem if journal editors end up substituting their (positive) judgment for the (negative) judgments of the peer reviewers whose reviews they have solicited. (There, though, I think it would be better if peer reviewers were given the straight story on how their reviews are, or are not, to be used.)

On the other hand, if getting one's manuscript published in a peer reviewed journal is taken to be a mark of something else -- of having produced officially recognized Reasonable Knowledge Claims Based on Sound Methodology -- and if the constructive engagement between referees and author in the process of peer review is the mechanism that is somehow taken to be responsible for conferring this special status on the published work that results -- then maybe we have cause to worry about authors, or journal editors, who don't actually engage with the criticisms the referees raise. (Please note that "engaging" with criticisms does not always mean accepting them. Sometimes engagement involves mounting a counterargument -- but this requires acknowledging that reasonable people might raise the criticisms you are answering.)

What kind of duties does the manuscript author have, as far as dealing with issues raised in referee reports?

What kind of duty do the journal editors have to ensure that the manuscript author actually engages with those issues, and that this engagement leaves its trace in the version of the manuscript that is eventually published?


The one thing that surprises me most about your comments here is that you didn't see the comments and the recommendations from other reviewers. I've only reviewed for a few different journals, but they're from 3 different publishers and I've always been able to see the other reviewer's comments. I can't imagine reviewing without getting this feedback and I'd be very hesitant to review for any journal that didn't give that feedback.

I see the core of the reviewing process as making sure that the authors did what they said they did, without obvious and correctable methodological flaws; that they clearly report the results; and that they present an interpretation that is plausible given their results. I can strongly disagree with the interpretation, but it needs to be plausible. I don't see this precluding "coming attractions" papers. I'm very forgiving of manuscripts that list a long string of assumptions that are left unproven. If there are unstated major assumptions, I want them stated in a revision.

I don't think authors have any "duty" to engage with the reviewers' comments. If they want to be published in a peer reviewed journal, they don't have any choice but to play by the rules of each journal. The rules are defined by the editors, and the editors have the duty to make sure both authors and reviewers know the rules, and then to make decisions based on those rules.

What kind of duties does the manuscript author have, as far as dealing with issues raised in referee reports?

The author has a duty, even an obligation, to address every single issue raised by the referees. This could be done through incorporation of the comment into the analysis/manuscript, or rebutting the issue in the cover letter accompanying the revised submission.

What kind of duty do the journal editors have to ensure that the manuscript author actually engages with those issues, and that this engagement leaves its trace in the version of the manuscript that is eventually published?

Ideally, the editor(s) should ensure that the author(s) deal with all relevant comments, within reason. Of course, it is also editorial privilege to ignore a reviewer's concern or not accept an author's rebuttal. I've had both happen to me.

The one thing that surprises me most about your comments here is that you didn't see the comments and the recommendations from other reviewers. I've only reviewed for a few different journals, but they're from 3 different publishers and I've always been able to see the other reviewer's comments. I can't imagine reviewing without getting this feedback and I'd be very hesitant to review for any journal that didn't give that feedback.

This must be very field and journal dependent. I've reviewed for a number of journals now (perhaps a dozen, covering the whole range of readership and reputation), and almost never see the other reviewers' reviews.

I have the same question as bsci. Is it that you did not see their comments, that you don't feel it's relevant to the discussion, or that you don't want to address them for privacy issues? I have never reviewed for a journal and then not known the outcome of the review (not because I've refused certain journals or anything, I've just never had this happen). And so far I've had each review happen as I'd expect, and if anything have felt appreciated because the editor seemed to take what I said seriously. But I'm pretty junior and haven't done a ton of reviews yet. There's PLENTY of time for this to happen!

I have always operated under the impression that publication in a peer-reviewed journal constitutes an endorsement that the paper in question is reasonable, complete, and methodologically sound. I get this impression from the phrasing of the rating scales given to me as a reviewer by several journals, which state as much directly.

If a paper that I'd said needed major revision and another round of reviews was accepted over my objections, my reaction would depend on the nature of my objections. If I thought the paper was muddled and confusing, but not incorrect as far as I could tell, I'd roll my eyes at the low writing standards tolerated by some journals and move on. If I thought something done in the paper was incorrect or unsound, then I'd be quite unhappy with the journal and its editors, and take the situation under advisement with regard to trusting their content in the future. This has never happened to me directly, although I've seen papers that I'd cut to ribbons for one journal published elsewhere more or less unaltered.

Recently, I had the opposite experience, where a paper that looked perfectly fine to me (modulo one omitted bit that was important, but could be easily added) was rejected after being savaged by the other reviewer. Did I miss something important, or was the other reviewer being unnecessarily harsh and picky? Even after reading the other review, I still don't know. Most worrying.

I'm kind of curious now, what have you published in peer-reviewed literature about online science communication?
It surprised me that you'd been selected to review this paper because I was not aware that this was really your field. Though I was obviously aware you had good insights on this topic (from your blog writings), I'd never really asked myself how this intersected with your more 'official'/'professional'/'scholarly' contributions.

(I'm not trying to question whether you were an appropriate choice for a reviewer here. This just got me thinking about how you use blogging in ways that synergize with other formats.)

As to your questions - I honestly don't have enough experience to judge. I'm under the vague impression you HAVE to do anything reasonable requested by reviewers (and if it's unreasonable, it had better be something like: "reviewer #3 has asked that we demonstrate cold fusion. While we enthusiastically agree that this would be a fascinating future direction, we regretfully felt it was not addressable within the scope of this study, or the current understanding of objective reality (see references #19, 22, 23, 29, 37, 38, 39 for discussions of implausibility)").
This might be true for the journals I've seen submissions go into (mostly at the JBC type level of respectable if not glamorous), or it might be that my PIs have tended to be on the cautious side, or it might be that I'm getting *grant* reviewers' criticisms mixed up with *journal* reviewers' criticisms (my understanding is that, in the current NIH funding climate, you pretty much HAVE to address reviewer #3's petty insanity point-by-point, no matter how obnoxious).
I honestly didn't know that you could ever actually get something published in the same journal after it came back with "MAJOR revisions required" if it was not feasible to completely rerun the entire study.

I'm even less sure what the journal editor's responsibilities entail.

Although, as a tangential point, sometimes I wish editors would do a better job of ensuring clean-up post peer review.

Sometimes, when I'm reading a paper, there will be these figures (sometimes in the supplemental material) that appear to have nothing to do with the thesis of the paper (sometimes mostly negative results). While I think it's fine for this stuff to be included, often the results aren't well integrated into the write-up. I've seen two general causes for this:
A) the writers were sloppy in their writeup, but felt the need to pack in this stuff because somebody spent a lot of time on it and/or
B) some crazy reviewer asked for it, the authors did the experiments under duress, and wrote them up as a sort of "see, so THERE!"
I wish journal editors would do a better job making sure things made sense to those of us who DIDN'T see the peer review behind the scenes stuff. I hate the "and now for something comPLETEly different!" feeling in the middle of a paper.

In other words, from my past experience, my bias is to thinking of editors relying too heavily on whatever batshit crazy thing reviewer #3 wants... which makes this whole specific situation particularly bizarre.

This is an interesting discussion for me as a graduate student. I hope to one day publish my work, and I would certainly take reviewers' comments into consideration when writing something, but there's also a point where it is *my* work, and I may have to say that what I did is correct no matter what a reviewer says... I am speaking from nearly complete ignorance of what reviewers might say - I have heard comments from the post-docs, usually about additional experiments that they feel aren't necessary, but I really don't know what an author gets to see from the reviewers.

From the point of view of publishing those comments - if they are valid, or if the reviewer feels they are still valid or were not properly addressed, I think that's a reasonable part of the discussion. If I, as an author, did not get those comments, and all I got were good reviews - I would say so, but then I would attempt also to address the negative comments. To point out that I didn't include that extra experiment at the advice of my PI, or I hadn't thought of checking against the results of Foo, B. et al. - but given the data I *do* have, here's my conclusion, and I would certainly take the new comments into consideration for my next paper.

As a reviewer (something else I've not done), I would probably not make negative comments if I didn't feel they were valid. If they were then ignored and the paper published, I might then assume that other reviewers did not notice the same problems (or did not think them as major) - and at this point in my career, being a graduate student, I would be thinking that perhaps I should revisit what I said and check over my comments. But if I found them to still be valid, then I could publish them as a critical review of the paper - and wouldn't *that* be a reasonable thing to do?

I don't understand why everyone is so surprised here. Low-quality journals tend to have low-quality articles. I think that the authors, reviewers and editors all know this, and how the manuscript is treated is based on this knowledge. If you generated a kick-ass study with cool results, you would not send it to a crappy journal. If you have something crappy and dubious which for some reason you feel you need to publish, then you send it to a crappy journal. Most reviewers reviewing papers for these journals will likely not take those reviews as seriously and will not judge the paper as harshly as if it were going to a better journal. Likewise, the editor's standard of tolerance for the author not quite addressing all of the reviewers' concerns is likely to be lower in these journals. As a reader, one knows to expect very different things from crappy journals and high-quality ones. By high-quality I don't necessarily mean glamour magz, but simply respectable, solid journals in a field.

Thus, the quality of peer review varies with the quality of the journal, and by extension so does the quality of the articles.

No surprise here.

I've always been able to see the other reviewer's comments.

As Andy @3 says, this is probably field dependent. My experience is that I get to see the other reviewer's comments if the paper goes to a second (or later) round of reviews, but not if the editor makes a final decision to either accept or reject the manuscript. I can see not getting this feedback for a rejected manuscript: it's moot at that point because the editor has agreed with my recommendation (or that of the other reviewer) not to publish the paper in anything resembling its current form. But it would be nice to see, in the case of an accepted paper, how the authors responded to my comments. And it's always good to see the other reviewer's comments, since often (s)he will have noticed things that I overlooked.

I'm under the vague impression you HAVE to do anything reasonable requested from reviewers

You should do anything reasonable that the reviewers request, but sometimes the editor will let it slide, and sometimes you can make a strong argument that the reviewer is mistaken. It doesn't have to rise to the level of "demonstrate cold fusion" for you to make the rebuttal stick. I have successfully argued in at least one case that the reviewer was misinterpreting a result from one of the references I had cited (in that case the other reviewer turned out to be the first author of the reference in question--that journal allows reviewers to identify themselves after the paper is accepted). Most often, you will have to revise some paragraph to remove the ambiguity that tripped up the reviewer.

By Eric Lund (not verified) on 17 Mar 2010 #permalink

This is why I like reviewing for journals that make the peer review comments public (Atmospheric Chemistry and Physics is the one that is most common in my field). All review comments are open, and a reader of the paper can look back at the review comments it received and how the authors responded. I think it helps transparency.

Technically, the decision for publication in most journals is entirely up to the editor. The role of reviewers is to advise the editor, but an editor can overrule a reviewer. I've seen several cases where one reviewer will say that X is a problem and other reviewers will say it's not. I remember one exchange where an editor asked one reviewer what another reviewer was smoking, since the other reviewer was so obviously off-base. I've sometimes been asked as a third or fourth reviewer to adjudicate between reviewers that disagreed. Eventually, the editor decides whether the reviewer's complaints should lead to an acceptance or rejection.

Nobody who understands it thinks peer-review is perfect. It is just a way to get additional feedback to improve or reject the paper. There is so much variation between reviewers that to imagine it leading to a certification of the correctness of the methodology and data analysis is asking for the impossible. Often, the journal editor does have to make the call, and often, in low-quality journals, that call has as much to do with needing enough papers to publish an issue as it does with having top-quality work. So, yeah, the editor can totally overrule the reviewers, and the author can make their arguments for not making changes in the cover letter. I have often argued against my reviewers because I thought they were wrong, and figured that the editor would make the call.

This is why the journal reputation is so important. Peer review is a human process, and it is the editor's job to make judgement calls. These judgement calls affect the reputation of the journal, and so readers should have a sense of what they're getting.

I think that 'participate in post-publication criticism' should be accompanied by some form of 'vow never to review for the journal again', at least for editorial board members.

Someone with whom I shared an office while on sabbatical had just had a paper published in a medical journal where the statistical reviewer (and member of the editorial board of the journal) had subsequently written a letter to the editor reiterating the criticisms made as a reviewer. If I felt strongly enough that the editorial decision was wrong that I would write a letter about it, I would also resign from the editorial board of the journal (though I might still be willing to review for it).

In this particular case I'm fairly sure the reviewer was wrong -- the particular statistical issue involved is in the area I work in, and isn't the specialty of the reviewer, who is actually a philosopher -- so I helped the authors write a rebuttal letter.

I was somewhat biased against the reviewer by the fact that in the letter he signed himself as a Fellow of the Royal Statistical Society. This was interpreted by every medical researcher I have shown the letter to as claiming a professional qualification in statistics, similar to the professional specializations in medicine. In fact, it just indicates that you are sufficiently interested in statistics to have signed up and paid your fees.

I can't see where you answered becca's question above:
"I'm kind of curious now, what have you published in peer-reviewed literature about online science communication?"

becca @6, ET @14, nope, I haven't published in the peer-reviewed literature about online science communication (although I have presented about it at a professional meeting). But in my experience doing research in interdisciplinary niches (like responsible conduct of scientific research, and philosophy of chemistry), it's not uncommon for journals considering manuscripts in these areas to draw on reviewers with related professional experience and publications -- so, a working scientist who hasn't published anything on ethics may be a referee for a submission on ethical research practices, or a chemist who hasn't published on anything philosophical might review a manuscript on metaphysical questions in a chemical theory.

Especially in light of comments the editor of the journal left on Bora's blog, it sounds like I was in the pool on account of my "field experience" in the science blogosphere. Perhaps the reason the comments in my review didn't have much impact on the final version of the paper is that they were outweighed by those of a referee who *has* published on communications. (I'll leave it to the reader to decide what that says about whether someone who has published in the field is better or worse at addressing methodological issues that might be worth addressing.)

I don't think authors, reviewers, or editors have any "duties" other than to be truthful in their statements, both public and confidential to each other, and to follow the publicly stated editorial and peer review policies of the journal. If a journal sucks ass, people don't have to read it.

I noticed this discussion very late, excuse me... I'm one of the editors of JCOM. Yes, we try to be interdisciplinary; i.e., in some of the articles of our latest issue, our referees were a sociologist/media scholar and a science blogger, both suggested by a member of our scientific board.

By alessandro delfanti (not verified) on 27 Mar 2010 #permalink