Beware the bashers of peer review

I'd like to hear from other sciencebloggers and science readers what they think reform of peer review should look like. I'm not of the opinion that it has any critical flaws, but most people would like to see more accountability for sand-bagging and other bad reviewer habits. Something like a grading system that allows submitters to rate the performance of their reviewers; journal editors would then tend to consult only reviewers whom authors felt had evaluated their papers fairly.

The drawback of course would be that reviewers might start going easier on papers just because they don't want bad grades.

One thing I do know for sure, though: we shouldn't take advice about peer review from HIV/AIDS denialists...

Dean Esmay, for instance, is a frequent critic of peer review, especially in regard to his pal Peter Duesberg, the HIV/AIDS denialist extraordinaire. Read, for instance, about how Duesberg convinced a woman to let her daughter die, and remember that Thabo Mbeki, the HIV/AIDS-denialist president of South Africa, bases his ideas on Duesberg, greatly increasing the number of deaths for which Duesberg is responsible.

But that doesn't stop Esmay from singing Duesberg's praises, or from basing his belief that our peer review system is flawed on what I would call an excellent success of that system. Read Esmay's loving post about Duesberg's SciAm article on chromosomal chaos and cancer for an example of his critique. One way not to win an argument about peer review reform is to say things like this:

While millions died, our corrupt Crony Review system blew it big time. Peter's not the only one who illustrates this fundamental breakdown in scientific protocol, but he's probably the most egregious example.

A scientist who has made major contributions in important areas, but questions the consensus view, should not be punished for it should he? Yet Peter has been, repeatedly.

You see, Dean Esmay, who has never published a scientific paper, who has never participated in peer review of a scientific paper, and who has probably never even read (and understood) a real scientific paper, has determined it's a system of cronyism because it prevented, of all things, Peter Duesberg from being published (he's only really shut down on HIV/AIDS, which he doesn't even research; he still publishes on cancer). I consider that a shining example of the success of peer review: even though Duesberg is a member of the National Academy, and was once a prominent scientist who helped discover the first viral oncogene, src, the reviewers started smelling the BS and shut him down so he couldn't contaminate the literature with his absolutely atrocious denialist garbage, which can only cause death and misery.

It should be noted that Esmay is also a global warming denialist and uses the same peer review critique to suggest global warming is all just a giant conspiracy for scientists to enrich themselves with grant money. I think somebody needs to explain to him that academic scientists get paid peanuts compared to their industry equivalents (if we were in it for the money, we're going about it the wrong way) and that, contrary to popular belief, we can't justify buying luxury cars and homes with government grant money.

Other HIV/AIDS denialists, like Hank Barnes et al. of Barnesworld, like to give us suggestions on peer review too. Here's how they feel about the problems with a system they have nothing to do with.

Peer review enforces state-sanctioned paradigms. Pollack (2005) likens it to a trial where the defendant judges the plaintiff. Grant review panels defending the orthodox view control the grant lifeline and can sentence a challenger to "no grant." Deprived of funds the plaintiff-challenger is forced to shut down her lab and withdraw. Conlan (1976) characterizes the peer-review grant system as an "incestuous 'buddy system' that stifles new ideas and scientific breakthroughs." Science is self-correcting and, in time, errors are eliminated, or so we are taught. But now with a centralized bureaucracy controlling science, perhaps this rhetoric is "just wishful thinking" (Hillman, 1996, p.102). Freedom to dissent is an essential ingredient of societal health. Braben (2004) contends that suppressing challenges to established orthodoxy sets a society on a path to its doom.

He "likens it to a trial where the defendant judges the plaintiff." That's a bizarre analogy; I can't quite wrap my head around it, and I suspect it requires paranoid personality disorder to make sense of. And once again we get the outrageous canard that science enforces "orthodoxy" and doesn't change paradigms. It's as if they read the first half of Thomas Kuhn's description of the nature of scientific revolutions and skipped the part where he describes how paradigms get shifted by evidence.

The fact is, scientific publications love papers that challenge orthodoxy or present new ideas. The only requirement is that extraordinary claims need extraordinary proof. Duesberg got a fair shake; Science magazine even dedicated a three-month review to his arguments and found them to be without merit. And at a certain point, when someone is presenting no data, cherry-picking facts to support a debunked theory, and causing people to forgo life-saving treatments, it's OK to blackball them for being the scum that they are.

So, repeat after me. Show us data, give us proof and you can publish whatever you want. Attacks on peer review are the last refuge of the junk scientist who can't get their garbage published. It's never because they're wrong or they haven't proven their claims. Instead it's a conspiracy! It's those fat-cat peer reviewers who only make sure their buddies get grants so they can enrich themselves on the government's dime!

Finally, note the last paragraph in Esmay's global-warming article:

By the way, watch for the paint-by-numbers responses: the bashers and defenders of orthodoxy love to trot out phrases like "conspiracy theory" and "politics" and "pseudo science." Because that lets them not only smear the skeptics, but also lets them completely evade the real issue: inherent conflict of interest and lack of objectivity.

It may be paint-by-numbers, but it's still true. And I'm no defender of orthodoxy, but I am a defender of the process. And the failure of Duesberg and the other cranks that these HIV/AIDS denialists support isn't due to a conspiracy of peer reviewers, but the fact that their evidence is BS, it has been debunked time and again, and worst of all, it kills. I think we can figure out how to improve peer review without advice from these denialists.

For his cherry-picking of Lindzen's crap from the Spiegel article (it doesn't really say what he thinks it says), as well as his attacks on peer review and his terrible "plaintiff" analogy, I give these HIV/AIDS denialist cranks the following rating.
[crank-rating images]


I doubt that any scientist would seriously want to do away with peer review, but that is not to say that peer review, as currently set up, is perfect. A current flaw, in my opinion, is the view that the best person to review a particular topic is someone intimately working on the exact same scientific problem. In theory this is of course true; in practice, however, it creates a major ethical dilemma for those reviewers invited to examine a paper from a direct rival. The temptation, unfortunately, is to 'hold up' a paper long enough to allow the reviewer's own group to publish first. In the current financial climate there is a limited amount of grant funding available, and publishing first can make or break careers. It would be better if editorial boards took a stronger stance on this matter and, for instance, made it a general policy to send papers to reviewers who are technically competent but not directly working on the same topic.

Yeah, that's what I meant by sand-bagging. When reviewers see that a competitor is close, they sink the paper in review to buy enough time to beat them to publication.

It's a major problem. But not an example of "cronyism", if anything quite the opposite.

My take is: peer review is not perfect, but *nothing* involving human beings is ever perfect. I've heard hundreds of proposals for how to change it, and I've seen some of them implemented by particular conferences/journals, but I've never seen anything that would actually make any real difference.

The fact of the matter is, it works. Sometimes it works slower than we might like it to; sometimes someone with a really radically new idea has to work very hard for a long time to get their work through the peer review process; and sometimes garbage by senior people gets through for political reasons.

But in every case that I've seen or heard of, good ideas *do* eventually make it through the process; and garbage that gets through eventually gets knocked down.

Just for example - MartinC, I don't mean to attack you personally. But I've seen the criticism and proposal that you make before. And what it comes down to is really a proposal to *not* assign the most qualified people to review work in a field. When you're dealing with specialized knowledge, the people who do the best job of recognizing good ideas are the people doing work in the same field. When you change the review process so that you disqualify potential competitors, you're just delegating the review process to *less* qualified reviewers. And the end effect of that is that more crappy work can get through, and more of the less conventional work will get rejected. (Just for example: take cancer research and the recent kerfuffle over DCA. Suppose you don't allow people who work on chemotherapy to review papers on DCA. You wind up with the work being reviewed by someone who is an MD, but who isn't a specialist in cancer biochemistry. Are they going to know that it's hyperbole to say that all cancer cells use a different metabolic process for producing energy? Or, on the other side, are they going to be willing to go along and say "That's not the common view, but the chemical evidence in the paper is extremely well done and demonstrates that the idea is credible"? Recognizing whether a claim is reasonable or over-the-top really requires deep knowledge of the relevant biochemistry, and the ability to look *in detail* at the experimental data and analysis to figure it out. Odds are, only the researching oncologist who specializes in cancer cell metabolism is going to be able to really judge that. But that's also the person who's most obviously in competition.)

By Mark Chu-Carroll (not verified) on 08 May 2007 #permalink

I don't know how to fix the problems with peer review for grant money. But it seems that it's rather prejudiced against risky work and young investigators (one might say young investigators are a risky investment for the NIH). The NIH tries to overcome the prejudice against young investigators by having them compete only against other new investigators, but that doesn't quite get at the problem. Currently, the average age of investigators receiving their first R01 is 41. That seems to be a problem, if you ask me, but it's not clear how to fix it (certainly the suggestion of people like Esmay, who want to disembowel the peer review system, is *not* the way to go).

However, for peer review of manuscripts, I think that a lot of the prejudice can be removed by double-blinding it. That is, take the names of the authors off the manuscript before giving it to a reviewer. Granted, sometimes you'll be able to guess anyway (based on what they're working on and what they reference), but many times you won't. I think this solves two problems. 1. Killing a paper just because it belongs to a competitor you don't like (and though some people say this doesn't happen, at the higher-tier journals I think there's evidence to suggest it happens fairly frequently). 2. Publishing a paper by a luminary that sucks. We've all read a paper in Science or Nature by some bigwig and wondered at the end of it, "Why is this in Science?" Just because you won the Nobel Prize shouldn't give you a pass when publishing garbage.

My $0.02.

factician:

I don't believe in double-blind review at all. There are a lot of conferences in Comp Sci (my field) that have adopted fully blinded reviews. And the only thing it's accomplished is to make it *harder* for young authors to write their papers.

The thing is, no matter how hard you work to make things blind, the senior people in the field *know what each other are doing*. So right away, the idea that you're somehow going to get rid of cronyism doesn't work at all. When I get a paper to review from one of the senior people in my field, I *know* whose paper I'm reviewing, despite any efforts at blinding, and I'm not particularly brilliant at recognizing authors.

And the downside of the blinding is that it creates trouble for junior people. I've gotten blind reviews when I was starting out that criticized me for not adequately citing *my own* work; and yet, if I *had* cited my own work enough, it would have been obvious that the paper was an extension of my earlier stuff, which would have been flagged as a violation of blinding by the editor. I've seen exactly the same thing happen several times to other junior people, and I've even been the *reviewer* of a paper that I rejected for not citing prior art, where I later met the author, who was complaining about how some idiot reviewer rejected his paper because he didn't cite his own work.

It just doesn't work.

By Mark C. Chu-Carroll (not verified) on 08 May 2007 #permalink

To be fair, the quote from Barnes et al. is not against peer review of publications, merely peer review for grant money. His argument is not that dissenting research won't get published by the scientific "establishment," but that it won't get funded in the first place by the governmental establishment. This may not represent the entire article, but from the given quote that's what I can see.

The argument is still fallacious. There are all kinds of grant-giving institutions, and not all are funded by the government. Each one is looking for different things; some are set up specifically to fund new, interesting ideas, and others exist to continue existing research in established fields.

Just wanted to point that out.

I think peer review is one of those subjects where we're likely to be swayed by the extreme cases and not be aware of the mean. Just look at what people are doing in this thread: arguing from anecdotes! To be fair, what other information do we have available? Nevertheless, we're likely to have a few stories stuck in our heads about horrible troubles getting a paper published or egregious frauds not caught by the regular review process. Precisely because these stories are memorable, they stay with us, but that does not mean they are representative.

Dirac, agreed. However, the same arguments apply, and reviewers on a study section are no more conspiratorial than those who get your paper as part of a journal review. If anything, they are less so, because study section members usually meet face-to-face to discuss which research they think is most important. If a member tried sandbagging research in such a venue it would be a bit more obvious (although this does happen too).

However, I've never, ever heard of a study section rejecting a hypothesis because of novelty or because it conflicted with "orthodoxy". The same rules apply as with journals: original thinking, new ideas, and splashy data showing new things are considered more fundable. While that doesn't mean grant reviewers like "risky" projects, they are certainly going to be biased towards projects showing novelty rather than orthodoxy.

This claim of an orthodoxy is silly, and could only come from someone completely removed from the peer review process. If anything, we're too biased towards novel results. I link this video of John Ioannidis's grand rounds at NIH, which is expressly about the problem of the emphasis on novelty leading to splashy new results that frequently get corrected with time, rather than what Esmay et al. allege, which is a stodgy refusal to publish anything that conflicts with consensus. Theirs is simply an unrealistic picture of scientific research.

"While that doesn't mean grant reviewers like "risky" projects, they are certainly going to be biased towards projects that are showing novelty as opposed to orthodoxy."

The only novel projects that get funded are projects that already have considerable data gathered and shown as "preliminary data". In that case, it's like applying for money to do work you've already done. *Warning, anecdote ahead.* The most novel researcher I know essentially funds his lab on work that he's already done. He applies for grants for work that is nearly ready to be published, and puts in just enough preliminary data to get people salivating for more. He then uses that money to do further innovative research (and not the work he applied for). Granted, he is very successful at getting money, but it's not a technique available to younger PIs and post-docs (most of us have little in the way of preliminary data, compared to a PI with 10 people in his lab). He's essentially gaming the system, and I don't think that's what we want, either...

Mark,
I think you misunderstood me. I wasn't suggesting that reviewers should be completely outside the field of the paper they are reviewing, merely that editors should be careful when assigning papers for review to individuals who may be direct competitors of the author in question. I simply don't believe that the current situation is the best we can hope to achieve. There should at least be some proper safeguards against conflicts of interest in the review process. As an aside, I and some of my colleagues have on several separate occasions received completely illogical reasons for rejecting a paper from a reviewer and then managed to get a re-review by writing to the editor and directly accusing a competitor of unethically holding up the work (remember, this is an anonymous process from the author's side, and since there is no way to be sure who the reviewer is, simply accusing a competitor like that is likely to get you absolutely nowhere, unless of course your guess is correct). Speaking to many colleagues, I know that these are not isolated incidents, yet there remains a head-in-the-sand approach from the scientific community towards solving this particular problem. There should at least be a more direct way to raise the topic of unethical reviewers (at the moment there is unfortunately no disincentive for unethical behaviour on their part, and quite obviously a real incentive for them to behave so).
I actually agree with you about the double blind idea, when I'm reviewing papers on certain topics I have no doubt I would be able to guess the author.

Factician,
I agree, it is necessary to include far too much preliminary data in grants. We joke about this all the time too, that we've usually finished the grant and are retroactively asking for cash to cover it.

I think that's a separate problem from peer review though. That's more of a problem for young investigators and reflects how tight the funding has gotten. People who have labs up and running and have benefited from steady cash (like my lab) have a huge advantage because we can risk spending money on projects before getting the grant, then show a ton of preliminary data in the grant thus guaranteeing a nice percentile score. Right now study sections are looking for any excuse to penalize grants since they have to choose between lots of excellent research. It's not really their fault or the fault of peer review. The research enterprise has been overextended from years of exponential funding increases. Now that it's been hammered flat, all these people trained up for what we thought would be continuing increases in funding are fighting for the scraps.

Peer review is the best system we have, even with all its flaws. One possible improvement would be to get editors to do a little more legwork in choosing reviewers but, more importantly, in checking their reviews. I've heard enough complaints about how Reviewer 1 just didn't read the paper properly and raises concerns that are totally irrelevant, and those can end up sinking a paper. Journal editors should be able to disregard a review like that and publish the paper anyway, or "accept with minor revisions". That could help bring a little accountability to reviewers.

Editors could also impose stricter time limits on reviews. By accepting the review task, reviewers would be entering a sort of contract to finish the review in an expeditious manner, say 2-3 weeks, which solves the problem of papers being held up in review for selfish purposes.

I agree with Mark C-C that blinding would probably do more harm than good. In my field, you just know that certain labs run studies in certain ways, or using certain setups. You read the introduction and you recognise the stock phrases from some post-doc in a lab across the country.

Speaking of double-blind reviewing: some friends of mine have submitted papers and been rejected after being chastised for citing their own work. Pity that the work they cited was from another group entirely. This has happened to them at least twice, from what I have heard.

As someone who is trying to get papers published, I wish that I could respond to the comments on my papers when they are rejected. For some people, it would be horrible. But for those of us with cool heads, being able to express that we are happy to address their concerns and that they should look forward to another submission could (I expect) help get papers accepted in the future.

Really wish I had time to make some decent comments.

All I will say for now is that I strongly support peer-review as one of the most critical components of the practice of science, and believe that it works, up to a point.

But the view that peer-review is more-or-less okay, and doesn't need some serious (and ongoing) review itself, and possible overhaul, is just plain wrong.

I have seen far too many problems, including incompetence, dishonesty, petty jealousy and empire building, to believe otherwise.

I think the answers largely revolve around increasing the transparency and accountability of the peer-review process.

Sorry, really wish I had the time to expand on this. Maybe later.

I've considered this - and like Ardem I think that the simple answer is more transparency, i.e. publish the reviews.

I should add that I think that any actual flaws in the peer review system are orders of magnitude less severe than those thrown around by denialists of all stripes. They might act to slow down the publication of innovative work, but claims of wholesale censorship of "challenges to the orthodoxy" are obviously rubbish, since the "orthodoxy" in most fields is clearly evolving over time.

When these blogs finally are made to count as "peer reviewed" publications, you will have it made.

And then they won't be able to make fun of "scientists" like PZ Myers for his dearth of peer reviewed publications.

By Skeptical Student (not verified) on 09 May 2007 #permalink

Just an observation that at many journals, particularly where the editorial staff are practicing scientists, there is a tremendous amount of "fixing" of bad review behavior. Editors are not stupid and in general have tremendous latitude in which reviews they will focus on; often you will get an editor's decision saying (not in so many words, naturally) "fix the problems identified by Reviewers 1 and 3 and ignore crackpot Reviewer 2". They also "fix" the problem by declining to use the "bad" reviewer on the revision. I've seen several cases in which the editor will solicit an additional review, in some cases asking that reviewer to focus solely on the issues raised by the original crackpot reviewer.

In terms of grant review, the SRAs serve this purpose too. They bring in "discussants" or fourth reviewers when there is apparent initial disparity between reviewers. They don't invite the cranks back and don't put them on as charter members.

It is not foolproof. But the whole review system depends on people behaving well. This is a great thing. It is clear at every official point that the expectation is for professional behavior. There are some back-check mechanisms in place. It is not clear to me that any major changes in approach can enforce good behavior in the bad actors...

In theory this is of course true, however in practice this creates a major ethical dilemma for those reviewers invited to examine a paper from a direct rival. The temptation, unfortunately, is to 'hold up' a paper for enough time to allow the reviewer's own group to publish first.

In many cases when you submit a paper you can specify who you do not want to review it. You might want to do this if you suspect that the journal will send the paper to a rival who will either use the information for their own benefit or reject the paper because it refutes their own work.

Most people know of some cases where the peer-review system has not worked well but in the vast majority of cases it works very well. If you really want to get something published then you just keep on submitting it to various journals. Eventually it will get published although perhaps not in a journal with a high impact factor. Duesberg's latest HIV paper is a good example of this.

By Chris Noble (not verified) on 09 May 2007 #permalink

"If you really want to get something published then you just keep on submitting it to various journals. Eventually it will get published although perhaps not in a journal with a high impact factor. Duesberg's latest HIV paper is a good example of this."

It could be argued that there should be a registry for submitted but not yet accepted papers, along the lines of the clinical trials registry. Every time a paper is rejected, it would be noted on the registry. Endless resubmission of the same paper to different journals is not legitimate; it is just a fishing expedition. After about 3-4 rejections, a paper should either get a major rewrite, have the work in it redone, or be refused any further consideration by any journal.

factician says:
"The only novel projects that get funded, are projects that have considerable data already gathered and showing as "preliminary data". In that case, it's like applying for money to do work you've already done. *warning, anecdote ahead* The most novel researcher that I know essentially funds his lab on work that he's already done. "

Dude, this is not gaming the system; this is professional development 101. I don't know any tenured full professors who don't work a grant ahead. How else can you plan staffing and equipment grants? This is why it is necessary to do a post-doc that gives you enough results to get one cycle ahead.

Ardem, you miss the point of having multiple journals. Most papers are not rejected because they are bad science, they are rejected because they are not appropriate for that particular journal. Most researchers will naturally overestimate the importance of what they do for a living, and apply to too prestigious a journal for what they have achieved. Their paper will then slide down until it gets to an appropriate publication.

Finally, there is merit in the idea of getting younger people reviewing earlier, and rewarding those who do it well. Good reviewers are hard to find.

My thoughts on blind reviewing, and the process in general, are here:
http://lablemminglounge.blogspot.com/2007/04/what-to-do-about-reviews.h…

Chris, you can, of course, keep resubmitting to other journals and eventually get your paper accepted, but that is not the issue for real scientists (in contrast to creationists/IDers, for example). While the majority of peer review works out fine, certain research topics will become no-go areas for new researchers if they want to publish in the top journals (and thus impress grant authorities and attract funding). I am simply stating that this is a real problem that should not be ignored.
Ardem, I really don't know how the idea of a registry for submitted papers could work. Most papers can very easily be given a new title to qualify as a rewrite when resubmitted to a new journal. I don't think journals or unpaid reviewers should be expected to go on some sort of detective hunt with each paper they receive to make sure it is sufficiently different from any previous submission. Reviewing is time-consuming enough without having to do this.

Lab Lemming, some fair points in what you say. But I don't think I missed the point, though perhaps I didn't explain my position clearly. If people have to go through many submissions to get published (say more than 5-6), then chances are they are either submitting poor-quality work or choosing the wrong journals in the first place (and I agree that it is not always clear-cut which journal is appropriate). Also, I was referring to the actual review side of submission, not the editorial selection of what gets passed on for review. If someone submits a paper that is inappropriate for a particular journal and the editor simply passes it back without getting it reviewed, that shouldn't count as a failed review.

"Finally, there is merit in the idea of getting younger people reviewing earlier, and rewarding those who do it well. Good reviewers are hard to find." LL

Agree completely there.

MartinC, relabelling a paper is one of the problems with multiple submission, and it should be a strict no-no, with anyone found doing it without good reason (i.e., simply to try to sneak it past editors and reviewers) being sanctioned one way or another. I agree that there are real limits on the "detective work" that editors and reviewers should have to do, and maybe a registry is impractical. But I regard endless resubmission as fundamentally illegitimate, and a problem within peer review that has to be dealt with somehow. A relatively minor problem, perhaps, but a real one nonetheless, and one I have seen happen quite a few times in my area (a branch of med sci).