Nature's report on open peer review

I was on my way out the door for a vacation when the journal Nature published its much-anticipated report on the results of its open peer review experiment, but I did want to offer a few comments on the report, even if I'm arriving at the discussion a bit late.

Peer review, of course, is the gold standard for academic publishing. I believe one of the reasons for Cognitive Daily's success is our clear delineation between reports on peer-reviewed research and commentary on news items reported in the popular press (you can always click on the Just the Research tab above to see only reports on peer-reviewed articles). We try to offer several articles each week covering peer-reviewed research, and these are by far our most popular offerings.

But the peer review process itself has also been the subject of criticism. Even after review by experts, some research has later been found to be fraudulent. Some scholars have wondered whether new lines of research may be suppressed by the "old guard" in charge of the peer review process. It is in this context that Nature decided to try out a new form of peer review: an open process that allowed anyone to comment on manuscripts before publication. Now, a few months later, a team led by Philip Campbell reports on the results of the process:

We sent out a total of 1,369 papers for review during the trial period. The authors of 71 (or 5%) of these agreed to their papers being displayed for open comment. Of the displayed papers, 33 received no comments, while 38 (54%) received a total of 92 technical comments. Of these comments, 49 were directed at just 8 papers; comments on the remaining 30 papers were evenly distributed. The most commented-on paper (an evolution paper about post-mating sexual selection) received 10 comments. There is no obvious time bias: the papers receiving the most comments were evenly spread throughout the trial, and recent papers did not show any waning of interest.

The trial received a healthy volume of online traffic: an average of 5,600 html page views per week and about the same for RSS feeds. However, this reader interest did not convert into significant numbers of comments.

In a word: disappointing. Only 5 percent of submitting authors agreed to the process, and only 38 of the 71 papers posted for open review received any comments at all. Most of the comments on the substance of the papers were not judged helpful by the papers' authors. For the most part, the only useful comments covered editorial concerns -- presumably, the same issues could have been addressed during the copyediting phase.
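
The report's aggregate figures are easy to verify. Here's a minimal sketch in Python, using only the numbers quoted above:

```python
# Figures quoted from Nature's trial report
papers_sent = 1369       # papers sent out for review during the trial
papers_opted_in = 71     # authors who agreed to open display
papers_commented = 38    # displayed papers that drew at least one comment
total_comments = 92      # technical comments received in total
comments_on_top_8 = 49   # comments concentrated on just 8 papers

opt_in_rate = papers_opted_in / papers_sent          # ~0.052, i.e. ~5%
commented_rate = papers_commented / papers_opted_in  # ~0.54, i.e. ~54%
remaining_comments = total_comments - comments_on_top_8   # 43 comments...
# ...spread over the other 30 commented papers: ~1.4 comments apiece
per_remaining_paper = remaining_comments / (papers_commented - 8)

print(f"Opt-in rate: {opt_in_rate:.1%}")
print(f"Displayed papers with comments: {commented_rate:.1%}")
print(f"Average comments on the other 30 papers: {per_remaining_paper:.1f}")
```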

I doubt the editors of Nature were surprised by these results. I've been an editor myself, and finding reviewers is one of the most onerous tasks an editor can undertake. I've spent endless tedious hours trying to coax potential reviewers, first to agree to write a review, and later to submit the work they promised. Greta is considered a "prompt" reviewer because she never turns her reviews in more than a week or two after her deadline. Many reviewers routinely turn in their reviews months after the deadline. Reviewing is a thankless job, and without incentives for reviewers, I doubt any open review process will ever gain much traction.

What kind of incentives would work? The most obvious would be career incentives: if work as a reviewer were rewarded with tenure and promotion, it would soon become one of any scholar's top priorities. Unfortunately, this revolutionary change in the glacial world of academia is about as likely as PZ Myers undergoing a religious conversion, so we'll probably need to look elsewhere. Many journals already require authors to serve as reviewers as a condition of submitting their own work for publication. Perhaps this sort of incentive could be adapted to an open review process. Even so, it would be difficult to administer. How would reviewers be evaluated? By authors? But then wouldn't reviewers have an incentive to rubber-stamp articles for publication? For now, it appears that the peer review process as it stands may be the lesser evil.

In the future, some combination of blogs and wikis may become a partial replacement for the traditional peer review process, but don't expect that sort of change overnight either. Only when contributing to these resources counts toward tenure and promotion are they likely to become important factors in the world of academic publishing.

The peer review system is both an information and a social technology, one that's deeply embedded in the scientific establishment. Any system proposing to replace it will need to satisfy both technical goals (examination and vetting of papers) and social goals (recruiting [appropriately qualified] people to do the work of reviewing).

From your summary, it sounds like this particular attempt failed on the social end, but that doesn't mean the problem's insoluble -- it's just difficult to create these things de novo. Eventually, I'm sure some workable solution will be found, but I wouldn't take any bets on what it'll look like.

By David Harmon on 02 Jan 2007

I'm glad this experiment was done. I hope the experimenters realize how preliminary it was, in formulation and execution.

0. The analysis seems a bit hasty. 5,600 html views per week sounds less than 'healthy' to me. Consider how that compares to the number of readers of Nature. A 1,000-to-1 ratio of views to edits is standard for Wikipedia, for instance, and that includes minor along with major edits; another 10-to-1 ratio for major edits sounds right.
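
Taking these ratios at face value, here is a rough sketch of what they predict (the 1,000:1 and 10:1 figures are the Wikipedia analogy above, not measured properties of Nature's trial):

```python
# Back-of-envelope using the ratios suggested above (assumptions,
# not measurements from the Nature trial itself).
views_per_week = 5600        # html page views reported for the trial
views_per_comment = 1000     # Wikipedia-style views-to-edits ratio
edits_per_major_edit = 10    # further ratio of all edits to major ones

comments_per_week = views_per_week / views_per_comment               # ~5.6
major_comments_per_week = comments_per_week / edits_per_major_edit   # ~0.56

print(f"Predicted comments/week: {comments_per_week:.1f}")
print(f"Predicted substantive comments/week: {major_comments_per_week:.2f}")
# Over a trial lasting roughly three to four months, these ratios predict
# on the order of 60-100 comments in total -- the 92 Nature actually
# received is in that ballpark.
```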

1. Reviewing the right kinds of material is not thankless, however technical and detailed; some people gladly spend their leisure time blogging, writing, and talking about precisely this.

You say "Reviewing is a thankless job, and without incentives for reviewers, I doubt any open review process will ever gain much traction" -- well, there are ambient incentives to review: a chance to express oneself, a chance to improve research in one's field, a chance to build a reputation as a skilled reviewer (if the process and its results are open), a chance to push one's own POV. Clearly some of these incentives cut both ways and should be balanced by infrastructure that helps ensure a cross-section of opinions among reviewers (or at least transparency about potential conflicts of interest).

2. I wish more analysis of peer review would look actively at the law review process, which is both effective and quite different from reviews in other fields -- in its determination of what expertise is needed to be a good reviewer, its identification of people willing to put a great deal of time into the process, and its review/publication response times.

Looking forward to more investigations along these lines,
SJ Klein

I once worked as a programmer on web-based applications with 2,000 users who needed those applications to do their jobs. One day, one of the applications went offline. There wasn't a single call to the help desk the first day, and only one call the second day. Perhaps most of these users figured someone else would call. Perhaps the help desk experience sucked. But one in a thousand sounds real familiar. One in a thousand is likely optimistic.

With open source software, lots of people see your stuff. Fewer bother to download it and install it. Of those, many will find it doesn't do what they want and drop it. Fewer still will see that it nearly does what they want and ask for a change. Fewer still will look at the source and make a small change (since they also have to have the needed skills and resources like time). And then some of those don't bother to post the fixes back. Yet this model has produced some of the best functioning software available anywhere.

If finding good reviewers is problematic, consider this. The open source software model finds good reviewers. How? It offers a very wide audience some incentive to pay attention. Early Linux boasted 100,000 developers. To get that, you need to expose your work to 100,000,000 people. What incentive could be used for science paper reviewers? Early and cheap access to papers of interest should be enough. Those who are particularly interested in a paper should surface.
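
The funnel can be made concrete with a toy calculation. In the sketch below, the overall 1,000:1 exposure-to-contributor ratio comes from the Linux numbers in the comment above; the individual stage rates are hypothetical, chosen only so they compose to roughly that ratio:

```python
# Toy attrition funnel for open-source contribution. Only the overall
# 1,000:1 exposure-to-contributor ratio comes from the early-Linux
# figures above; the per-stage rates are illustrative guesses.
audience = 100_000_000  # people who see the project

stages = [
    ("download and install it", 0.10),
    ("keep using it",           0.50),
    ("ask for a change",        0.20),
    ("write a patch",           0.50),
    ("contribute it back",      0.20),
]

n = float(audience)
for label, rate in stages:
    n *= rate
    print(f"{label:24s} {n:>12,.0f}")
# The final stage prints ~100,000 -- the early-Linux developer count
# cited above. Applied to a specialty with only ~20 interested
# researchers (see the reply below), the same 1,000:1 ratio predicts
# ~0.02 reviewers per paper.
```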

"What incentive could be used for science paper reviewers? Early and cheap access to papers of interest should be enough. Those who are particularly interested in a paper should surface."

The problem is that, unlike Linux, which has millions of users, there may be only 15 or 20 people currently interested in a particular line of research. These are likely very busy people, working on their own projects. If there isn't a clear reward for reviewing, they're not going to do it. The numbers simply don't work.

With Linux, a developer often makes a "contribution" to the software because he or she actually needs the added functionality. By contrast, reviewing someone else's work doesn't personally benefit the reviewer.

I'm not saying an open review system is impossible, but I am saying that systemic changes will probably have to be made in order for it to work.