Portrait of a Reviewer as a Young Man

Science is dynamic. Sometimes that means science is wrong; sometimes it means science is messy. Mostly, it is self-correcting given the current state of knowledge: at any given time the body of science knows a lot, but any of it can be overturned when new evidence comes in. What we produce through all of this, however, at the end of the day, are polished journal articles. Polished journal articles.

Every time I think about this disparity, I wonder why different versions of a paper, the referee reports, the author responses, and all editorial reviews aren't part of the scientific record. In an age where online archiving of data such as this is a minor cost, why is so much of the review process revealed to only the authors, the referees, and the editors?


Completely agreed. My dream setup for a journal would be that whenever a paper gets accepted, the reviews and the reviewers' names get published with it. If the paper is rejected, then everything stays anonymous. This way, reviewers have the same incentives as authors. In addition, a good review could be viewed as a valuable contribution to the process (as it should be right now, but since reviews inevitably end up buried, very few people seem to bother).

Carlos, the problem with that, I think, is that most people would go the 'safe' way and reject papers, since that would protect their anonymity and spare them the risk of being associated with a potentially wrong or trivial paper. Also, since some journals use several reviewers per paper, how would you deal with a reviewer who raised objections (or recommended rejection) when the others said 'publish'? Should their name be published too? Moreover, blind reviews protect more vulnerable members of the community, like early-career and tenure-track scientists.

Amen. I've spent a lot of time working in the history of science, and one of the things we have lost in the digital age is "rough notes." For papers written thirty or so years ago, notes - from scraps of paper to entire notebooks - can frequently be found in archives and private collections that detail the "messy" process of science. The other thing we have lost, particularly with the advent of e-mail, is written letters as a record. Some of the best ideas have come out of these letters (I cited several in my PhD thesis). But electronic mail not only gets lost to the cyber-wastebin; even when it is saved, it is rarely as informative as a written letter. The latter often included hand-drawn diagrams, easier-to-decipher equations (LaTeX has its limits), and other tidbits not found in the relatively cold form of an e-mail.

Some of the open access journals do this - PLoS comes to mind. I want to say that the BMC journals also do, but I can't be sure offhand.

IJQI experimented with this for the special issue(s) on distributed quantum computing. My contribution:
http://quantalk.org/139
Apparently, only a couple of people were willing to go through the public humiliation :-). I'm an author, and reviewed a paper publicly, too.

I'm in favor, at least on an experimental basis.

I had this same thought a little while back. Couple of comments:

(1) This could also be done for conferences (in CS we don't use journals, unfortunately).

(2) The reviewers' comments could be anonymous. This would still be valuable.

Also, doesn't the Journal of the Royal Statistical Society already do this? It seems to work well.