In reality, peer review is a fairly recent innovation, not widespread until the middle of the twentieth century. In the nineteenth century, many science journals were run by what Ohio State University science historian John C. Burnham dubbed "crusading and colorful editors," who made their publications "personal mouthpieces" for their individual views. There were often more journals than scientific and medical papers to publish; the last thing needed was a process for weeding out articles.
In time, the specialization of science precluded editors from being qualified to evaluate all the submissions they received. About a century ago, Burnham notes, science journals began to direct papers to distinguished experts who would serve on affiliated editorial boards. Eventually--especially following the post-World War II research boom--the deluge of manuscripts and their increasing specialization made it difficult for even an editorial board of a dozen or so experts to handle the load. The peer review system developed to meet this need. Journal editors began to seek out experts capable of commenting on manuscripts--not only researchers in the same general field, but researchers familiar with the specific techniques and even laboratory materials described in the papers under consideration. The transition from the editorial board model to the peer review model was eased by technological advances, like the Xerox copier in 1959, that reduced the hassles of sending manuscripts to experts scattered around the globe. There remained holdouts for a while--as Burnham notes, the Tennessee Medical Association Journal operated without peer review under one strong editor until 1971--but all major scientific and medical journals have relied on peer review for decades.
In recent times, the term "peer reviewed" has come to serve as shorthand for "quality." To say that an article appeared in a peer-reviewed scientific journal is to claim a kind of professional approbation; to say that a study hasn't been peer reviewed is tantamount to calling it disreputable. Up to a point, this is reasonable. Reviewers and editors serve as gatekeepers in scientific publishing; they eliminate the most uninteresting or least worthy articles, saving the research community time and money.
Democratizing the peer-review process raises sticky questions. Not all studies are useful, and flooding the Web with essentially unfiltered research could create a deluge of junk science. There's also the potential for online abuse, as rogue researchers could unfairly ridicule a rival's work.
Supporters point out that rushing research to the public could accelerate scientific discovery, while online critiques may help detect mistakes or fraud more quickly.
The open peer review movement stems from dissatisfaction with the status quo, which gives reviewers great power and can cause long publication delays. In traditional peer review, an editor sends a manuscript to two or three expert referees, who are unpaid and not publicly named yet hold tremendous sway.
Careers can be at stake. In the cutthroat world of research, publishing establishes a pedigree, which can help scientists gain tenure at a university or obtain lucrative federal grants.
Researchers whose work appears in traditional journals are often more highly regarded. That attitude appears to be slowly changing. In 2002, the reclusive Russian mathematician Grigori Perelman created a buzz when he bypassed the peer-review system and posted a landmark paper to the online repository arXiv. Perelman went on to win the Fields Medal this year for his work on the Poincaré conjecture, one of mathematics' oldest and most puzzling problems.
What do you think? And what can be the role of blogs in this Brave New World of online science publishing?
Maybe I'm just a jerk, but I think getting things published these days is still too easy and that reviewers aren't tough enough. Even in top-tier journals you see some real crap get in, and when your field in particular gets contaminated with this crap, it takes effort and money to correct the literature.
I've rejected papers and then seen them reappear further down the line in lower-tier journals with the same glaring flaws. I rejected a paper a few months ago that omitted the most obvious and important control, told the researchers so, and rather than re-submit after doing a simple experiment, they published the junk unmodified elsewhere (probably because the control would have nullified the result of the entire paper). Now it's part of the literature, and that pisses me off.
I'm protective of my field, and when people publish crap that has to be refuted with additional research, that's time and money out of our lab's pocket. It's a serious problem, and the worst thing is that time spent correcting the literature and cleaning up BS science doesn't make grant applications any easier. It's not positive research that will lead to further funding. So what we end up with when the number of journals expands is more demand to fill space, lower quality of publications, a necessity to then correct the literature after sloppy stuff gets through, and no financial or career incentive to correct the mistakes.
I hate this idea that peer review should be slackened or that we should expand online and let people publish without fighting the referees first. It's just going to let more crap in, and it's already hard enough taking out the trash as it is.
Unfortunately, you not only have to deal with authors' crap but also with referees who either don't have a clue about the issue or are just lazy.
Indeed, it happened several times with papers I was involved in that referees demanded controls that were already included in the work. In one case a referee discussed a western blot, although there was nothing like that in the paper and one wouldn't have made sense. In another case, concerning parental inheritance, one of the referees obviously had no clue about this mode of inheritance, although I must admit he seemed to be an expert on the protein family in question.
In another case a big shot published a mouse mutant with phenotypes we did not observe in similar mutants. Obviously, if you are established in a field, referees accept sloppy experiments. For us this meant generating an additional mouse mutant, two additional years of work, and a whole bunch of experiments just to disprove our competitors, with our results finally ending up in a lower-impact journal (and this guy still keeps citing his crap paper).
In addition, I sometimes have the feeling that drafts are rejected at first anyway, just because referees have to prove to the editor that they really work and that they comprehend the stuff they are reviewing. In such cases it would be better if reviewers told the editors that they don't have the expertise to judge the experiments and suggested somebody else to review the paper. However, being a reviewer is quite prestigious, so this won't happen too often.
Therefore, I would appreciate it if there were more quality control of the reviewing system.
One improvement that some online journals have established is publishing the referees' comments. I suspect this would help any journal's reputation. In addition, blogs on journals' web pages could help to further improve reviewing and would make the scientific discussion more lively. However, such blogs should be moderated, which of course would cause additional costs.
Congratulations... I've nominated you for 'best sci/tech blog'. Good luck!
I think sparc and I are arguing on parallel tracks here. I agree completely, not only do we need more peer review, we need better peer review.
I've even seen instances in which people peer-review the peer-reviewers to make sure they're doing a good job, which is probably a good idea.
I'm not going to say there aren't problems with peer review: the power reviewers have to sandbag work they don't like (or which might compete with their own research) can be abused, but the alternative is so much worse. It should be more incumbent on editors to carefully vet new reviewers and evaluate the quality of their reviews before granting them regular authority to review.
I certainly agree with guitter, despite the trouble I have sometimes had with reviewers. What a complete lack of peer review would mean can be seen in some corners of the blogosphere, where the scientific quality relies solely on the blog owner. Unfortunately, there are so many bad blogs that are un- or anti-scientific, or dishonest, or both (e.g. UD). Thus I am quite happy to have scienceblogs.com, PT, etc. to get insight into fields not related to my own work.