When my "Ethics in Science" class was discussing scientific communication (especially via peer-reviewed journals), we talked about what peer review tries to accomplish -- subjecting a report of a scientific finding to the critical scrutiny of other trained scientists, who evaluate the quality of the scientific arguments presented in the manuscript, and how well they fit with the existing knowledge or arguments in the relevant scientific field.
We also talked about the challenges of getting peer review to function ideally and the limits of what peer review can accomplish (something I also discussed here). In many instances, the people peer reviewing your manuscripts may well be your scientific rivals. Even if peer review is supposed to be anonymous, in a small enough sub-field people start recognizing each other's experimental approaches and writing styles, making it harder to keep the evaluation of the content of a manuscript objective. And, peer reviewing of manuscripts is something working scientists do on top of their own scientific research, grant writing, teaching, supervision of students, and everything else -- and they do it without pay or any real career reward. (This is not to say it's only worth doing the stuff you get some tangible reward for doing, but it can end up pretty low in the queue.)
Why, one of my students asked, don't the journals hire people to do peer reviewing? Why not make it an actual paid job?
Surely there are enough highly trained scientists with Ph.D.s out there to staff a project like this, given the shortage of academic and industrial jobs. Would being a professional peer reviewer be worse than the eternal cycle of postdoc-ing?
Another student expressed the worry that you'd really want the people reviewing your manuscript to be working scientists -- folks who understand what's involved in getting scientific research to work, and in interpreting results, because that's what they do, too. People who only performed peer reviews of manuscripts would lose touch with the realities of research, and maybe wouldn't be able to give as much useful feedback as a result.
Why not make that part of the peer reviewer's job? another student ventured. You don't just evaluate the logic of the arguments, but you actually get the experiments they describe up and running and see if you get results that are consistent with the ones reported.
In other words, not only do you shift peer reviewing from the hands of overburdened scientists who are locked in competition with each other for funding, journal pages, and fame to the hands of scientists with the time and maybe more objectivity, but you also create a mechanism whereby scientific research is regularly reproduced. And, you create decent jobs for some of the "excess Ph.D.s" that currently exist.
Aside from the concern that the journals would need to find money to create and support (with lab space, materials, etc.) these positions -- which might well raise the price of journals beyond where they are already (or raise operating costs for open access journals) -- are there obvious reasons that a plan like this would be a bad idea?
One might worry that the peer reviewers would tend to distort the sort of research that gets published. Consider, as an analogy, the pathologies that can arise at universities where administrative power is held by people without academic backgrounds.
Regardless, replicating every study before publishing it would double the cost of research -- because everything would have to be done twice. And who would review the research of the professional peer reviewers who are replicating all the original research? A regress ensues.
Most scientific work nowadays is so complex and so expensive, it would be beyond the capabilities of all but a few labs to reproduce it. For example, if I report results obtained on a 21T NMR spectrometer, which costs several million dollars to purchase, has a lead time of perhaps 18 months from order to installation, and requires highly trained people to operate, what journal could afford to try to reproduce the work?
One advantage of peer review is that it usually enlists the few people in the world who are in a position to check results directly, if they need to.
I'm fully in favor of paying people to peer review, by the way. An efficient system is one that prices the costs of work accurately. Conscientious peer review takes time; why should the Nebraska taxpayer (say) pay me to spend my time doing unremunerated work for Nature?
I'm not so sure that reproducing experiments reported in manuscripts would double the cost of research, since presumably the pile of things that didn't work wouldn't have to be redone.
I'd worry that this limited number of peer reviewers would be more easily corruptible than the distributed network that's currently in place. Also, how many times would the reviewers attempt to reproduce an experiment before they gave up? There's a chance that finicky experiments would be difficult to reproduce.
Also, some experiments (I'm thinking here of longer medical type studies) take a very long time to perform. One can't very well hold off on publishing a paper for years in order to completely redo the work.
The biggest problem with this idea is that it would skew published studies towards experiments with simple, straightforward experimental protocols and penalize any work using complex or difficult techniques, which would be much harder to replicate in this manner. Even something as seemingly straightforward as immunoprecipitation often takes a lot of tweaking to get it to work. There's no way specialized techniques are going to be replicated in any sort of cost-effective manner.
Sounds like a PhD make-work program to me.
Journals already charge for color images. Can you imagine having to pay the Experimental Reproduction charge?
If you want to know what such a program would look like think of the FDA. That's the kind of bureaucracy that would be necessary to pull off something like that. It's hardly worth the trouble.
In the end, the research community already knows which labs perform high quality research and which labs cut corners and which labs produce BS.
Even an FDA-style bureaucracy wouldn't be enough to reproduce a substantial fraction of submitted data. (Once upon a time, IIRC, you had to send samples of every manufactured lot for some products to FDA, so they could do confirmatory testing if they wanted. They don't even do that anymore.)
A more workable alternative might be to send the reviewer to the submitting lab. Let them look through the notebooks & raw data, or observe a representative experiment in progress.
Even then, I think the time and effort would be prohibitive.
While I think the reproducing of experiments just isn't feasible for the reasons in the other comments, I agree with Gerard that reviewers should be compensated. I also think the concern that professional reviewers would be out of touch with the lab is overblown: federal program officers don't do lab work, and, in most cases, make reasonable decisions.
I'd love to be a full-time reviewer, but wouldn't want to do the lab work. I could probably reproduce something that is experimentally similar to things I've done before, but a lot of experiments, even in fields I know a lot about, use techniques I've never done. So I could judge whether or not the experiments they did made sense in the context of the work, but even if I was given all their reagents and machines it would take me forever to reproduce it, just because I wouldn't have the experimental skills. For example: I don't do dissections (for various reasons) but I can understand descriptions of experiments done on primary cells and whether or not it was relevant to the article or if it needed more controls/statistics.
I really like the idea -- I don't think the lab setting is at all necessary, as paid peer reviewers would be doing the job all the time, so I don't think they'd need the lab setting to keep their mind fresh or replicate results. I don't mind reviewing articles, but I do feel they eat up a lot of time, AND I have been burned by my paper going out to the wrong reviewers (not for a rivalry, but because the reviewer was clearly not in my field and not well chosen).
This could also apply to reviewing grants...
I have seen something similar to this suggested in the Notices of the AMS; Michael Fried of the University of California asks whether journals should compensate referees (pdf). As far as reproductions go, if they are simulation results, and if the authors take the required measures to make the results reproducible, it should not be too difficult. But I am not sure about experiments.
Why not make [peer review] an actual paid job?
Because there's no way you can cover all fields of expertise, even for a fairly narrowly targeted journal, with a stable of tame reviewers -- at least, not nearly so well as you can by selecting your reviewers directly from the relevant field. And what would Nature do, employ 200 reviewers?
What Orac said holds true without replication, too -- the less complex your system, the less abstruse your ideas, the more likely the tame reviewers will understand your work and publish your paper.
(The experimental reproduction idea is ridiculous -- cost far outweighs benefit.)
Independently repeating experiments before publication by people who are not experts in a very narrow subspeciality would bring science to a screeching halt. I work with a class of mutants that do not show a phenotype until the 6th generation. It would take a reviewer at least a couple of years (if all goes well) to recreate these late generation mutants, and then they would still have to analyze them.
I think the student who proposed this approach can be forgiven as being young and naive, but this could also be interpreted as part of the War on Science, which continually disparages the expertise and insight required to actually advance our knowledge of the universe.
Mike the Mad Biologist:
Federal Program Officers do frequently make the right decisions (although on occasion they have given me funds), but they rely strongly on input from panel members and ad hoc reviewers who do lab work, or at least employ postdocs who do. The Officers' talent is in making sense of input from multiple sources, including working experts in the field, not in generating opinions on their own. Even if they stayed current in their own field, there is no way they could have the breadth to render reasonable decisions on most of the proposals their program receives.
The intricacies of lab work already force the use of collaborators to perform experiments not only for access to reagents and equipment, but also for simple expertise.
This kind of system would place a heavy burden on reviewers to master thousands of techniques, including novel ones introduced in the papers themselves. Not to mention that it would increase publication times to unreasonable lengths. By the time one paper was reviewed, a lab would have much of a follow-up study prepared -- imagine if the first paper turned out to be rejected.
I also tend to think it's antithetical to the "academic spirit." If the scientific community feels that it needs to start policing itself and validating results via neutral parties, there are greater concerns to tackle. Thankfully, almost any truly novel and interesting result will be built upon and repeated, unless it turns out to be of little value.
While I would support stipends for reviewed articles by working professionals, I think the only other improvement the system could stand is to end the practice of anonymous review. That way reviewers have to stand by their critiques.
I agree with a lot of the comments here: the paid reviewer idea is a good one, but the suggestion that the reviewer or their lab try to repeat the experiments is unworkable.
Try redoing a knockout mouse model for instance or a completely new technique in systems biology or indeed any study using patient samples - it wouldn't work. A scientific consensus is derived from many labs reproducing the same results in multiple papers rather than one single definitive report.
As for the paid reviewer, I think that post should be part time so the reviewer can still work as an active scientist and remain an 'expert' on a topic or technique -- the principal reason why they would be used as a reviewer in the first place.
As a peer reviewer and contributing editor to a small journal, I spend a large part of my "journal-related time" trying to squeeze it around my real work. Since all my paying work is on tight timelines, I end up peer reviewing and editing AFTER I get my regular work done, usually late at night. Not really good for the authors whose work I'm reviewing or for the papers I'm editing for the journal. Yes, getting paid for this work might help, but it still doesn't make more time in the day to do the job! Many peer reviewers are in the same position, squeezing their reviewing time into the odd minutes/hours when they aren't doing their regular jobs. If anyone wonders why it takes so long to get a manuscript through peer review - this is probably the biggest reason, followed by the limited number of people who are willing to volunteer!
It is nice to put "peer reviewer for XXX Journal" on your CV, and it does feel flattering to be asked to review for a journal in your specialty; however, folks that get solicited for this job often forget that journal publishing is a business with killer deadlines and authors who are wondering why it takes so long to get feedback regarding their manuscript.
Making this a full-time job would help to address the problem of having enough hours in the day to do the work, but I'm doubting that journals (or the organizations who support them) would happily put up the money for this kind of activity. If they did, we'd probably end up seeing the journal prices go up, along with some kind of submittal fee for new manuscripts.
"Aside from the concern that the journals would need to find money to create and support (with lab space, materials, etc.) these positions"
That's an awfully big thing to leave aside. One of my coworkers is currently building an instrument. By the time it is completed, it will have taken several years and several hundred thousand dollars. I'm not sure how that could be made to work.
The idea of professional reviewers is interesting, though.
Tex,
I agree, but they are able to keep current without staying in the lab (i.e., attending scientific meetings, etc.). I've seen plenty of reviews by 'experts' in the field that also need 'help.' With rare exceptions, I don't think it's as big a problem as it's made out to be. I'm also not advocating for full-time reviewers, but 'part-time', so papers get the attention they deserve, and the reviewers are fairly compensated.
Replicating the experiments would not be where I'd put my money. I would, however, vote for establishing the post of paid reviewers. Review isn't blind now, in the sense that reviewers can generally figure out who wrote a paper, so there's no need to keep the reviewers a secret anymore either. The paid reviewer would be known to the paper's authors, and this would help keep them on their toes.
Regarding the need for them to have experience or be "working scientists" (a loaded term...), there are a couple of possibilities. Require paid reviewers to have a PhD and a few years of postdoc experience so that they are well-versed in lab work and experienced enough to parse a journal article. As paid reviewers, their time won't be consumed with research, so they'll have time to stay abreast of developments in the field they are supposed to be covering - send them to a conference or two a year as well. Lots of people who review articles now for free don't actually do any research themselves - they are too busy being administrators - managing their labs, writing grant proposals, going to meetings. They aren't doing bench work. Paid reviewers would be similar, they just wouldn't have to write grant proposals. :)
Another possibility, if you really think the paid reviewer has to have his/her hands on the bench, is to make paid reviewer a part-time job, and have it done by someone at the technician level. They are immersed in the actual work and often know the pitfalls and realities of what's going on better than the PI's and postdocs. Have someone who's a halftime tech and a halftime paid reviewer. That would certainly be interesting....
I hate to look at the dark side of things, but having the same set of reviewers all the time also means everyone will know who the reviewers will be. Fierce competition could lead to the offering (and possibly acceptance) of bribes.
Time to be contrary...
Paying referees for their time is certainly appealing on the grounds of fairness, but from an economic point of view it would probably change very little. Publishing science is ultimately funded from the same sources as research. Funding bodies and universities support libraries; libraries and labs buy subscriptions to journals. Personal subscriptions to journals are bought mainly by scientists, who are paid by universities and grants.
If you make peer review more expensive, you make journals more expensive, so unless the pot of money funding science as a whole goes up, there will be less money for research. Researchers might even start supporting their research through refereeing.
Of course that would mean that the people who could get good funding would have less need to referee. So the people who would want to referee would be those having the most difficulty getting funded. Increasingly refereeing would be done by less successful, less competent, and plain poor scientists.
In fact paying referees might be disastrous for science, reducing the quality of peer review. What we need is for good refereeing to give scientists status and career development so that being a referee is not seen as a chore and a necessary evil. To do that far more refereeing will need to be done in the open, accessible to readers and not performed behind a cloak of anonymity.
Some of the system you suggest is in place at journals where the editorial staff is professional. Your Nature, Science and Cell class journals have editorial teams drawn from the ranks of ex-scientists to a large extent. Their role in "peer" review (what gets sent out, which pleas from PIs to heed, which reviewers to use, when 'competitive' pressure justifies overlooking crap papers) is extensive.
How's that working out for us?
From Nath et al. Med J Aust. 2006 Aug 7;185(3):152-4.
The three journals with the highest number of retractions in this study were Science, Proceedings of the National Academy of Sciences, and Nature. It seems highly unlikely that these journals are prone to publishing shoddy research. Instead, this elevated error rate may reflect the high level of post-publication scrutiny received by the articles in these journals. It is likely to be easier for errors to slip by undetected in less widely read and cited journals. In addition, the complexity and rigour associated with studies published in these journals may lead to a higher risk for error in implementing and replicating the research. Furthermore, the large volume of articles published in these journals may naturally increase the rate of error among them.
...of course the authors overlook the alternative hypothesis: higher rewards of higher-profile publications are more likely to compromise professional ethics.
The real trouble of reviewer replication of experiments is what to do with a different result. If someone can't get the same quality results as another person who has spent 5+ years mastering a technique I'm not sure how an editor would want to use the information.
A possible compromise is having a paid statistics/methodology reviewer. That person wouldn't need to know all the nuances of a field, but would have the job of answering the following questions:
Are statistics used properly?
Do the presented data and analysis support the conclusions?
Could another researcher replicate the study using the information and references in the methodology section?
The additional questions a specialist needs to answer are:
Are the results novel and relevant to this field of research?
Are the conclusions properly presented in context with existing knowledge of the field?
A possible compromise is having a paid statistics/methodology reviewer.
Now that is a damn good idea, but really only for the question: "are statistics used properly?". All the rest is par for the course for regular reviewers, but statistical methods -- at least in biomed research -- are so commonly abused as to warrant their own, paid, reviewer.
Re. the statistics review: Recent discussion in the medical writing/editing arena had to do with one of the major medical journal editors recommending that all submitted papers undergo outside statistician review for validity of stat. methods and interpretation of results, claiming that this would cut down on the lax analysis/interpretation of results/gibberish being put into scientific papers. Of course, this outside review would be required of the authors (so they would have to find $$$ to get outside stat. review and analysis interpretation done before even submitting the paper to the journal). The journal editor thought this would also nip some of those nasty retracted article problems in the bud, 'cause an outside stat. person would review the paper. Of course, being the skeptic that I am, I wondered why a hired statistical gun (who I hired to review MY paper) would be less likely to pooh-pooh my results if I were paying his/her way. No one seemed to find that an issue (!!!), as the journal editors prefer to NOT have to deal with issues like this...oh well....wonder if the journal ever went through with this...I'm guessing not, or I would have heard the howls from authors by now.
Why not mix the two, have a staff of peer reviewers and offer an honorarium to folks who do peer reviews out in the business? Sounds like a way to take pressure off of tenure-track research folks and give Ph.D's who have something to offer but who are blown away by the rat race a place to contribute.
Many of these comments are very interesting. I've blogged elsewhere on the issue and refer you to a new post based on this discussion:
http://neutralsource.org/content/blog/detail/873/
Paid reviewers, Janet? LOL That's a good one. Wasn't this originally an April First post?
Seriously, as others have said above, there's too much specialization and time involved in many experimental results. It's just not practical to try to replicate results.
I, for one, review on average one manuscript per week. I spend approximately 4-5 hours of my time on each one; that's 200-250 hours a year. I usually do the review at home (evenings and weekends). It would be nice to receive an honorarium for my efforts. The NIH pays its reviewers (a little). Considering that most journals charge authors a page charge, a color photo charge, a reprint fee, etc., and considering the unbelievably high library subscription prices all journals charge, reviewers only get a "thank you" recognition on the last page of the last issue of the current volume of the journal.
Reviewing manuscripts is a hard and thankless job that fewer and fewer scientists nowadays are willing to do. An honorarium ($50-100) could at least alleviate that problem.
As one of the students who was present for this discussion, I find all of these comments interesting and informative. The idea of having a statistician review results is a great one. While I think that replicating each experiment would be a ridiculously burdensome cost, I don't understand why we couldn't replicate only those that seem to have something not-quite-right about them, as well as those that are extremely controversial or an enormous breakthrough. Or, to go along with what another commenter said above, why not have the lab sending the manuscript also send photocopies of the notebooks if there are any further questions or problems? Or, if it is not too problematic, have the reviewer go to their lab?
I also think that since the reviewer will be reading most or all of the manuscripts for a certain subfield, they should become more than adequately skilled to learn what seems feasible or plausible and what doesn't. (This isn't exactly what I mean, but I can't think of a better way to phrase it...)
There should be a peer reviewers' association or something like that. There is one congress, held every four years, the International Congress on Peer Review and Biomedical Publication, managed by JAMA and BMJ. But there is no group, no member body, nothing. The choice of reviewers is biased. Papers from a famous group may be sent to a newcomer to assure acceptance. Remuneration is a far-off thing, though big money is involved in publishing. Editors and subeditors meet and get perks and remuneration at the cost of reviewers' toil. Richard Horton, editor of the British medical journal The Lancet, can say that "The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability -- not the validity -- of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong (1)."
Hypocrisy running. So, blog on, reviewers! Happy blogging!
1. eMJA: Horton, "Genetically modified food: consternation, confusion, and crack-up"