Fixing peer review

I've frequently noted that one of the things most detested by quacks and promoters of pseudoscience is peer review. Creationists hate peer review. HIV/AIDS denialists hate it. Anti-vaccine cranks like those at Age of Autism hate it. Indeed, as blog bud Mark Hoofnagle pointed out several years ago, pseudoscientists and cranks of all stripes hate it. There's a reason for that, of course, namely that it's hard to pass peer review if you're peddling pseudoscience, although, unfortunately, with the rise of "integrative medicine," it's nowhere near as difficult as it once was.

Be that as it may, peer review, the process by which scientific papers are evaluated by scientific "peers" who look for problems with the science and decide whether the paper is appropriate for publication in a scientific journal, is a concept that dates back hundreds of years. However, for the most part, before the middle of the 20th century the ultimate determination of whether a paper was appropriate for scientific publication was made by editors or editorial committees. Opinions of external reviewers were sometimes sought when journal editors deemed it appropriate, but by no means was this the practice for most manuscripts. Over the last six or seven decades, external peer review of each submission by scientists chosen by the journal editor has become the standard. Similarly, decisions regarding whether or not to fund grant applications are now generally made by a panel of external reviewers. In the case of the NIH, these panels are called study sections and consist of scientists with expertise in the types of applications being referred to the study section for evaluation, along with (usually) a statistician or two and officials from the NIH who take care of organizing and running the meetings of the panel. The scientific members of a study section usually include "permanent" members, who are appointed to fixed terms on the study section, and ad hoc members, called in for one or a few meetings as needed and deemed necessary by the NIH.

I've not infrequently stolen the words from one of Winston Churchill's speeches to describe our current peer review system:

Many forms of Government have been tried and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.

Simply substitute the words "scientific evaluation" for "Government" and "peer review" for "democracy," and you get my drift. Peer review is, like any system devised by human beings, imperfect. Scientists know that it is not perfect or all-wise. Indeed, scientists probably complain about peer review more than anyone else, because we have to deal with it many times a year, either as applicants, authors, or peer reviewers. Of course, to the pseudoscientists and quacks we routinely discuss here, peer review is viewed as the equivalent of Cerberus, the guardian of the gates of Hades (the Underworld) who prevents the spirits of the dead from escaping, except in this case "escape" means breaking out of the crank journals and bottom-feeding "pay-to-publish" open access journals and getting their work published in a real scientific journal. OK, OK, I know it's not a perfect metaphor, but peer review isn't a perfect process; so I'll use it anyway. Besides, if you're a scientist trying to get a paper published and have had to deal with clueless peer reviewers, the image of the peer review process as a giant three-headed dog has undeniable appeal, given that most scientific papers are assigned to three reviewers.

I've been thinking about writing another somewhat general post about peer review at least since August, but since then something always managed to catch my interest when Sunday blogging time rolled around. (Truly, I am Dug the Dog when it comes to blogging.) I figured the topic would keep for another week. Then last week's New England Journal of Medicine featured a Perspective article by Charlotte Haug entitled "Peer-Review Fraud — Hacking the Scientific Publication Process," complete with an accompanying interview with her. I'm sure I'll be seeing this article featured on quack websites very soon. That is what we call, in the biz, an "in." So I dusted off the list of web pages I had been carefully hoarding at least since summer. Let's dig in.

Hacking Cerberus

Quacks love it when scientists complain about peer review because they think that those complaints validate their conspiratorial belief system about "close-minded" scientists trying to "suppress" their views. Of course, our pointing out the shortcomings of the peer review system is generally intended as a starting point from which either to improve the existing system or to discuss potential alternative systems to replace it, not as agreement that pseudoscience should be published in scientific journals. We know that science depends on transparency and honesty; if those are compromised, the trustworthiness of science itself can be compromised. In any event, beginning a few months ago, advocates of various pseudoscientific forms of medicine started circulating certain articles quoting them, citing these articles as evidence that science is irretrievably corrupt, broken, biased, or close-minded (take your pick of any or all), the implication being, of course, that their preferred form of quackery has legitimacy but is being unfairly excluded from the scientific literature by the peer review process.

In her article, Haug notes a disturbing trend in peer review: peer reviews that are outright fraudulent. In August, Springer retracted 64 articles from ten different journals "after editorial checks spotted fake email addresses, and subsequent internal investigations uncovered fabricated peer review reports." That came only months after BioMed Central, also owned by Springer, had retracted 43 articles for exactly the same reason. Haug notes:

"This is officially becoming a trend," Alison McCook wrote on the blog Retraction Watch, referring to the increasing number of retractions due to fabricated peer reviews.2 Since it was first reported 3 years ago, when South Korean researcher Hyung-in Moon admitted to having invented e-mail addresses so that he could provide "peer reviews" of his own manuscripts, more than 250 articles have been retracted because of fake reviews — about 15% of the total number of retractions.

How is it possible to fake peer review? Moon, who studies medicinal plants, had set up a simple procedure. He gave journals recommendations for peer reviewers for his manuscripts, providing them with names and e-mail addresses. But these addresses were ones he created, so the requests to review went directly to him or his colleagues. Not surprisingly, the editor would be sent favorable reviews — sometimes within hours after the reviewing requests had been sent out. The fallout from Moon's confession: 28 articles in various journals published by Informa were retracted, and one editor resigned.3

When I first found out about "fake" peer review, I had a hard time believing it. The main reason I was so incredulous was that I couldn't believe journal editors would be so clueless as to let something like this happen. After all, most peer reviewers work at university or government facilities; if you see, for example, a manuscript submission with suggested peer reviewers who have Gmail, Hotmail, or Yahoo! accounts (or any account not using the domain name of the university or institution where that peer reviewer works), you'd think that would at least raise a red flag to look a bit more closely. Yes, I know that some scientists might use their home e-mail addresses, but at the very least a non-university or non-institutional e-mail address should lead the editor to take a closer look. Most scientists' e-mail addresses can be found through their university's website or by looking up the most recent papers they've published as corresponding author; in the case of industry it's more difficult but not impossible.
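None of this screening requires anything fancy; even a crude automated check on author-suggested reviewers would have caught Moon's scheme. Here's a minimal sketch in Python of what such a check might look like; the free-mail list and the example addresses are illustrative assumptions on my part, not the logic of any real submission system:

```python
# A minimal sketch of an automated sanity check on author-suggested
# reviewers. The free-mail list and example data are illustrative
# assumptions, not the logic of any real editorial system.

FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def reviewer_red_flags(email: str, institution_domain: str) -> list[str]:
    """Return red flags for a suggested reviewer's e-mail address."""
    domain = email.rsplit("@", 1)[-1].lower()
    flags = []
    if domain in FREE_MAIL_DOMAINS:
        flags.append("free e-mail provider, not an institutional address")
    elif domain != institution_domain.lower():
        flags.append("domain does not match the reviewer's claimed institution")
    return flags

# Hypothetical example: a "reviewer" supposedly at example-university.edu
print(reviewer_red_flags("j.smith.reviews@gmail.com", "example-university.edu"))
# ['free e-mail provider, not an institutional address']
```

A flag like this wouldn't prove fraud, of course; it would just tell the editor to look more closely, exactly as suggested above.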

Unfortunately, given how relatively easy it is (or should be) to detect the kind of fake peer reviewers mentioned above, some researchers have become more sophisticated in their peer review fraud:

Peter Chen, who was an engineer at Taiwan's National Pingtung University of Education at the time, developed a more sophisticated scheme: he constructed a "peer review and citation ring" in which he used 130 bogus e-mail addresses and fabricated identities to generate fake reviews. An editor at one of the journals published by Sage Publications became suspicious, sparking a lengthy and comprehensive investigation, which resulted in the retraction of 60 articles in July 2014.

It goes beyond even a researcher creating his own "peer review and citation ring." There exist companies that offer manuscript preparation services to authors. Many are reputable and exist to help with editing and figure preparation. Some provide ghost writing services. Others, as this Committee on Publication Ethics (COPE) statement reports, offer services that include fabricated contact details for peer reviewers to be used during the submission process plus reviews from these fabricated addresses. COPE notes that some of these "peer reviewers" have "the names of seemingly real researchers but with email addresses that differ from those from their institutions or associated with their previous publications" and that "others appear to be completely fictitious." COPE notes that it's not clear how much the authors of manuscripts submitted using such services know, specifically whether they know that the reviewer names and e-mail addresses are fraudulent. My response to this is "Oh, really?"

It goes beyond even this, though, as a more detailed report in Nature on Hyung-In Moon's and Peter Chen's fraud documents. Moon and Chen both exploited a flaw at the heart of Thomson Reuters' ScholarOne, a publication-management system used by quite a few publishers. Again, it's a flaw so unbelievably obvious that, in this era of concern about identity theft and cyber-crime, it's incredible that this is how ScholarOne works:

Moon and Chen both exploited a feature of ScholarOne's automated processes. When a reviewer is invited to read a paper, he or she is sent an e-mail with login information. If that communication goes to a fake e-mail account, the recipient can sign into the system under whatever name was initially submitted, with no additional identity verification. Jasper Simons, vice-president of product and market strategy for Thomson Reuters in Charlottesville, Virginia, says that ScholarOne is a respected peer-review system and that it is the responsibility of journals and their editorial teams to invite properly qualified reviewers for their papers.

So, if an editor agrees to use one of the author's fake suggestions, that author is allowed into the ScholarOne system and can create whatever identity he wants as a registered "peer reviewer" in the system. Unfortunately, ScholarOne isn't the only system with such glaring vulnerabilities. Another system, Editorial Manager, does something no halfway well-designed system in 2015 should be doing:

Editorial Manager's main issue is the way it manages passwords. When users forget their password, the system sends it to them by e-mail, in plain text. For PLOS ONE, it actually sends out a password, without prompting, whenever it asks a user to sign in, for example to review a new manuscript. Most modern web services, such as Google, hide passwords under layers of encryption to prevent them from being intercepted. That is why they require users to reset a password if they forget it, often coupled with checking identity in other ways.
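(An aside for readers who don't program: the reason a well-designed service can't e-mail your password back to you is that it shouldn't even store your password. What follows is a minimal standard-library Python sketch of the usual approach, salted password hashing; it's purely illustrative and implies nothing about how any real editorial system is actually built.)

```python
# A minimal sketch of why a well-designed service cannot e-mail your
# password back to you: it stores only a salted hash, not the password.
# Standard-library Python; purely illustrative, not any real system's code.
import hashlib, hmac, secrets

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) to store; the password itself is discarded."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash and compare; the original is never recoverable."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored)

salt, stored = store_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess", salt, stored))                         # False
```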

Yes, I've experienced this very thing as a reviewer for journals using Editorial Manager. Even so, to me the Nature article is misguided in that it harps a bit too much on vulnerabilities in the various computer software platforms used by publishers to manage submissions and peer review, and a bit too little on the true flaw that allows self-peer review to occur. Don't get me wrong. Technological and security problems are serious. After all, no software should make it so easy for fake reviewers to be entered into the system, and no software should be sending out passwords in plain text by regular e-mail. However, the true problem that facilitates fraud of this sort lies less within the software used than within the system that uses the software. Even so, Nature's list of "red flags" that "you just might be dealing with fake peer reviewers if..." is rather simple. One is even amusing:

  • The author asks to exclude some reviewers, then provides a list of almost every scientist in the field.
  • The author recommends reviewers who are strangely difficult to find online.
  • The author provides Gmail, Yahoo or other free e-mail addresses to contact suggested reviewers, rather than e-mail addresses from an academic institution.
  • Within hours of being requested, the reviews come back. They are glowing.
  • Even reviewer number three likes the paper.

Number four amuses me just based on my own behavior. I rarely complete peer reviews in less than three days, and frequently I'm so busy that I'm late, such that the editorial software is sending me reminders. As for number five, here's why that's downright funny:

Yes, "reviewer number three" is notorious for being the one whose criticisms of a submitted manuscript are the most—shall we say?—pointed.

The fox guarding the henhouse?

It should be quite clear from the discussion above that the real practice that facilitates peer review fraud is the way that many journals ask authors for names of suggested peer reviewers and then actually use those names. I've always wondered about this myself, because, after all, at the very minimum, no one's going to suggest a peer reviewer who's likely to trash the paper being submitted. Even leaving aside the possibility of fake peer reviewers, using peer reviewers suggested by an author makes it far more likely that the review will be less rigorous and far more likely to recommend publication with few changes. After all, scientists are only human. If they're asked to pick their own peer reviewers, of course they're going to pick ones that maximize their chances of getting published and minimize their chances of having to do multiple revisions and more experiments to satisfy reviewers' criticisms.

Readers who aren't scientists and haven't dealt with peer review before might reasonably wonder: Why on earth do editors do this? Haug lists three reasons:

  1. In highly specialized fields, authors may actually be the best qualified to suggest suitable reviewers for the manuscript in question.
  2. It makes life easier for editors because finding peer reviewers can be difficult, given that it's unpaid work that can be quite demanding.
  3. Journals and publishers are becoming increasingly multinational, which means that it's become more difficult for editors and members of editorial boards to be familiar with all the scientists throughout the world working on a topic.

These all sound very reasonable, but for them to be valid reasons to use author-recommended reviewers there must be trust, honesty, and transparency because, as Steve Novella pointed out when discussing this issue, scientists are human beings, and some proportion of human beings will always cheat to gain an advantage. That can never be completely eliminated. However, any system with an incentive for cheating (and, make no mistake, there are major incentives for scientists to publish in good journals, as such publications can make their careers and provide evidence of productivity to be used in grant applications) should implement processes to make cheating more difficult and being caught cheating more costly.

It seems to me that, at the very minimum, the era of asking scientists for suggestions for peer reviewers for their own manuscripts must end. The reasons why many (but by no means all) journals have done so for so many years are quite understandable but no longer defensible in the wake of these damaging and large-scale incidents of self-peer review fraud. This practice must stop, even at the price of more work for already harried editors. One technological solution that might help would be a database of peer reviewers, each with his or her relevant field of expertise listed, as well as collaborators and those with whom they've published, so that editors know not to send a manuscript to an author's friend or collaborator for review. In the wake of these scandals, it might even be profitable for a company to develop such a database and sell access to publishers. Absent a system like this, it will fall on the shoulders of editors to be more careful and to pick peer reviewers themselves, rather than using any recommendations by authors submitting manuscripts.
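To make the idea concrete, here's a minimal sketch of what such a reviewer database might look like; the data model, field names, and sample records are all hypothetical, invented purely for illustration:

```python
# A minimal sketch of the reviewer-database idea: find qualified reviewers
# while screening out conflicts of interest. All names, fields, and records
# here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Reviewer:
    name: str
    institution: str
    expertise: set[str]
    recent_coauthors: set[str] = field(default_factory=set)

def has_conflict(reviewer: Reviewer, authors: set[str],
                 author_institutions: set[str]) -> bool:
    """Flag reviewers who recently co-published with, or share an
    institution with, any of the manuscript's authors."""
    return (bool(reviewer.recent_coauthors & authors)
            or reviewer.institution in author_institutions)

# Hypothetical records
db = [
    Reviewer("A. Jones", "Example U.", {"breast cancer"}, {"C. Smith"}),
    Reviewer("B. Lee", "Other U.", {"breast cancer"}),
]
eligible = [r.name for r in db
            if "breast cancer" in r.expertise
            and not has_conflict(r, {"C. Smith"}, {"Example U."})]
print(eligible)  # ['B. Lee'] -- A. Jones is excluded as a recent co-author
```

Editors would still have to exercise judgment; a database like this would simply take the author out of the loop when reviewers are chosen.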

Is the peer review system a "sacred cow" that needs slaughtering?

All this brings me back to the question in the heading above, which is based on a quote from Richard Smith, former editor of the British Medical Journal (now The BMJ). Speaking at a Royal Society meeting in April, Smith characterized the peer review system as a "sacred cow" ready to be "slaughtered." As you can imagine, that particularly juicy quote went down quite well among those who are less than enamored with science-based medicine, such as Robert Scott Bell and, of course, Mike Adams' minion Ethan Huff at NaturalNews.com, who twisted Smith's quote to read 'Sacred cow' of industry science cult should be slaughtered for the good of humanity, BMJ editor says.

Of course, what these accounts neglected to mention was that Smith made his remarks in the context of a debate with Georgina Mace, professor of biodiversity and ecosystems at University College London, with Smith taking the "anti-" position and Mace taking the "pro-." Thus, it might not be surprising for each debater to take a more extreme position. For instance, Smith actually characterized John Ioannidis' famous 2005 paper "Why most published research findings are false" as meaning that "most of what is published in journals is just plain wrong or nonsense," which is clearly not what Ioannidis was saying. Just because something turns out to be incorrect does not make it nonsense in the context of the time, and, in fact, Ioannidis was making an argument that prior plausibility has to be taken into account in doing and interpreting research studies, which is a key argument for science-based medicine.

Still, Smith did make some good points, particularly when he described a BMJ experiment in which a brief paper was sent to 300 reviewers with eight deliberate errors introduced into it. No reviewer found more than five; the median was two, and 20% didn't spot any. Of course, I would counter that this observation is not an indictment of peer review as a process, but rather evidence that BMJ under Smith's editorship didn't pick its peer reviewers very well and that peer review needs improvement. Perhaps, instead of scrapping peer review, we should work to improve it.

Fix it, don't dump it

Fixing peer review is more the approach taken by Richard Horton, the current editor-in-chief of The Lancet, who published an article around the same time entitled Offline: What is medicine's 5 sigma? Of course, whenever I hear Horton pontificate about peer review, it's hard for me not to remember that he was also the editor of The Lancet under whose regime Andrew Wakefield published his execrable 1998 Lancet case series that has been used to blame autism on the MMR vaccine for nearly 18 years. Still, after discussing the problems with research and peer review, Horton does make some decent points. Perhaps he has learned from l'affaire Wakefield:

Can bad scientific practices be fixed? Part of the problem is that no-one is incentivised to be right. Instead, scientists are incentivised to be productive and innovative. Would a Hippocratic Oath for science help? Certainly don’t add more layers of research red-tape. Instead of changing incentives, perhaps one could remove incentives altogether. Or insist on replicability statements in grant applications and research papers. Or emphasise collaboration, not competition. Or insist on preregistration of protocols. Or reward better pre and post publication peer review. Or improve research training and mentorship. Or implement the recommendations from our Series on increasing research value, published last year. One of the most convincing proposals came from outside the biomedical community. Tony Weidberg is a Professor of Particle Physics at Oxford. Following several high-profile errors, the particle physics community now invests great effort into intensive checking and re-checking of data prior to publication. By filtering results through independent working groups, physicists are encouraged to criticise. Good criticism is rewarded. The goal is a reliable result, and the incentives for scientists are aligned around this goal. Weidberg worried we set the bar for results in biomedicine far too low. In particle physics, significance is set at 5 sigma—a p value of 3 × 10–7 or 1 in 3·5 million (if the result is not true, this is the probability that the data would have been as extreme as they are).

I always love it when physicists suggest such a strategy, given how much more variability is inherent in biological and medical research, so much so that very few experiments ever reach that level of statistical significance. Still, I could see decreasing the p-value for "statistical significance" to 0.01 or even 0.001. I could even see eliminating the p-value altogether in favor of using Bayesian reasoning to estimate the probability that a given result is correct. Weidberg might be correct that the current value is not strict enough, but medicine isn't particle physics, and there is a huge difference between preliminary experiments with small numbers and large randomized clinical trials with a high prior plausibility.
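For those curious about the arithmetic behind Horton's numbers, the n-sigma thresholds are just tail probabilities of a standard normal distribution. A few lines of Python (using scipy purely to illustrate the arithmetic) show where the "1 in 3.5 million" figure comes from:

```python
# Tail probabilities of the standard normal: what "n sigma" means as a
# one-tailed p-value. Purely illustrative arithmetic.
from scipy.stats import norm

for n_sigma in (2, 3, 5):
    p = norm.sf(n_sigma)  # one-tailed P(Z > n_sigma)
    print(f"{n_sigma} sigma: p = {p:.1e} (about 1 in {1/p:,.0f})")

# 5 sigma gives p ~ 2.9e-07, i.e. roughly 1 in 3.5 million, matching
# the particle-physics threshold Horton quotes.
```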

There is value in some of Horton's other suggestions. Certainly, one problem is that, as much as we scientists want to do a good job at peer review, the fact remains that peer review is unpaid and, from an academic standpoint, doesn't really contribute much to our career advancement. For instance, when going up for promotion, assistant professors do have to show evidence of scholarly activity, such as peer review, but peer review is of low value in that equation compared to other activities. Publishing a single peer-reviewed paper in a decent journal is worth more than reviewing dozens of papers for journals, and a single NIH grant is worth more than reviewing any conceivable number of papers. Receiving neither significant financial nor career rewards for performing the onerous duty of peer review, scientists understandably don't knock themselves out to review papers. Is it any wonder, particularly given that, as Horton points out, there is no reward for high-quality reviews and a perverse incentive (i.e., fewer papers to review) for doing low-quality reviews? These are the sorts of impediments that have to be changed, along with tightening procedures to make self-peer review far more difficult to achieve. More radical changes could include a system of "open" peer review, in which reviewers are known and their comments follow the published paper, although such a system would present its own challenges, particularly given the reluctance of more junior faculty to publicly criticize the work of more renowned senior faculty.

What is becoming clear is that, whatever changes we make in the peer review system, we can't keep doing what we're doing any more. To return to the Churchill quote: at the moment, as flawed as it is, our peer review system is the best system we have for evaluating science. Until someone can come up with an alternative that works at least as well (admittedly not the highest bar in the world), it would be premature to abandon it. That doesn't mean it can't be improved. Contrary to Richard Smith's view, peer review is not a sacred cow, and it doesn't yet need to be slaughtered.


Good article.

My suggestion for improving peer review is to open it up - publish the names of reviewers along with the article. This would help solve the problem of fake reviewers, plus address the incidence of personal issues (e.g. competitive jealousies) getting in the way of "objective" reviews.

By Dangerous Bacon (not verified) on 28 Dec 2015 #permalink

There cannot be real improvement if we do not realize that the critical term in "peer review" is "peer". Homeopathy journals are peer reviewed, and neither you nor I would be happy to have one of our papers evaluated by such "peers". Being a productive author, even in conventional academic journals, is insufficient to qualify someone as a "peer".

By Daniel Corcos (not verified) on 28 Dec 2015 #permalink

Well, my husband dearest recently wrote an article that had to be peer-reviewed (nothing connected to medicine; humanities). And he was asked to supply names of possible reviewers. So he did - he gave names of three scholars specialising in his narrow field, more accomplished than he is. The thing is, they are not his colleagues; he's just met them at a few conferences, and they are from other institutions. He got a positive review overall, but with suggestions for changes, which he implemented. As far as I can tell - for the better.

As for publishing names of reviewers, I'm sceptical here, at least when thinking about my husband's field of research. Younger reviewers particularly would be afraid to provide negative reviews of older and more established scholars' work, as it could have a negative impact on their careers. Sad but true.

Double blind peer review. The reviewers don't know the author's name, the author doesn't know the reviewers' names. It's not perfect, sometimes it's easy to guess who the author is, but you should at least try. It's the standard in the field of cryptology. Letting the author give names of reviewers? Unthinkable.

The more serious problem is the

hypothesis -> prediction -> test -> success or fail.

In particular with Global Warming claimants.

When you query their predictions suddenly there's a new science where you don't have to make predictions.

Or the predictions are projections.

Or more likely a whole stream of abuse.

That's with peer reviewed science.

That's the problem. Peer review isn't the gold standard. It's a priori testing and prediction that gives you more confidence in a hypothesis.

Predictions that fail, falsify.

There's an inbuilt asymmetry.

So what does a failure to predict say about the skills of the original peer reviewers?

Reviewers not knowing the author's name is one potential change that might be beneficial (if complete disclosure is not considered feasible) - the problem is in highly specialized fields where there is a limited amount of active research, and it would be fairly obvious to reviewers who was submitting the paper.

By Dangerous Bacon (not verified) on 28 Dec 2015 #permalink

I remember that years ago one of my very first grad courses was in how to (figuratively) rip apart papers. It was excellent training for reading the literature and, hopefully, avoiding some of the major mistakes others had committed.

Perhaps as a replacement for the current peer review process we could hold competitive review sessions, with small rewards (Smarties, Mars Bars?), for starving graduate students or hungry & underpaid post-docs? Perhaps free pizza and bragging rights with the group, ... ?

This is likely to have several advantages: the reviewing process would be a bit more of a novel activity, it may be a real learning experience for the reviewers, who may have to do a bit of digging, and (grad students forgive me) the students are as likely or more likely to have the time to do thorough reviews.

Re the digging: I have found that actually looking at references, the actual papers that is, can be quite revealing. Abstracts can be misleading, probably unintentionally but still misleading; sometimes the cited paper does not exist, cannot be traced, or actually shows the exact opposite of what the reviewed paper claims.

On the other hand one does understand the issue the video was highlighting:

http://www.nature.com/scitable/blog/labcoat-life/when_peer_review_turns…

@ Laurent
I must admit I was surprised that all reviews are not double-blind. It is standard for most or all psychology papers.

By jrkrideau (not verified) on 28 Dec 2015 #permalink

Double-blind peer review is not really feasible, as authors often are citing themselves anyway. Paying reviewers (or some reward) might get somewhere, as there is indeed almost no reward for the job, which can be arduous. I do find asking the authors to suggest reviewers is asking for trouble. Asking the authors for names of reviewers to avoid (with good reasons for that) makes more sense.

Asking authors to supply names of reviewers has led colleagues in my field to establish their own ring of not-quite-fraudulent reviews: they simply all suggest each other, and everyone in the ring is sworn, even if only tacitly, to give each other's papers only glowing reviews. It works very well, and has resulted in the publication in good journals of a great many boring, sloppy papers, to the great advancement of the careers of the members of the ring.

Just sayin'. There's nothing illegal or fraudulent about that, and it happens, I am sure, in all fields. But it results in shoddy, even erroneous, science being published all the same. Authors suggesting reviewers has to stop. Supplying names of people *not* to be asked to review is worthwhile, even necessary.

And, no scientist who is junior to the one being reviewed, or even perhaps on the same level, would dare submit criticisms if their name was revealed as the reviewer. That would be the end of their career.

The editor has to be judicious and put work into the decision, including evaluating the reviewer and whether his/her critique is likely to be unbiased. More time, less rush. If that could ever happen.

By Garnetstar (not verified) on 28 Dec 2015 #permalink

Clearly we need more papers using all-caps for emphasis. Just can't get enough of that.

For all the talk of abandoning peer review, we have the examples of Medical Hypotheses or the old fast-track PNAS papers to show what things are like without it.

By herr doktor bimler (not verified) on 28 Dec 2015 #permalink

Personally, I don't like the practice of suggesting reviewers from either side. As an author, I don't do it unless the submission software forces me to do so. As an editor, I ignore the list completely. This practice is a hangover from pre-internet days, when you would select one of the author's choices and one of your own. With the various databases now available there is no excuse.

There are lots of other issues around peer review that need fixing. Editors should actually check through the manuscripts themselves and not just accept the word of the reviewers.

And then there are the bottom-feeding pay-to-play journals, for which I refuse to peer review as a waste of my time. Denialists are now starting to love peer review, for certain qualities of review.

By Chris Preston (not verified) on 29 Dec 2015 #permalink

What Garnetstar and Chris Preston said -- the keyhole through which these fraudsters gained entry is the practice of letting authors suggest their own reviewers. This is not done in my field (astronomy).

We also use only one well-selected (one hopes!) reviewer per paper, as a rule. This reduces the labor burden on the community and on the editors. On the other hand, if a bad paper gets published in astronomy, no one dies (as a rule).

A journal's editorial staff should have sufficiently fine-grained expertise to be able to find good reviewers on their own. The big astronomical journals have a platoon of associate editors who cover the waterfront pretty well -- papers tend to go to well-qualified reviewers. I've occasionally had less-than-savvy reviewers, but they're mostly pretty good.

I'm not sure if the video "Peer review ca. 1945" is still up, but if it is, it is an especially good bunker-meme parody.

By palindrom (not verified) on 29 Dec 2015 #permalink

some reviewers, at least at some research institutes, have to deliver their reviews by Ouija Board.

And why not? Last year I found a message in my spam-tray addressed to my co-author Dr Joachim Harloff, inviting him to contribute his papers to an India-based bottom-feeding mockademic vanity press, Global Journal of Human Social Science.* The problem being that Dr Harloff died in 2012. Perhaps I was expected to pass on the invitation through a seance.

* Available in "Online, 3D and Print versions", the 3D edition presumably being a pop-up book. I was disappointed by the absence of an interpretative-dance edition.

By herr doktor bimler (not verified) on 29 Dec 2015 #permalink

I was disappointed by the absence of an interpretative-dance edition.

ObStarman.

I agree with several of the comments suggesting that a significant portion of the problem is carelessness on the part of reviewing editors: a few weeks ago I heard an odd, aborted freak-out coming from one of my lab mates at his desk (something along the lines of: "AAAAAAH! $@(#($!*!!!.... wait...is this...what the...?") It turned out that he'd been asked to review his own paper! (the initial swearing was because, at first glance, he thought he'd been scooped.)

To be fair, the reviewing editors are themselves volunteers squeezing their editorial duties into whatever time they have left after research, writing, mentoring, faculty committees, etc. If journals had their own salaried, in-house editors, their names could be included with the authors' without fear of reprisal, giving them an incentive as well as the time to thoroughly vet both the manuscript itself and the reviewers they choose. As a bonus, it would create jobs for the ever-growing numbers of biomedical PhDs.

Obviously, the biggest issue will be finding the money to pay these editors. But I should think (hope?) that publishing in a journal known to employ a more rigorous form of peer review would confer a degree of prestige on the author(s) (kind of like the idea behind impact factors except that it would actually mean something) - and we already know that researchers will pay higher page fees to publish in a higher-impact journal that will look good on their CV.

^ I ran some hypothetical numbers to see how much a paid reviewing editor would increase the cost of publishing a paper. Assuming for the sake of argument that the editor is paid $60,000 per year, and reviews 100 papers per year (2 per week, less a 2-week vacation), the additional cost would be $600 per paper - not too bad. Now I'm kind of surprised more journals aren't already doing this.

@14 herr doktor bimler
The Global Journal of Human Social Science looks like a very reputable and worthy journal, I mean, see the review here http://scholarlyoa.com/?s=Global+Journals.

@16 Gilbert
That is an unusual link but I am not sure I understand it.

By jrkrideau (not verified) on 29 Dec 2015 #permalink

@ 17 Sarah A
If journals had their own salaried, in-house editors
Well, the academic publishing industry is based on free labour, and it would interfere with profits if the publishers had to pay "gasp" salaries for such things.

Most of the major academic/scientific presses are no longer particularly interested in "knowledge". They are now often owned by venture capital firms that want an ROI.

https://medium.com/@jasonschmitt/can-t-disrupt-this-elsevier-and-the-25…

There have even been reports of completely fake journals being published http://www.the-scientist.com/?articles.view/articleNo/27383/title/Elsev…

By jrkrideau (not verified) on 29 Dec 2015 #permalink

HDB, jrkrideau
http://scholarlyoa.com/?s=Global+Journals.

Dear editor,
In view of the negative reviews and considering the high level of competition, I am sorry to inform you that I have taken the decision to reject your journal.
Feel free to continue spamming my mailbox in the future.

By Daniel Corcos (not verified) on 29 Dec 2015 #permalink

gilbert @16, jrkrideau @19 -- I had to use my google-fu to figure it out. The link gilbert posted was to the ending scene of a Lars von Trier movie, Melancholia, in which a rogue planet that has intruded into the solar system collides with the earth, so everybody (and everything) dies, including Kirsten Dunst.

Nibiru I already knew about, more or less -- in the crank-o-sphere, it's believed to be an extra body in the outer reaches of the solar system that periodically causes all kinds of havoc on earth. NASA supposedly knows all about this and suppresses the truth -- of course. Don't ask me how NASA maintains a complete lock on all astronomical knowledge worldwide.

By palindrom (not verified) on 30 Dec 2015 #permalink

Oh, and Old Time Indians still get to smoke, guys.

Think about it. It's not "Tobacco Use Disorder." It's just something we do for our DEAD HOMIES, you know, the ones who FELL recently.

Thanks, Detroit.

See you soon.

@22 palindrom
Ah, I had very vaguely heard of Nibiru but totally missed the significance of the end of the movie. My apologies, Gilbert.

Re NASA, it's simple, it is completely on the control of the One World Order. (Well, it's as logical as any other "explanation".)

By jrkrideau (not verified) on 30 Dec 2015 #permalink

^ "in" the control. My earlier post was obviously interfered with by an agent of the New World Order.

By jrkrideau (not verified) on 30 Dec 2015 #permalink

Letting the author give names of reviewers? Unthinkable.

This ten times.

By Rich Woods (not verified) on 30 Dec 2015 #permalink

Twice now I've received emails from Academia.edu inviting me to accept a "co-author" tag on a paper with an "L. Brown" listed as an author, who is not me. I have no idea how they got my email address, but it's annoying. I tried their "Do not contact me" link after the first one, which didn't work.

No idea if Academia.edu is legit or not, but if this is the normal level of diligence in peer review, we are in a world of hurt.

By Lance A. Brown (not verified) on 02 Jan 2016 #permalink