Who's Afraid of Peer Review? by John Bohannon is about his experiments in sending a fatally flawed paper to a variety of open-access journals, and the appalling lack of rejections that followed (note that PLOS ONE correctly rejected it).
To make it harder to reject the paper simply on the grounds of "I can't find your institute on the internet" (and, I think, to simulate the target group), the paper was supposed to come from non-Western, non-native English speakers. And so:
...my native English might raise suspicions. So I translated the paper into French with Google Translate, and then translated the result back into English. After correcting the worst mistranslations, the result was a grammatically correct paper with the idiom of a non-native speaker
Isn't that lovely?
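For the curious, here's a minimal sketch of that round-trip trick in Python. The translate() helper is hypothetical, a stand-in for whatever machine-translation service you have access to; this is not Bohannon's actual tooling (he used the Google Translate web interface):

    # Round-trip translation to roughen idiomatically perfect English.
    # translate() is a hypothetical stand-in for any machine-translation
    # call with the signature translate(text, source_lang, target_lang) -> str.
    def roughen_english(text, translate, pivot="fr"):
        """Translate English text to a pivot language and back again.

        The round trip tends to keep the grammar intact while shedding
        native idiom; the worst mistranslations still need fixing by
        hand, as Bohannon describes.
        """
        pivoted = translate(text, "en", pivot)   # English -> French
        return translate(pivoted, pivot, "en")   # French -> back to English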
The serious point, as I take it, is the murky industry of pay-to-publish journals, which threatens either to pollute the science-o-sphere with trash or to rip off poor authors, or both. On the second point: well, it's always been part of science to know what the credible journals are in your field, based on their reputation, and based on the papers you've read that they've already published. If you submit to journals that are filled with trash, you've shot yourself in the foot.
Firstly, this is a hit-piece against open-access, and I suspect that a similar spoof against paywall journals might have similar results. For any detailed analysis, we'd need the data, which don't appear to be published. For instance, how many journals in DOAJ completed the review process?
Secondly, I think it shows that the quality-control measures coming from within the OA world are starting to work. The author specifically targeted Beall's "predatory publishers" list and found (surprise, surprise) that those publishers have very lax standards. We know that already: that's why they are on the list.
[I don't think this is a hit piece. You appear to be rather too dismissive of the problem of pay-to-publish attracting (surprise surprise) people who'll take your money in order to publish you. And the pollution that results. Paywall journals (for all their faults) have an inbuilt defence against this because they need people to buy them, and people won't do that if they publish drivel. Open access doesn't actually need readers -W]
And by sending a large fraction of the manuscripts to the list of "predatory publishers", you also get the suggestive average figure that Science Magazine, as a closed-access publisher, wanted to spread.
This is a warning that we should not only state "climate science is based on peer-reviewed articles"; it becomes more and more important to state that "climate science is based on scientific articles from reputable journals".
In the case of well-known scientists, I do not even care much about the peer-reviewed part. They put their own reputation on the line if they publish nonsense.
Nick, Victor, you sound like McIntyre or Curry accusing the IPCC of nefarious behavior - why immediately assume the worst of Science (published by the AAAS)? In fact, this sort of analysis of science publishers is rare and extremely valuable - read up on the Gordon and Breach case at http://barschall.stanford.edu/ if you think anybody ever thought this was exclusively an open-access problem - but also on the extreme legal difficulty anybody conducting such a study has faced. This was a brave thing to do. Publishers need to be held to account.
Of course it's a real problem, and long acknowledged as such by OA advocates. I repeat: the fact that publishers on Beall's list are churning out crap is not news. The list exists in order to warn people of this exact fact. I agree that there might be a valuable study here to be done. So do an actual study, not just journalism.
Paywall journals do publish drivel, and people^H^H^H^H^H^Hlibraries buy them anyway. Publishing costs are nearly zero, especially if you're not doing much or any peer review. And you can bundle your crappy journal together with a couple of hundred other crappy journals, and five which people actually need, and sell it anyway. That's what the Big Scam^H^H^H^HDeal was all about.
In other words, what the 'study' demonstrates is that there are some OA journals doing inadequate peer review. It does that by starting with a list of OA journals selected because they may do inadequate peer review. So: no actual news there then. It also does not demonstrate, nor could it demonstrate, a correlation with OA, let alone a causal link.
It's fun, but that's all it is.
"If you submit to journals that are filled with trash, you’ve shot yourself in the foot."
You seem to have experience in that area [incivility redacted]
[I think I'm missing your point. I have a number of respectable publications in respectable journals, and none in unrespectable ones. Perhaps you'd care to be more specific? -W]
"I repeat: the fact that publishers on Beall’s list are churning out crap is not news."
Not news to whom?
"Not news to whom?"
Not news to those who are aware of Beall's list, i.e. the authors of the paper in question.
Here is the description of Beall's list:
"Potential, possible, or probable predatory scholarly open-access publishers"
So, as Nick points out, the study has found that open-access publishers suspected of potential, possible, or probable predatory behavior are, in some cases, apparently guilty of predatory behavior.
The surprise would've been if this had *not* been true.
A more valuable approach would've been to select open access journals randomly from the set of all such journals, rather than only those on Beall's list.
I agree with Nick that they should have included some traditional journals in the batch. Every now and then I'll read a paper in one of those journals in my field and wonder, "How the &^%# did that get past the referees?" So even the traditional journals let something slip through now and then.
Even more disturbing was that several of the journals went to the trouble of soliciting referee reports which pointed out the flaws in the paper, and the editor recommended publication anyway. Why bother with a review process if you're going to ignore it?
A more valuable approach would’ve been to select open access journals randomly from the set of all such journals, rather than only those on Beall’s list.
They did. Only about half of the journals were on Beall's list. They weren't immune from the problem, and some of the journals on Beall's list actually rejected the paper, but the alleged predatory journals were much more likely to accept the paper.
I did not think this was a hit piece on OA, but then I read the overview of that whole section, and read the Bohannon article to the end, including the coda. I've corresponded with Beall and Ginsparg over the years, and found both pretty reasonable.
Beall's list gets criticized: I actually took this article more as an evaluation of Beall's list than anything else. Bohannon explains that the original project would have included non-OA journals, but he noted the long delays involved.
I once asked Beall about a specific journal and publisher, as I had concerns. He was going to put it on his list to keep an eye on, but after further research I told him I thought it looked more like incompetence, where somebody slips a bad paper through a legitimate but weak peer review at a peripheral journal. He agreed, and noted he really only wanted to list the predators, not possible incompetents.
Anyway, I don't think this paper was a blanket condemnation of OA or guarantee that paid journals were good, just that there were a lot of predatory OAs and Beall's list was a decent red flag. The problem is not the professionals, but the public.
Oops, good discussion of this at Retraction Watch.
I already carped about this at length on Pharyngula, but briefly, deluging the natural products chemistry reviewer community with hundreds of worthless crappy papers that, oh, by the way, just happen all to have been written by Africans has potentially ugly bias-reinforcing repercussions.
[I thought the point was that most of them never even saw a reviewer? -W]
"If you submit to journals that are filled with trash, you’ve shot yourself in the foot."
That doesn't matter if you're in a place where your reputation or career advancement isn't linked to the quality and credibility of your work, but simply the sheer numbers of papers you managed to publish somewhere, anywhere.
Such places exist on earth.
To the complaint that the data weren't published: the authors provide all manner of data. You can read every single version of the spoof paper. That's paired with a spreadsheet which tells you which paper went to which journal, what the result was, and whether the journal is on the DOAJ or Beall list. And the article itself describes their methods pretty well.
You can complain that some fraction of traditional-format journals would have accepted the paper as well, and somebody is quoted in the coda to that effect. We know what can happen - the professor hands the review off to some grad student, who may or may not do a good job, and anyway isn't really rewarded for doing a good job. But while peer review can go awry at the traditional journals, and while peer review can work well at the open-access journals, it seems quite clear that a number of open-access journals are somewhere between being useless and being scams.
carrot eater: you are right and I was wrong: the data is published. I didn't see the link to 'Data and Documents'. http://www.sciencemag.org/content/342/6154/60/suppl/DC1
As I said in a comment elsewhere on SB, using only African names and institutions is a bigtime no-no: it opens the whole project to charges of racism, and/or it becomes fodder for racists in any of a number of ways. Bohannon didn't need to be a covert ops specialist to set up some cover in other parts of the world; he could have hired someone to do that for him. I'd be willing to bet that an institution in New Zealand or a rural address in Canada or Australia might have worked, particularly with a minimal website.
The absence of using conventional journals as a control case is a fatal flaw: we don't know the extent to which conventional journals have similar flaws to those of some of the OA journals. And now that the exercise has been done once, it will be years before complacency returns such that it could be done again with both types of journals in the mix.
Meanwhile, PLoS One comes out looking good, which demonstrates that alternative publishing models can succeed without compromising their scientific integrity.
[Only you seem to be worried about the racism problems, which don't seem to be real. Not doing it to conventional journals is indeed a flaw, though not fatal -W]
In this week's Science Podcast:
John Bohannon describes to Kristy Hamilton what his sting operation reveals about the dark side of open-access publishing.
That is the headline. It is nice that, if you read the article and interpret it yourself, you can see that the numbers are bad because a list of predatory publishers, maintained by the open-access community itself, was targeted, and that pay-wall publishers were not even investigated. But that is the headline.
Just because impossible conspiracy theories blossom among the climate ostriches does not mean that there are no conspiracies (Brutus), nor that publishing houses do not have interests. I would only call this a conspiracy if they had coordinated a campaign together with Nature and Cell.
Also sending the manuscript to traditionally publishing journals would have been easy to do. You could even have sampled the same number of submissions.
Might Eli point the crowd to Multi-periodic climate dynamics: spectral analysis of long-term instrumental and proxy temperature records by H.-L. Lüdecke, A. Hempelmann, and C. O. Weiss.
Whether or not Science has some bias on this, the article explained the domain of its study and its conclusions about that domain, i.e., that OA journals varied wildly in their quality of review, from very good to (many) very awful.
The author consulted with people who have contributed seriously to OA, such as Ginsparg. The context around this was that OA was important and could work, but quality control was highly variable, the big increase in volume was challenging, and issues needed to be worked on.
Suppose a tobacco research report reported smoking patterns among males 18-24. Would its results be fatally flawed because it did not report on females, doubling the size of the study?
I think it would be a dandy thing to do this experiment with both subscription and OA journals, although I'd guess it might not be as easy to find so many appropriate subscription journals, and I suspect that if it were a lesser number, people would claim the study flawed.
But that is a different experiment, which anyone is free to do if they spend 2X the effort that Bohannon did and take longer, since turnaround times seem likely to be longer, especially to drive a paper like this through revision cycles.
All this is highly reminiscent of proprietary vs open source software. Some of the former is very good, some is bad. Some of the latter is really good, but there's long been plenty that doesn't make it.
You reply to me:
"[I thought the point was that most of them never even saw a reviewer? -W]"
Post facto, you can say that some copies went unreviewed, though clearly many were reviewed; a priori, if you're sending manuscripts to 304 journals, most of which doubtless claim to be peer-reviewed, you have to presume that many of them will indeed be sent to reviewers.
[But we're talking post-facto, so that's fine. Your pre-facto assumption that most would be reviewed turned out to be wrong. Don't cling to it -W]
Then you reply to G.:
"[Only you seem to be worried about the racism problems, which don't seem to be real. Not doing it to conventional journals is indeed a flaw, though not fatal -W]"
No, I'm also worried about the racial implications. On other blogs I've seen people who feel entitled to automatically reject certain results if they come from Asia, because, they say, there are problems with Asian research.... What do you suppose such folks are inclined to think of African research, which often necessarily uses methods that were fashionable in the West when Western scientists were as ill-funded as African scientists are today, and is then written up by a scientist who may be writing in his fourth language, and who doesn't know that quality American journals nowadays expect you to provide exhaustive detail on X, Y or Z, because he can't afford to subscribe to such journals? It is hard for African scientists to get their work into international journals, so many people, at best, casually assume that no science worth paying attention to is being done in Africa.
Now suppose that one of those people is asked to review a paper by an "African faculty member" that displays not just unfashionable M&M or poor English but a total lack of comprehension of basic issues; will that not reinforce his bias? Multiply that by a few hundred reviewers to estimate the potential negative impact of this study. Your dismissal of that as not "seeming" to be real lacks evidentiary support, as nobody has surveyed the reviewers about their experiences. (In fact, it is not clear to me whether they were individually notified that they'd been hoaxed.) I have very hard-working African colleagues who accomplish as much as they can with almost no resources, and this offends me.
Note that I am not saying that the hoaxer is a racist. He correctly argues that if he were to create phony people or institutions in America and attach their names to such total garbage, reviewers might start Googling them and get suspicious when nothing turned up. Besides, making the authors foreign gives you the chance to use broken English and then sneer at journals who overlook it. Fine. But surely it would have been plausible to make up some dinky institutions in parts of South America, Asia, and even, gasp, Europe that might not have a web presence? There does seem to be a possible implication that if you want reviewers to accept at face value that total garbage is really someone's best effort, rather than suspecting it's a hoax or mistake and asking more questions, just slap an African name on it, and it will seem more plausible. This too is very problematic.
[Any European institute without a web presence would be suspect. The problem with reinforcing racist stereotypes exists, but the real problem is the racist stereotypes themselves. "making the authors foreign gives you the chance to use broken English and then sneer at journals who overlook it" - AFAIK, that formed no part of the evaluation, which was entirely about the journals' failure to apply peer review at all, or to critically examine the actual scientific content. Perhaps you're thinking of some other study? -W]
I maintain that this is an anti-OA hit-piece, or certainly that it is being used as such by the anti-OA vested interests. It's completely obvious that this research can't possibly reveal anything about whether OA is better or worse than non-OA, but that is how it is presented. It's also obvious that this research *does* reveal that Beall's list is a strong indicator of bogosity, and that Beall journals are considerably worse than others, but that is not emphasized, and barely even mentioned in the second- and third-hand reporting.
Out of 304 cases, only 255 were either accepted or rejected. Of the rest, the spreadsheet lists them as either 'dead' (29), 'review' (20) or 'submission fee required' (10). It's not clear what 'dead' and 'review' mean: 'review' presumably means that the paper was apparently sent for review, with no further outcome. I think that should count a little in the credit column, but let's disregard them (FWIW they are 14 from the DOAJ list and 6 from Beall's, with zero from both). I guess that 'dead' means a nonexistent journal, which we should disregard (breakdown is 10,18,1). 'submission fee required' I would regard as a red light in any case (curiously, it breaks down as 8,2,0). So we have to use 255 as our denominator in any more detailed analysis.
The first thing to note is that there is almost no overlap between the DOAJ and Beall's list: the 255 breaks down as 144 DOAJ: 97 Beall's: 14 both. Nobody believes that the DOAJ is a gold standard of publication: a lot of somewhat dodgy-looking journals are on it, but you can see at once that there is some filtering there, a low hurdle but a hurdle nonetheless. As you say, "it's always been part of science to know what the credible journals are in your field".
The ideal outcome for submissions like these is that the editor takes a quick look and rejects it out of hand. That happened to 70 cases, 27.5% (64 of those were journals on the DOAJ list, 3 on Beall's list, and 3 on both. That's 42% for DOAJ journals and 5% for Beall journals: a striking contrast).
The next best outcome is rejection after review. It's more wasteful but it doesn't result in bogus publications. That happened 28 times, 11% (16:10:2, which is 11% for both DOAJ and Beall). Putting all rejections together, DOAJ journals are getting it right more than half the time (barely), as against 16% for Beall.
The troubling cases are those which get accepted: 157 (64:84:9, which is 46% of DOAJ versus 84% for Beall). Of those, undoubtedly the worst problem is acceptance without any review: 82 (29:47:6, or 22% to 48%).
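As a sanity check, here is a short Python sketch that recomputes the percentages above from the (DOAJ-only : Beall-only : both) triples quoted in this comment; the denominators are the 144 + 14 = 158 DOAJ journals and 97 + 14 = 111 Beall journals (the counts are copied from the comment, not independently re-derived from the spreadsheet):

    # Recompute the percentages from the (DOAJ-only, Beall-only, both) triples.
    DOAJ_TOTAL = 144 + 14    # DOAJ-only journals plus those on both lists
    BEALL_TOTAL = 97 + 14    # Beall-only journals plus those on both lists

    outcomes = {
        "rejected without review": (64, 3, 3),
        "rejected after review":   (16, 10, 2),
        "accepted":                (64, 84, 9),
        "accepted without review": (29, 47, 6),  # a subset of "accepted"
    }

    for outcome, (doaj_only, beall_only, both) in outcomes.items():
        doaj_pct = 100 * (doaj_only + both) / DOAJ_TOTAL
        beall_pct = 100 * (beall_only + both) / BEALL_TOTAL
        print(f"{outcome:25s} DOAJ {doaj_pct:3.0f}%   Beall {beall_pct:3.0f}%")

Run, this reproduces the figures in the text: 42% vs 5% rejected without review, 11% vs 11% rejected after review, 46% vs 84% accepted, and 22% vs 48% accepted without any review.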
"Your pre-facto assumption that most would be reviewed turned out to be wrong. Don't cling to it."
It's not my "pre-facto" - is that a word? - assumption, as I didn't have anything to do with the design of this study, but it should have been the authors' assumption.
[Why should you (now) suggest they assume something that we (now) know to be wrong? That's just silly -W]
If one of the hypotheses you wish to test, albeit without a control group, is that open-access journals print garbage without reviewing it, then the corresponding null hypothesis is that open-access journals that claim to do peer review actually do. And in fact, the numbers helpfully obtained and cited by Nick Barnes above show that a majority of the papers sent were indeed reviewed, either by immediately rejecting editors or by peer reviewers. The fact that some peer reviewers either did not see the problems with the manuscript or were ignored by the editors is a problem - but it does not mean they did not exist.
By the way, I have personally encountered Eastern European, as well as Asian and South American, institutions that had little if any web presence of their own.