Stones, glass houses, etc.

John Bohannon of Science magazine has developed a fake science paper generator. He wrote a simple little program: push a button, and out come hundreds of phony papers, each unique, with different authors, different molecules, and different cancers, in a format painfully familiar to anyone who has read a cancer journal recently.

The goal was to create a credible but mundane scientific paper, one with such grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable. Submitting identical papers to hundreds of journals would be asking for trouble. But the papers had to be similar enough that the outcomes between journals could be comparable. So I created a scientific version of Mad Libs.

The paper took this form: Molecule X from lichen species Y inhibits the growth of cancer cell Z. To substitute for those variables, I created a database of molecules, lichens, and cancer cell lines and wrote a computer program to generate hundreds of unique papers. Other than those differences, the scientific content of each paper is identical.
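A minimal sketch of that Mad Libs construction, in Python; the databases below are invented placeholders standing in for Bohannon's real lists of molecules, lichens, and cell lines:

    import itertools

    # Toy stand-ins for the databases described above (placeholders,
    # not Bohannon's actual entries).
    MOLECULES = ["anethole", "lichenin", "physodic acid"]
    LICHENS = ["Parmelia sulcata", "Usnea barbata", "Cladonia rangiferina"]
    CELL_LINES = ["HeLa", "MCF-7", "A549"]

    TEMPLATE = ("{molecule}, isolated from the lichen {lichen}, "
                "inhibits the growth of {cells} cancer cells")

    def generate_claims():
        """Yield one unique central claim per (X, Y, Z) combination."""
        for molecule, lichen, cells in itertools.product(
                MOLECULES, LICHENS, CELL_LINES):
            yield TEMPLATE.format(molecule=molecule, lichen=lichen, cells=cells)

    for claim in generate_claims():
        print(claim)

Even three entries per list yield 27 unique papers; larger databases get you into the hundreds.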

The fictitious authors are affiliated with fictitious African institutions. I generated the authors, such as Ocorrafoo M. L. Cobange, by randomly permuting African first and last names harvested from online databases, and then randomly adding middle initials. For the affiliations, such as the Wassee Institute of Medicine, I randomly combined Swahili words and African names with generic institutional words and African capital cities. My hope was that using developing world authors and institutions would arouse less suspicion if a curious editor were to find nothing about them on the Internet.
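The author and affiliation generators can be sketched the same way; the name pools here are my own placeholders for the online databases he describes harvesting:

    import random
    import string

    # Placeholder pools standing in for the harvested name databases.
    FIRST_NAMES = ["Ocorrafoo", "Anya", "Kwame"]
    LAST_NAMES = ["Cobange", "Nkemelu", "Okafor"]
    SWAHILI_WORDS = ["Wassee", "Tumaini", "Maji"]
    INSTITUTION_WORDS = ["Institute of Medicine", "University", "Research Centre"]
    CAPITALS = ["Nairobi", "Kampala", "Accra"]

    def fake_author():
        """Random first/last pairing with randomly added middle initials."""
        initials = " ".join(random.choice(string.ascii_uppercase) + "."
                            for _ in range(random.randint(1, 2)))
        return f"{random.choice(FIRST_NAMES)} {initials} {random.choice(LAST_NAMES)}"

    def fake_affiliation():
        """Swahili-style word + generic institutional phrase + African capital."""
        return (f"{random.choice(SWAHILI_WORDS)} "
                f"{random.choice(INSTITUTION_WORDS)}, {random.choice(CAPITALS)}")

    print(fake_author(), "of the", fake_affiliation())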

The data are totally fake, and the fakery is easy to spot — all you have to do is read the paper and think a teeny-tiny bit. The only way they'd get through a review process is if there were negligible review and the papers were basically rubber-stamped.

The papers describe a simple test of whether cancer cells grow more slowly in a test tube when treated with increasing concentrations of a molecule. In a second experiment, the cells were also treated with increasing doses of radiation to simulate cancer radiotherapy. The data are the same across papers, and so are the conclusions: The molecule is a powerful inhibitor of cancer cell growth, and it increases the sensitivity of cancer cells to radiotherapy.

There are numerous red flags in the papers, with the most obvious in the first data plot. The graph's caption claims that it shows a "dose-dependent" effect on cell growth—the paper's linchpin result—but the data clearly show the opposite. The molecule is tested across a staggering five orders of magnitude of concentrations, all the way down to picomolar levels. And yet, the effect on the cells is modest and identical at every concentration.

One glance at the paper's Materials & Methods section reveals the obvious explanation for this outlandish result. The molecule was dissolved in a buffer containing an unusually large amount of ethanol. The control group of cells should have been treated with the same buffer, but they were not. Thus, the molecule's observed "effect" on cell growth is nothing more than the well-known cytotoxic effect of alcohol.

The second experiment is more outrageous. The control cells were not exposed to any radiation at all. So the observed "interactive effect" is nothing more than the standard inhibition of cell growth by radiation. Indeed, it would be impossible to conclude anything from this experiment.
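That first red flag is the sort of thing even a script could catch. A minimal sketch, using invented numbers shaped like the data the papers describe (concentrations spanning five orders of magnitude, with a modest and essentially identical effect at every dose), checks whether the response actually trends with dose:

    from scipy.stats import spearmanr

    # Invented data in the shape the fake papers describe: doses from
    # picomolar up through five orders of magnitude, with a flat response.
    concentrations = [1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7]  # molar
    growth_inhibition = [0.18, 0.17, 0.19, 0.18, 0.18, 0.17]  # fraction inhibited

    rho, p = spearmanr(concentrations, growth_inhibition)
    print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")
    # A genuinely dose-dependent effect should show a strong monotonic
    # trend; a flat response like this one shows none.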

This procedure should all sound familiar: remember Alan Sokal? He carefully hand-crafted a fake paper full of po-mo gobbledy-gook and buzzwords, and got it published in Social Text — a fact that has been used to ridicule post-modernist theory ever since. This is exactly the same thing, enhanced by a little computer work and mass produced. And then Bohannon sent out these subtly different papers to not one, but 304 journals.

And not literary theory journals, either. 304 science journals.

It was accepted by 157 journals and rejected by 98; the remaining 49 had either gone derelict or were still reviewing when Bohannon tallied the results.

So when do we start sneering at science, as skeptics do at literary theory?

Most of the publishers were Indian — that country is developing a bit of an unfortunate reputation for hosting fly-by-night journals. Some were flaky personal obsessive "journals" that were little more than a few guys with a computer and a website (think Journal of Cosmology, as an example). But some were journals run by well-known science publishers.

Journals published by Elsevier, Wolters Kluwer, and Sage all accepted my bogus paper. Wolters Kluwer Health, the division responsible for the Medknow journals, "is committed to rigorous adherence to the peer-review processes and policies that comply with the latest recommendations of the International Committee of Medical Journal Editors and the World Association of Medical Editors," a Wolters Kluwer representative states in an e-mail. "We have taken immediate action and closed down the Journal of Natural Pharmaceuticals."

Unfortunately, this sting had a major flaw. It was cited as a test of open-access publishing, and it's true, there are a great many exploitative open-access journals. These are journals where the author pays a fee, sometimes a rather large fee of thousands of dollars, to publish papers that readers can view for free. You can see where the potential problems arise: the journal editors profit by accepting papers, the more the better, so there's pressure to reduce quality control. It's also a situation in which con artists can easily set up a fake journal with an authoritative title, rake in submissions, and then, perfectly legally, publish them. It's a nice scam. You can also see why Elsevier would love it.

But it's unfair to blame open-access journals as a class for this problem. Bohannon even notes that one open-access journal was exemplary in its treatment of the paper.

Some open-access journals that have been criticized for poor quality control provided the most rigorous peer review of all. For example, the flagship journal of the Public Library of Science, PLOS ONE, was the only journal that called attention to the paper's potential ethical problems, such as its lack of documentation about the treatment of animals used to generate cells for the experiment. The journal meticulously checked with the fictional authors that this and other prerequisites of a proper scientific study were met before sending it out for review. PLOS ONE rejected the paper 2 weeks later on the basis of its scientific quality.

The other problem: NO CONTROLS. The fake papers were sent off to 304 open-access journals (or, more properly, pay-to-publish journals), but not to any traditional journals. What a curious omission — that's such an obvious aspect of the experiment. The results would be a comparison of the proportion of traditional journals that accepted it vs. the proportion of open-access journals that accepted it… but as it stands, I have no idea if the proportion of bad acceptances within the pay-to-publish community is unusual or not. How can you publish something without a control group in a reputable science journal? Who reviewed this thing? Was it reviewed at all?
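To see why the missing control matters: with a second sample of subscription journals, the result would reduce to a simple two-proportion comparison. A sketch, in which the subscription-journal row is entirely hypothetical because Bohannon never collected it:

    from scipy.stats import chi2_contingency

    # Open-access row: Bohannon's reported decisions (157 accepted, 98 rejected).
    # Subscription row: invented numbers; this is precisely the missing control.
    table = [
        [157, 98],   # open-access (pay-to-publish) journals
        [40, 215],   # hypothetical subscription journals
    ]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.3g}")
    # Without a real second row, there is no way to say whether a ~62%
    # acceptance rate among decided submissions is unusual for journals
    # in general.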

Oh. It's a news article, so it gets a pass on that. It's also published in a prestigious science journal, the same journal that printed this:

This week, 30 research papers, including six in Nature and additional papers published online by Science, sound the death knell for the idea that our DNA is mostly littered with useless bases. A decade-long project, the Encyclopedia of DNA Elements (ENCODE), has found that 80% of the human genome serves some purpose, biochemically speaking. Beyond defining proteins, the DNA bases highlighted by ENCODE specify landing spots for proteins that influence gene activity, strands of RNA with myriad roles, or simply places where chemical modifications serve to silence stretches of our chromosomes.

And this:

Life is mostly composed of the elements carbon, hydrogen, nitrogen, oxygen, sulfur, and phosphorus. Although these six elements make up nucleic acids, proteins, and lipids and thus the bulk of living matter, it is theoretically possible that some other elements in the periodic table could serve the same functions. Here, we describe a bacterium, strain GFAJ-1 of the Halomonadaceae, isolated from Mono Lake, California, that is able to substitute arsenic for phosphorus to sustain its growth. Our data show evidence for arsenate in macromolecules that normally contain phosphate, most notably nucleic acids and proteins. Exchange of one of the major bio-elements may have profound evolutionary and geochemical importance.

I agree that there is a serious problem in science publishing. But the problem isn't open-access: it's an overproliferation of science journals, a too-frequent lack of rigor in review, and a science community that generates least-publishable-units by the machine-like application of routine protocols in boring experiments.

Okay, I'm not happy about the choice to send out hundreds of crappy manuscripts all from fictitious Africans, no matter how well-argued the rationale. It so happens that many such papers from African authors would be called second-rate by current American standards. That is not to say that they are blatantly wrong and meaningless, as was the case here. The reasons for their "poorer" quality include language barriers; the fact that developing-country researchers often use older, cheaper methods that are no longer popular in wealthy nations, and which may be sneeringly dismissed as "obsolete" by American reviewers even though they were thought useful when they were popular here; and the fact that when Western scientists set the ever-changing fashion for what types of information must be provided in a "good" publication, people in other countries will inevitably be a bit behind the curve. On the one hand, reviewers who are aware of those issues may review papers from poorer and non-Anglophone countries more leniently to facilitate those countries' participation in international science; it may therefore be that if these fake papers had been from American institutions, more of them would have been thrown out. On the flip side, hundreds of reviewers have now been exposed to alleged examples of African science that are not just unfashionable or tersely written up but outright garbage, and they might have ended up saying to themselves, "So this is the kind of crap 'science' Kenya is doing now, huh?", exacerbating a too-common view of science as a Western practice that non-Westerners sadly just don't do very well... I hope, therefore, that each of the reviewers was later notified that the manuscript was actually manufactured by an American.

WHERE THE FAULT LIES

To show that the bogus-standards effect is specific to Open Access (OA) journals would of course require submitting also to subscription journals (perhaps equated for age and impact factor) to see what happens.

But it is likely that the outcome would still be a higher proportion of acceptances by the OA journals. The reason is simple: Fee-based OA publishing (fee-based "Gold OA") is premature, as are plans by universities and research funders to pay its costs:

Funds are short and 80% of journals (including virtually all the top, "must-have" journals) are still subscription-based, thereby tying up the potential funds to pay for fee-based Gold OA. The asking price for Gold OA is still arbitrary and high. And there is legitimate concern that paying to publish may inflate acceptance rates and lower quality standards (as the Science sting shows).

What is needed now is for universities and funders to mandate OA self-archiving (of authors' final peer-reviewed drafts, immediately upon acceptance for publication) in their institutional OA repositories, free for all online ("Green OA").

That will provide immediate OA. And if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions), that will in turn induce journals to cut costs (print edition, online edition), offload access-provision and archiving onto the global network of Green OA repositories, downsize to just providing the service of peer review alone, and convert to the Gold OA cost-recovery model. Meanwhile, the subscription cancellations will have released the funds to pay these residual service costs.

The natural way to charge for the service of peer review then will be on a "no-fault basis," with the author's institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.

That post-Green, no-fault Gold will be Fair Gold. Today's pre-Green (fee-based) Gold is Fool's Gold.

None of this applies to no-fee Gold.

Obviously, as Peter Suber and others have correctly pointed out, none of this applies to the many Gold OA journals that are not fee-based (i.e., do not charge the author for publication, but continue to rely instead on subscriptions, subsidies, or voluntarism). Hence it is not fair to tar all Gold OA with that brush. Nor is it fair to assume -- without testing it -- that non-OA journals would have come out unscathed, if they had been included in the sting.

But the basic outcome is probably still solid: Fee-based Gold OA has provided an irresistible opportunity to create junk journals and dupe authors into feeding their publish-or-perish needs via pay-to-publish under the guise of fulfilling the growing clamour for OA:

Publishing in a reputable, established journal and self-archiving the refereed draft would have accomplished the very same purpose, while continuing to meet the peer-review quality standards for which the journal has a track record -- and without paying an extra penny.

But the most important message is that OA is not identical with Gold OA (fee-based or not), and hence conclusions about peer-review standards of fee-based Gold OA journals are not conclusions about the peer-review standards of OA -- which, with Green OA, are identical to those of non-OA.

For some peer-review stings of non-OA journals, see below:

Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.

Harnad, S. R. (Ed.). (1982). Peer commentary on peer review: A case study in scientific quality control (Vol. 5, No. 2). Cambridge University Press.

Harnad, S. (1998/2000/2004). The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive, 5 (2000); and in Shatz, D. (Ed.) (2004), Peer Review: A Critical Inquiry (pp. 235-242). Rowman & Littlefield.

Harnad, S. (2010). No-fault peer review charges: The price of selectivity need not be access denied or delayed. D-Lib Magazine, 16(7/8).

By Stevan Harnad (not verified) on 04 Oct 2013 #permalink

Agreed with Jane: using only African names and institutions was a big-time no-no. Holy cow! Anyone with half a brain should know that doing that could become fodder for racists in any of a number of ways, and could itself be called out as racist.

Understood, Bohannon needed to create institutions that had plausibility for not being found with a web search. Understood, he wasn't a covert operations expert who could create a quantity of fictitious institutions with full websites, faculties, phone numbers, email addresses, etc. etc. But he at least could have come up with a few plausible scenarios in other parts of the world, or hired someone with relevant background to do it for him.

As PZ says, the absence of mainstream subscription journals as a control case is a fatal flaw. What we end up with here is journalistic research, not scientific research. It might be useful by way of exposing the shoddy practices at some of these OA journals, and giving PLoS a helpful boost by way of contrast. But it's not exactly taking down a major target, as Sokal did with the postmodernist plague that was crapping up universities like a norovirus epidemic.

Someone needs to run a test comparing a handful of the better OA journals with a handful of mainstream subscription journals.

In the end, the entire model of scientific publishing needs some serious re-think. Ideal case: universities support journals on a kind of "progressive taxation" basis, peer review doesn't come out of working scientists' salaries, and results are freely open to the public after some small delay such as six months.

Notice that the bad paper, in all its trivial variations, is on a medical topic, while the Sokal hoax paper was literary. Peer review in the physical sciences is a lot more robust. Watch the public relations agents from the coal industry's "climate science skeptics" project jump all over this thing.

and results are freely open to the public after some small delay such as six months

That's not a small delay if you actually are a researcher. My institution has a subscription that provides online access to Nature, Science, and Elsevier journals, but not to Springer journals. My previous institution got Elsevier and Springer, IIRC, but neither Science nor Nature!

By David Marjanović (not verified) on 07 Oct 2013 #permalink

I presume the "6 month delay" suggestion was modeled on the NIH Public Access Policy, which guarantees that the public can have access to NIH-funded research, in which case the delay may not be a great concern (although, as you correctly note, a big pain to researchers who don't have immediate access through institutional subscription).

By Nick Theodorakis (not verified) on 08 Oct 2013 #permalink

PZ's claim of 'no controls' is a straw man. The test of the sample is valid for the described sample. What PZ calls a 'control' is really just arbitrarily enlarging the sample size (i.e., why would Nature or Science merit treatment as de facto controls?). The study shows what it shows for the sample selected and described.

By Douglas Watts (not verified) on 11 Oct 2013 #permalink

The study shows what it shows for the sample selected and described.

The paper's own conclusions, however, go well beyond this; they could only be tested by enlarging the sample exactly as PZ described!

By David Marjanović (not verified) on 12 Oct 2013 #permalink

Everyone knows the peer review system has a few cracks. I await the next good article so that I might copy sections of it, and have my paper look fresh.

By Gerard Holton (not verified) on 13 Oct 2013 #permalink

One glance at the paper’s Materials & Methods section reveals the obvious explanation for this outlandish result. The molecule was dissolved in a buffer containing an unusually large amount of ethanol. The control group of cells should have been treated with the same buffer, but they were not. Thus, the molecule’s observed “effect” on cell growth is nothing more than the well-known cytotoxic effect of alcohol.

This is the explanation for the seeming "results" of a fair number of homeopathy papers I've seen.

This is the explanation for the seeming “results” of a fair number of homeopathy papers I’ve seen.

*blink* Seriously? They're that stupid?

By David Marjanović (not verified) on 18 Oct 2013 #permalink

Well, I have heard about a very similar approach in the social sciences, and it was as easy as in this case: you just program it to randomly produce terms and phrases that are loosely connected, drawn from a very consolidated vocabulary, but with no real scientific value.
The Internet has made it simple to publish almost any kind of paper. The quality has gone straight down in many cases, although there are still many good journals. On the other hand, there has also been a highly positive impact: blogs, articles, sharing of data, etc. This is why I would rather read about solar power on a blog than in a scientific paper.
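For what it's worth, the kind of generator described here is easy to build. A minimal sketch of a grammar-driven phrase generator, with a vocabulary invented purely for illustration:

    import random

    # A tiny recursive grammar in the spirit of the gibberish generators
    # described above: terms that sound connected, with nothing behind them.
    GRAMMAR = {
        "SENTENCE": ["The NOUN of ADJ NOUN VERB the ADJ NOUN"],
        "NOUN": ["discourse", "paradigm", "hegemony", "narrative"],
        "VERB": ["problematizes", "destabilizes", "reifies"],
        "ADJ": ["post-structural", "hegemonic", "liminal"],
    }

    def expand(symbol):
        """Recursively rewrite grammar symbols until only plain words remain."""
        if symbol not in GRAMMAR:
            return symbol
        template = random.choice(GRAMMAR[symbol])
        return " ".join(expand(token) for token in template.split())

    print(expand("SENTENCE") + ".")

Swap in a medical vocabulary and a longer grammar, and you get something much closer to Bohannon-style boilerplate.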