That post about how hard it is to clean up the scientific literature has spawned an interesting conversation in the comments. Perhaps predictably, the big points of contention seem to be how big a problem a few fraudulent papers in the literature really are (given the self-correcting nature of science and all that), and whether there are larger (and mistaken) conclusions people might be drawing about science on the basis of a small fraction of the literature.
I will note just in passing that we do not have reliable numbers on what percentage of the papers published in the scientific literature are fraudulent. We may be able to come up with reliable measures of the number of published papers that have been discovered to be fraudulent, but there’s not a good procedure to accurately count the ones that succeed in fooling us.
Set that worry aside, along with the legitimate worry that “little frauds” that might not do too much to deform the shape of the scientific literature might still end up having significant effects on scientific career scorekeeping. Let’s take on the big question:
How much of a problem is it to leave the scientific literature uncorrected? Who is it a problem for?
There are different ways published results are put to use — as sources of techniques, as starting points for a new experiment or study built on the one reported in the published paper, as support for the plausibility of your own result (my result fits what you’d expect given the results or theory here), or as part of the picture of what we understand so far about a particular system or phenomenon (e.g., in a review article).
But published scientific papers are also put to use in other ways — maybe by researchers in other fields helping themselves to an interdisciplinary approach (or applying the approach from their discipline to a result reported in another discipline); or by policy makers (if the result seems to bear on a question with policy implications — think biomedical science or climate science); or by non-scientists trying to educate themselves or make informed decisions.
In other words, we can’t assume that the papers in the scientific literature are just used by a tight-knit community of scientists working in the particular discipline that generated the results (or on the particular problem that was the focus of the researchers who produced a particular paper). The scientific literature is a body of knowledge that exists in a public space with the intention that others will use it. To the extent that there are problems in that literature, the uses to which people hope to apply various pieces of the literature may be undermined.
Well, sure, the commentariat opined, but that’s a temporary problem. The truth will out! As Comrade PhysioProf put it:
I doubt that any of this really matters. Fraudulent work that leads to false conclusions is going to be realized by those to whom it is relevant to be wrong sooner rather than later if it is actually of any importance.
Myself, I have enough memories of trying to get experiments described in the scientific literature to work as described that I responded:
I reckon it might matter quite a lot to the researchers who end up wasting time, effort, and research funds trying to build something new on the basis of a reported result or technique that turns out (maybe later than sooner) to be fraudulent.
So, in the long run, maybe the community is just fine, but some honest individuals who are part of that community are going to get screwed.
If we’re paying attention, however, we’ve noticed that the fortunes of Science are separable from the fortunes of individual scientists (at least to a certain extent). Our current set-up seems to accommodate a certain amount of “waste” in terms of scientific talent that bleeds out of the pipeline. What are you going to do?
Dario Ringach chimed in to support Comrade PhysioProf’s general point:
Fortunately, time will do the job. There is no incorrect scientific result, fabricated or not, that will stand the test of time. Science is self-correcting, a feature that is difficult to find in any other human activity.
My experience is that by the time a retraction actually appears in print the community has already known for some time the study could not be replicated or that something was wrong.
Does the scientific community know when something is wrong with a paper long before misconduct investigations have run their course or retractions or corrections are even published online?
As Pinko Punko points out, that rather depends on who you count as the community — people who are well-connected and “hear things” from reliable sources, who can schmooze at professional meetings with the bigwigs with large labs that have the equipment and the labor to devote to trying to replicate (and build on) interesting and important results right away, or everyone else? (Remember that simply attending the professional meetings doesn’t automatically score you schmooze time at the bar with the people in the know — frequently they want to hang out with their friends, and the conversation may be more circumspect if there are people they don’t really know in the group.)
Surely there are interested scientists who are not CC’d on the email correspondence about The Exciting Reported Result We Just Can’t Replicate In Our Lab. If you are not part of these informal networks, you’re pretty much stuck trusting the literature to do the talking. I’m guessing that those not “in the know” and on the CC list of the well-connected are disproportionately early career scientists, and disproportionately trained at schools not recognized as super-elite hotbeds of scientific research (and less likely to have a current position at such a SEHOSR). I’d also bet that, despite our idealistic picture of science as an international activity and community, the backchannel communications between the scientists “in the know” about problems with a paper may be happening more within national boundaries than across them.
In short, if we’re trusting informal routes of communication to alert scientists to problems in the literature, I think it’s a stretch to assume that everyone who matters will know there’s a problem (let alone the nature of the problem) well before formal retractions or corrections come down.
Unless, of course, you have a much different idea of who matters in the community of science than I do. Maybe you do. But then maybe you want to come clean about it rather than paying lip service to the idealistic view of science as a democratic community in which your personal characteristics are not as important as the contribution you make to the shared body of knowledge.
Now, I don’t think people are really saying that if you are not plugged into the bigwig rumor mill, you don’t count as a full member of the tribe of science. But I think I’m seeing a maneuver that’s a little disturbing. It’s almost as if people are suggesting that it’s your own fault if you are deceived by a fraudulent paper published in the scientific literature. Careful scientists don’t get fooled. There are things you can do to protect yourself from being fooled. If you got fooled, clearly you fell down on one of those steps on the checklist that careful scientists apply — but that couldn’t happen to me, because I’m more careful than that.
Short of not trusting anything published in the literature, I submit that there is no guaranteed method to avoid being deceived by a fraudulent paper published in the scientific literature.
And, I think laying the blame for being deceived at the feet of those who were deceived (rather than at the feet of the fraudster, or the lax vetting of the paper that resulted in its being published, or what have you) is not primarily intended to blame the victim of the deception (although it surely has that effect). Rather, I think it is a way to try to make oneself feel safer and less vulnerable in a situation in which one can’t tell the fraudulent papers or the lying scientists just by looking at them.
Of course, it doesn’t actually make us safer to tell ourselves that we were somehow smarter, or more skeptical, or more vigilant, than the ones who got burned. But maybe feeling safer just keeps us from freaking out.
Or maybe it keeps us from having to deal with bigger problems, like whether there are structural changes we should consider (for scientific journals, or the way hiring and promotion decisions are made, or the way grant money is distributed) that might reduce the payoff for fraudsters or give us more robust ways to ensure the reliability of the scientific literature. Change is painful, and there’s plenty of hard work on scientists’ plates already.
But sometimes our reluctance to reexamine the status quo, paired with our unwillingness to accept a world in which bad things can happen to scientists who didn’t somehow bring those bad things upon themselves, can leave us stuck in a state of affairs that is decidedly suboptimal.
In closing, I want to remind you of the apt observations of Eugenie Samuel Reich in her book on fraudster Jan Hendrik Schön, Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World (reviewed here). In particular, Reich notes that the belief in the self-correcting nature of science is fairly useless unless actual scientists do the work to hold scientists and scientific results accountable. She writes:
[W]hich scientists did the most to contribute to the resolution of the case? Paradoxically, it wasn’t those who acted as if they were confident that science was self-correcting, but those who acted as if they were worried that it wasn’t. Somewhere, they figured, science was going wrong. Something had been miscommunicated or misunderstood. There was missing information. There hadn’t been enough cross-checks. There were problems in the experimental method or the data analysis. People who reasoned in this way were more likely to find themselves pulling on, or uncovering, or pointing out, problems in Schön’s work that it appeared might have arisen through clerical errors or experimental artifacts, but that eventually turned out to be the thin ends of wedges supporting an elaborately constructed fraud. … In contrast, those who did the most to prolong and sustain the fraud were those who acted as if the self-correcting nature of science could be trusted to come to the rescue, who were confident that future research would fill the gaps. This way of thinking provided a rationalization for publishing papers at journals: both at Science magazine, where editors expedited at least one of Schön’s papers into print without always following their own review policy, and at Nature, where, on at least one occasion, technical questions about Schön’s method were less of a priority than accessibility. Bell Labs managers reasoned similarly, taking the decision to put the reputation of a renowned research institution behind research that was known to be puzzling, and so crossed the line between open-mindedness to new results and leading others astray. (238-239)
Taken as a whole, science may get the job done on the scale of decades rather than years, but decisions made and actions taken by individual scientists working within this enterprise can put errors (or lies) into the shared body of knowledge, or affect how quickly or slowly the correction filter identifies and addresses those problems.
Make no mistake, I’m a fan of science’s strategies for building reliable knowledge, but these strategies depend on the labors of human scientists, individually and collectively. Self-correction does not happen magically, and untrustworthy results (or scientists) do not wear their status on their sleeves.
All of which is to say that errors in the scientific literature are worth taking seriously, if for no other reason than that they can do harm to the individual scientists whose labors are essential if the “self-correcting nature” of the scientific enterprise is to be more than a theoretical promise.