Problems in the scientific literature: vigilance and victim-blaming.

That post about how hard it is to clean up the scientific literature has spawned an interesting conversation in the comments. Perhaps predictably, the big points of contention seem to be how big a problem a few fraudulent papers in the literature really pose (given the self-correcting nature of science and all that), and whether there are larger (and mistaken) conclusions people might be drawing about science on the basis of a small fraction of the literature.

I will note just in passing that we do not have reliable numbers on what percentage of the papers published in the scientific literature are fraudulent. We may be able to come up with reliable measures of the number of published papers that have been discovered to be fraudulent, but there's not a good procedure to accurately count the ones that succeed in fooling us.

Set that worry aside, along with the legitimate worry that "little frauds" which might not do much to deform the shape of the scientific literature could still have significant effects on scientific career scorekeeping. Let's take on the big question:

How much of a problem is it to leave the scientific literature uncorrected? Who is it a problem for?

There are different ways published results are put to use -- as sources of techniques, as starting points for a new experiment or study built on the one reported in the published paper, as support for the plausibility of your own result (my result fits what you'd expect given the results or theory here), or as part of the picture of what we understand so far about a particular system or phenomenon (e.g., in a review article).

But published scientific papers are also put to use in other ways -- maybe by researchers in other fields helping themselves to an interdisciplinary approach (or applying the approach from their discipline to a result reported in another discipline); or by policy makers (if the result seems to bear on a question with policy implications -- think biomedical science or climate science); or by non-scientists trying to educate themselves or make informed decisions.

In other words, we can't assume that the papers in the scientific literature are just used by a tight-knit community of scientists working in the particular discipline that generated the results (or on the particular problem that was the focus of the researchers who produced a particular paper). The scientific literature is a body of knowledge that exists in a public space with the intention that others will use it. To the extent that there are problems in that literature, the uses to which people hope to apply various pieces of the literature may be undermined.

Well, sure, the commentariat opined, but that's a temporary problem. The truth will out! As Comrade PhysioProf put it:

I doubt that any of this really matters. Fraudulent work that leads to false conclusions is going to be realized to be wrong, sooner rather than later, by those to whom it is relevant, if it is actually of any importance.

Myself, I have enough memories of trying to get experiments from the scientific literature to work as described that I responded:

I reckon it might matter quite a lot to the researchers who end up wasting time, effort, and research funds trying to build something new on the basis of a reported result or technique that turns out (maybe later than sooner) to be fraudulent.

So, in the long run, maybe the community is just fine, but some honest individuals who are part of that community are going to get screwed.

If we're paying attention, however, we've noticed that the fortunes of Science are separable from the fortunes of individual scientists (at least to a certain extent). Our current set-up seems to accommodate a certain amount of "waste" in terms of scientific talent that bleeds out of the pipeline. What are you going to do?

Dario Ringach chimed in to support Comrade PhysioProf's general point:

Fortunately, time will do the job. There is no incorrect scientific result, fabricated or not, that will stand the test of time. Science is self-correcting, a feature that is difficult to find in any other human activity.

My experience is that by the time a retraction actually appears in print the community has already known for some time the study could not be replicated or that something was wrong.

Does the scientific community know when something is wrong with a paper long before misconduct investigations have run their course or retractions or corrections are even published online?

As Pinko Punko points out, that rather depends on who you count as the community -- people who are well-connected and "hear things" from reliable sources, who can schmooze at professional meetings with the bigwigs with large labs that have the equipment and the labor to devote to trying to replicate (and build on) interesting and important results right away, or everyone else? (Remember that simply attending the professional meetings doesn't automatically score you schmooze time at the bar with the people in the know -- frequently they want to hang out with their friends, and the conversation may be more circumspect if there are people they don't really know in the group.)

Surely there are interested scientists who are not CC'd on the email correspondence about The Exciting Reported Result We Just Can't Replicate In Our Lab. If you are not part of these informal networks, you're pretty much stuck trusting the literature to do the talking. I'm guessing that those not "in the know" or on the CC lists of the well-connected are disproportionately early-career scientists, and disproportionately trained at schools not recognized as super-elite hotbeds of scientific research (and less likely to have a current position at such a SEHOSR). I'd also bet that, despite our idealistic picture of science as an international activity and community, the backchannel communications between the scientists "in the know" about problems with a paper may be happening more within national boundaries than across them.

In short, if we're trusting informal routes of communication to alert scientists to problems in the literature, I think it's a stretch to assume that everyone who matters will know there's a problem (let alone the nature of the problem) well before formal retractions or corrections come down.

Unless, of course, you have a much different idea of who matters in the community of science than I do. Maybe you do. But then maybe you want to come clean about it rather than paying lip service to the idealistic view of science as a democratic community in which your personal characteristics are not as important as the contribution you make to the shared body of knowledge.

Now, I don't think people are really saying that if you are not plugged into the bigwig rumor mill, you don't count as a full member of the tribe of science. But I think I'm seeing a maneuver that's a little disturbing. It's almost as if people are suggesting that it's your own fault if you are deceived by a fraudulent paper published in the scientific literature. Careful scientists don't get fooled. There are things you can do to protect yourself from being fooled. If you got fooled, clearly you fell down on one of those steps on the checklist that careful scientists apply -- but that couldn't happen to me, because I'm more careful than that.

Short of not trusting anything published in the literature, I submit that there is no guaranteed method to avoid being deceived by a fraudulent paper published in the scientific literature.

And, I think laying the blame for being deceived at the feet of those who were deceived (rather than at the feet of the fraudster, or the lax vetting of the paper that resulted in its being published, or what have you) is not primarily intended to blame the victim of the deception (although it surely has that effect). Rather, I think it is a way to try to make oneself feel safer and less vulnerable in a situation in which one can't tell the fraudulent papers or the lying scientists just by looking at them.

Of course, it doesn't actually make us safer to tell ourselves that we were somehow smarter, or more skeptical, or more vigilant, than the ones who got burned. But maybe feeling safer just keeps us from freaking out.

Or maybe it keeps us from having to deal with bigger problems, like whether there are structural changes we should consider (for scientific journals, or the way hiring and promotion decisions are made, or the way grant money is distributed) that might reduce the payoff for fraudsters or give us more robust ways to ensure the reliability of the scientific literature. Change is painful, and there's plenty of hard work on scientists' plates already.

But sometimes our reluctance to reexamine the status quo, paired with our unwillingness to accept a world in which bad things can happen to scientists who didn't somehow bring those bad things upon themselves, can leave us stuck in a state of affairs that is decidedly suboptimal.

In closing, I want to remind you of the apt observations of Eugenie Samuel Reich in her book on fraudster Jan Hendrik Schön, Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World (reviewed here). In particular, Reich notes that the belief in the self-correcting nature of science is fairly useless unless actual scientists do the work to hold scientists and scientific results accountable. She writes:

[W]hich scientists did the most to contribute to the resolution of the case? Paradoxically, it wasn't those who acted as if they were confident that science was self-correcting, but those who acted as if they were worried that it wasn't. Somewhere, they figured, science was going wrong. Something had been miscommunicated or misunderstood. There was missing information. There hadn't been enough cross-checks. There were problems in the experimental method or the data analysis. People who reasoned in this way were more likely to find themselves pulling on, or uncovering, or pointing out, problems in Schön's work that it appeared might have arisen through clerical errors or experimental artifacts, but that eventually turned out to be the thin ends of wedges supporting an elaborately constructed fraud. ... In contrast, those who did the most to prolong and sustain the fraud were those who acted as if the self-correcting nature of science could be trusted to come to the rescue, who were confident that future research would fill the gaps. This way of thinking provided a rationalization for publishing papers at journals: both at Science magazine, where editors expedited at least one of Schön's papers into print without always following their own review policy, and at Nature, where, on at least one occasion, technical questions about Schön's method were less of a priority than accessibility. Bell Labs managers reasoned similarly, taking the decision to put the reputation of a renowned research institution behind research that was known to be puzzling, and so crossed the line between open-mindedness to new results and leading others astray. (238-239)

Taken as a whole, science may get the job done on the scale of decades rather than years, but decisions made and actions taken by individual scientists working within this enterprise can put errors (or lies) into the shared body of knowledge, or affect how quickly or slowly the correction filter identifies and addresses those problems.

Make no mistake, I'm a fan of science's strategies for building reliable knowledge, but these strategies depend on the labors of human scientists, individually and collectively. Self-correction does not happen magically, and untrustworthy results (or scientists) do not wear their status on their sleeves.

All of which is to say that errors in the scientific literature are worth taking seriously, if for no other reason than that they can do harm to the individual scientists whose labors are essential if the "self-correcting nature" of the scientific enterprise is to be more than a theoretical promise.

Comments

"Paradoxically, it wasn't those who acted as if they were confident that science was self-correcting, but those who acted as if they were worried that it wasn't."

That's not paradoxical at all! Those of us saying that science is self-correcting don't sit pollyannaish waiting for a miracle to happen. We check each other's work. We try to replicate experiments from other labs. We compare results when our data don't match others'. When our data (or our theories) don't fit, we write that in the scientific literature. A debate ensues. The scientific literature corrects itself. Of course errors are worth taking seriously! The question is whether it matters if those errors are due to fraud or to honest mistakes.

One of my experimental papers starts off by saying "Our results are incompatible with those of group Z. Here's our data. Here are some possible reasons why there's a discrepancy." Although there's still debate between our results and those of group Z, the literature is tending towards our side over theirs. It doesn't matter whether group Z mis-analyzed their data by accident or intentionally screwed up; there is a disagreement that is being corrected.

I had a theoretical paper once that said "Result x from group X and result y from group Y are incompatible. Either group X or group Y is wrong." (It turned out result x hadn't been examined at high enough resolution. At higher resolution, group X's interpretation proved wrong [or, more politely and more fairly said, incomplete].)

The scientific literature corrects itself because we ARE checking each other's work.

[Quotes paraphrased and names removed to protect anonymity.]

There's another interesting case to consider. Sometimes a crank paper gets published which is so dreadful that no credible scientist actually working in the relevant field is taken in. This is unusual... because for it to happen, the paper has to have got past reviewers and/or editors. There are a couple of ways this can still happen, however, particularly in cases where there are strong views on the subject matter outside of the body of scientists actually working on the topic.

There are a number of such subjects. Evolution. Deep time. Global warming. Anthropology. Anything bearing on history. Medicine. Sexuality. And so on.

In these cases, it is worth looking further afield than the normal readership of scientific papers. Crank papers do appear from time to time in the legitimate scientific literature, from where they are used to prop up all kinds of odd ideas in public or political debate, where the discussion is completely out of step with debates within the scientific community.

The downside is that putting up a response can draw attention to worthless papers that would otherwise have a day or two of public visibility and then fade into complete obscurity.

By Chris Ho-Stuart on 30 Mar 2010

I'm going to take a cynical perspective here: this isn't viewed as a huge problem by the scientific community in general because it's not a big problem for principal investigators. The ones who are actually stuck trying to replicate things that can't be replicated are often graduate students. They're in the process of training to do science, and it's easy for the PIs to assume some degree of incompetence in the student, rather than problems in another lab. Plus, grad student time is (relatively) cheap, and a few wasted months here or there won't make a big difference in a six-year degree program.

I'm not attempting to claim that nobody takes irreproducible results or their grad students seriously, just that it's not always personal to the people who are in the best position to effect changes.

Short of not trusting anything published in the literature, I submit that there is no guaranteed method to avoid being deceived by a fraudulent paper published in the scientific literature.

Absolutely, and this statement applies equally to papers that make honest subtle mistakes but are not fraudulent. The peer review system is not designed to catch deliberate fraud. It can sometimes detect the subtle honest mistake, but that depends on having an alert reviewer with the right expertise to detect the mistake--not a given.

Richard Feynman chose not to trust anything published, and he had good reason for doing so (see his anecdote "The Seven Percent Solution" from Surely You're Joking, Mr. Feynman! for an example where scientists were fooled by an honest mistake). However, as a theoretical physicist he could afford that luxury. Those of us who do experimental or observational work must confront the reality that lab time and resources are finite, so we could not duplicate all of the relevant published results even if we wanted to. We can only spot-check the published results and cross-check them against our own experiments. When we read a paper by Fulano et al., we must assume, unless there is evidence to the contrary, that Fulano et al. performed the described experiments and obtained the described results. Most of the time (there are exceptions, such as plagiarism or the reused graphs that eventually did for Schön), the only way to obtain that evidence is by doing an experiment and getting results that conflict with Fulano et al. (and in the Schön case, it was the failure of other groups to reproduce his results that eventually led to the detection of the reused graphs).

By Eric Lund on 30 Mar 2010

I think that any reasonable scientist, preparing to follow up on what appears to be an important result, must replicate it as a first step. Nobody really wants to waste time pursuing a line of research based on a result they cannot replicate.

If you fail to replicate, the next thing that happens is that you start asking your colleagues at conferences if anyone has tried to replicate the study. You might also ask the authors of the original study directly for details that may explain the discrepancies. This smaller community of scientists, to which the validity of the result is critical, usually knows way ahead of any retraction or correction whether a result is to be believed.

Some results affect not only a small number of scientists working on a topic but also public policy and the community at large. An obvious example is the claimed link between vaccinations and autism. In such cases, it is obvious that the sooner the literature gets corrected the better.

By Dario Ringach on 30 Mar 2010

But this is just one small component of the various intangible elements (luck, intuition, creativity, etc.) that go into an individual's success in science. Right now there is someone out there who just lost a month's work due to a power outage.

There are other such factors that probably have a much larger impact on the self-correcting function of science. For example, the "file-drawer problem" (non-publication of negative results) and the problem of contaminated cell lines.

By Neuro-conservative on 30 Mar 2010

It strikes me that one trite way of arguing that individual scientists shouldn't be blamed for being fooled by a fraudulent paper is to note that, rather necessarily, that paper had already fooled the peer reviewers who apparently approved it.

I was just having a discussion today with a visiting professor about an investigator in our field whose papers are always full of surprising, counter-intuitive, and hard-to-explain results. In our field, it seems that labs working on similar projects do attempt to reproduce some of the findings, and they can't. This information is only ever informally disseminated, as journals do not really have outlets for short and sweet presentation of negative or conflicting results, or alternate interpretations. Only recently have many journals begun to encourage this sort of correspondence. PNAS is the best, with its formal "comment on" and "reply to" process, as these are published in the actual journal table of contents. Science has technical comments in the same vein, but these are rarer. Nature has "Matters Arising," but these seem even rarer. Cell is the worst of the bunch because the comments are practically hidden. PLoS journals encourage commenting, but because technical, data-containing responses aren't distinguished from more minor correspondence, it doesn't have the same force.

In short, if we're trusting informal routes of communication to alert scientists to problems in the literature, I think it's a stretch to assume that everyone who matters will know there's a problem (let alone the nature of the problem) well before formal retractions or corrections come down.

Those manuscripts that could possibly ever be formally retracted or corrected are only a tiny fraction of those that might later be determined to have led people down wrong or unfruitful paths. There are a bajillion reasons why people waste time in science, going down pointless paths only to realize it later. From a quantitative standpoint, the existence of papers that should be retracted or corrected, and the fact that some people keep citing such papers even after they are retracted or corrected, pales in comparison to all those other reasons: stupidity, incompetence, poor experimental design, ill-considered hypotheses, corner-cutting out of laziness, unavoidable bad luck, etc.