The mechanics of getting fooled: the multiple failures in the fraud of Jan Hendrik Schön.

There's an interesting article in the Telegraph by Eugenie Samuel Reich looking back at the curious case of Jan Hendrik Schön. In the late '90s and early '00s, the Bell Labs physicist was producing a string of impressive discoveries -- most of which, it turns out, were fabrications. Reich (who has published a book about Schön, Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World) considers how Schön's frauds fooled his fellow physicists. Her recounting of the Schön saga suggests clues that should have triggered more careful scrutiny, if not alarm bells.

Of Schön's early work at Bell Labs, Reich writes:

He set to work and, extrapolating from decade-old literature on organic crystals, was, by 1998, producing data on the flow of charge through the materials. A hard worker, Schön was often seen in the lab preparing crystal samples by attaching wires to make them into circuits. But by 2000, he was increasingly working on his own. He also spent a lot of time flying back to his old lab in Constance, supposedly to continue with research begun as a PhD student. This sideline was tolerated by the management at Bell Labs, who saw exchanges with other labs as advantageous for both sides.

I'm sympathetic to the hunch that exchanges with other labs can be advantageous to both parties. But you'd think that close collaboration with others in one's own lab might be viewed as similarly advantageous. That no one at Bell Labs was working closely with Schön, and that no one at Bell Labs had direct contact with any of Schön's collaborators at Constance, meant that, essentially, they had only Schön's word to go on about what he produced either at Bell Labs or at Constance.

On one occasion, after a trip to Constance, Schön showed colleagues in a neighbouring lab at Bell an astonishing graph. It appeared to show the output of an oscillating circuit. He claimed he had built the circuit in Constance using crystals from Bell Labs. The results were fantastic - but not impossible - and admiring colleagues helped Schön prepare a manuscript reporting the new claims.

The fact that Schön was showing colleagues results, rather than just generating and publishing results without any consultation, probably made these colleagues feel like they were keeping track of his experimental work. But did they see any of the data used to generate the graph? Did they try at Bell Labs to build other circuits (or even a duplicate of the circuit Schön said he had built in Constance) using those crystals? Did they talk about any experimental difficulties that might have come up for Schön in Constance?

Maybe some of those preliminary exchanges would have been a good idea before helping Schön turn his exciting claims into a convincing manuscript.

Also, Schön had many co-authors for the papers he wrote that turned out to be fraudulent. It's worth asking what kind of direct knowledge these co-authors had of the experiments reported in the papers on which they were putting their names.

After all, if co-authors cannot vouch that the experiments were actually conducted as described, and produced the results reported in the paper, then who can?

They sent that manuscript, with more than a dozen others over the next year and a half, to the journal Science. Along with its British competitor Nature, Science was - and is - the most prestigious place to publish research. This was not only because of its circulation (about 130,000, compared with 3,700 for a more niche publication such as Physical Review Letters), but also because, like Nature, it has a well-organised media operation that can catapult the editor's favourite papers into the pages of national newspapers.

The media attention is sometimes justified, as the journals operate quality control via peer review, a process in which papers are pre-vetted by other experts. But Schön was a master at playing the system. He presented conclusions that he knew, from feedback at Bell Labs, experts were bound to find appealing. Editors at both journals received positive reviews and took extraordinary steps to hurry Schön's papers into print: on one occasion Science even suspended its normal policy of requiring two independent peer reviews because the first review the editors had obtained was so positive. Nature seems to have kept to its policies, but failed on at least one occasion to make sure Schön responded to questions raised by reviewers.

A journal's quality control mechanisms can only be counted on to work if they're turned on. You can't take account of a reviewer's insight into the importance or plausibility of a result, or into the strengths or weaknesses of a scientific argument, if you never actually obtain that insight. Nor can you really claim that a published paper is the result of rigorous peer review if the authors don't have to engage with the reviewers' questions.

That editors of journals with such well-oiled operations for publicizing new papers in the mass media took this relaxed attitude toward peer review only amplified the risk that shoddy results would be widely heralded as major breakthroughs.

By the middle of 2001, more than a dozen laboratories around the world were trying to replicate Schön's work on organic crystals, motivated by the prospect of building on the findings in Science and Nature. Yet without access to the details of his methods, which were strangely absent from the published reports, no one was successful. It was a depressing year for scores of scientists, whose first thought when they were unable to replicate the work was to blame themselves.

Rigorous peer review might have held up publication of Schön's papers until details of his method were spelled out. It did not.

Similarly, Schön's colleagues who were excited about his results and helping him prepare manuscripts reporting them might have pressed him to spell out the details of his methods in his manuscripts (and theirs, to the extent that those assisting him were credited as authors). They didn't.

At this point, the absence of a detailed account of experimental methods in a scientific paper (or at least a clear reference to where the precise methods one is using are described elsewhere) is a red flag.

One of the most cherished beliefs of scientists is that their world is "self-correcting" - that bad or fraudulent results will be shown up by other experiments and that the truth will emerge. Yet this system relies, far more than is generally realised, on trust: it was natural for his peers to question the way Schön interpreted his data, but taboo to question his integrity.

In 1830, the British mathematician Charles Babbage wrote of the distinction between truth-seekers and fraudsters in his Reflections on the Decline of Science in England, and on Some of Its Causes. The former, he said, zealously prevent bias from influencing facts, whereas the fraudster consciously allows his prejudices to interfere with observations. But what Schön was in fact doing was cleverer than simply falsifying his data, and claiming some miraculous breakthrough. By talking to colleagues, he worked out what results they hoped for - so when he fabricated results that seemed to prove their theories and hunches, they were thrilled.

Schön was, in effect, doing science backwards: working out what his conclusions should be, and then using his computer to produce the appropriate graphs. The samples that littered his workspace were, effectively, props. The data he produced were not only faked, but recycled from previous fakeries (indeed, it was this duplication of favoured graphs that would prove his Achilles' heel).

In other words, it was not just Schön who deceived his colleagues. Their self-deception (when presented with "hard evidence" that confirmed their scientific hunches) played an important role in packaging Schön's fraud as credible scientific results.

Indeed, while it's important to expect that scientists maintain a high level of integrity, perhaps it is a mistake to assume it without corroboration. And scientists should not get huffy if their colleagues want to see the experiment conducted, want to examine notebooks or raw data, want a clear description of the method so they can try to repeat the experiment. These steps are useful protection against self-deception, against which all scientists should be constantly vigilant. If a scientist has made a mistake in conducting the experiment, in analyzing the data, in interpreting the results, he or she should want to find that out -- preferably in advance of publication.

If a scientist wants to avoid the mechanisms that might detect error (or fraud), then there's a problem.

So who is to blame? Schön, for stretching the truth to breaking point and beyond? His bosses, for pushing their scientists towards marketable discoveries and prestigious publications that would garner good publicity? The journals, for so eagerly accepting his stream of discoveries? Or his colleagues, for not raising more questions?

In the Schön case, it looks like there is ample blame to go around. The challenge is applying the lessons learned here and avoiding the same kinds of problems in the future.

Hat-tip: Ed Yong

"After all, if co-authors cannot vouch that the experiments were actually conducted as described, and produced the results reported in the paper, then who can?"

I think the situation is a bit more complicated than this. What about collaborations where the two (or more) collaborators have very different areas of expertise? A biologist who clones, expresses and purifies a novel protein is unlikely to have the expertise to judge whether his crystallography colleague was above board in the way he collected the x-ray diffraction data and refined the results to produce the structure. And yet collaborations like this occur all the time and it would be a loss to science if they didn't.

I recall that, when the scandal broke, there was a lot of reflection on the role and responsibilities of co-authors. The best solution that I know of is specifically indicating who did which part of the work (I believe that PNAS does this).

By AcademicLurker (not verified) on 19 May 2009

I agree that Schön's co-authors, and especially his superiors, should have kept a closer watch on what he was doing. Somebody should have inspected the lab notebooks, and the fact that Schön apparently didn't keep written hardcopy records should have been a red flag.

Without reading the papers, I find it harder to fault the referees. It depends on how well described the experiments were. The referees basically have to trust that the authors of the paper actually have performed the described experiments. They might suggest follow-up experiments if they think some alternative has not been adequately eliminated, and it's reasonable for them to request further details if they think an experiment has not been adequately described. However, the referees could be theorists who would not notice the sketchy description of the experiment, and they would not have had access to the lab notebooks even if the notebooks existed. I'm not sure it makes sense for the journal to request a copy of the lab notebooks, since the journals also publish theoretical papers for which there would not be lab notebooks.

By Eric Lund (not verified) on 19 May 2009

It's not the job of reviewers to verify the results of a given paper, but I think that they ought to make sure that there is enough experimental detail for someone else to attempt to verify the results. If they don't know enough to understand what details might be needed to reproduce a given work, it probably ought to be reviewed by someone who can. The lack of explicit experimental methods ought to be a red flag - in synthetic organic chemistry, for example, it may be possible to compute NMR spectra and details for compounds, and so the presence of data for intermediates in a synthesis may not be enough to show that a synthesis was actually done. Of course, the total lack of intermediate data (as in the synthesis(?) of hexacyclinol by Dr. LaClair - though there was one NMR of the penultimate intermediate) ought to be a showstopper.

I don't know how to get around the possibility that we're seeing what we want to see because we want to see it rather than because it's consistent with reality. If the experiments are self-consistent, a reader who doesn't actually redo the experiments isn't going to see a problem, and those who do are likely to think that the problem is them. Presumptive mistrust seems like a heavy burden to sustain.

"At this point, the absence of a detailed account of experimental methods in a scientific paper (or at least a clear reference to where the precise methods one is using are described elsewhere) is a red flag."

The problem often lies in that "clear reference". Authors will say things like "protein X was purified from E. coli as described in ref 24". But if you track down paper 24, it might refer to another paper, and another. Eventually you are left without an experimental protocol, or a protocol that describes how to purify a slightly different version of the protein, or from yeast instead of E. coli, or... the variations are endless. And reviewers tend not to put in the effort needed to find this type of problem.

As a grad student, I quickly learned to HATE papers published in Science or Nature for their lack of experimental details. I think things are a bit better now, because web publishing allows for long "supplemental" sections that can include more Materials and Methods than the print version. But still, I'm afraid it's more the norm than the exception to find that published work cannot be replicated by other scientists, simply because the published details are insufficient. This makes it hard to identify fraud, as opposed to bad paper writing (and reviewing).