There’s an interesting article in the Telegraph by Eugenie Samuel Reich looking back at the curious case of Jan Hendrik Schön. In the late ’90s and early ’00s, the Bell Labs physicist was producing a string of impressive discoveries — most of which, it turns out, were fabrications. Reich (who has published a book about Schön, Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World) considers how Schön’s frauds fooled his fellow physicists. Her recounting of the Schön saga suggests clues that should have triggered more careful scrutiny, if not alarm bells.
Of Schön’s early work at Bell Labs, Reich writes:
He set to work and, extrapolating from decade-old literature on organic crystals, was, by 1998, producing data on the flow of charge through the materials. A hard worker, Schön was often seen in the lab preparing crystal samples by attaching wires to make them into circuits. But by 2000, he was increasingly working on his own. He also spent a lot of time flying back to his old lab in Constance, supposedly to continue with research begun as a PhD student. This sideline was tolerated by the management at Bell Labs, who saw exchanges with other labs as advantageous for both sides.
I’m sympathetic to the hunch that exchanges with other labs can be advantageous to both parties. But you’d think that close collaboration with others in one’s own lab might be viewed as similarly advantageous. That no one at Bell Labs was working closely with Schön, and that no one at Bell Labs had direct contact with any of Schön’s collaborators at Constance, meant that, essentially, they had only Schön’s word to go on as far as what he produced at Bell Labs or at Constance.
On one occasion, after a trip to Constance, Schön showed colleagues in a neighbouring lab at Bell an astonishing graph. It appeared to show the output of an oscillating circuit. He claimed he had built the circuit in Constance using crystals from Bell Labs. The results were fantastic – but not impossible – and admiring colleagues helped Schön prepare a manuscript reporting the new claims.
The fact that Schön was showing colleagues results, rather than just generating and publishing results without any consultation, probably made these colleagues feel like they were keeping track of his experimental work. But did they see any of the data used to generate the graph? Did they try at Bell Labs to build other circuits (or even a duplicate of the circuit Schön said he had built in Constance) using those crystals? Did they talk about any experimental difficulties that might have come up for Schön in Constance?
Maybe some of those preliminary exchanges would have been a good idea before helping Schön turn his exciting claims into a convincing manuscript.
Also, Schön had many co-authors for the papers he wrote that turned out to be fraudulent. It’s worth asking what kind of direct knowledge these co-authors had of the experiments reported in the papers on which they were putting their names.
After all, if co-authors cannot vouch that the experiments were actually conducted as described, and produced the results reported in the paper, then who can?
They sent that manuscript, with more than a dozen others over the next year and a half, to the journal Science. Along with its British competitor Nature, Science was – and is – the most prestigious place to publish research. This was not only because of its circulation (about 130,000, compared with 3,700 for a more niche publication such as Physical Review Letters), but also because, like Nature, it has a well-organised media operation that can catapult the editor’s favourite papers into the pages of national newspapers.
The media attention is sometimes justified, as the journals operate quality control via peer review, a process in which papers are pre-vetted by other experts. But Schön was a master at playing the system. He presented conclusions that he knew, from feedback at Bell Labs, experts were bound to find appealing. Editors at both journals received positive reviews and took extraordinary steps to hurry Schön’s papers into print: on one occasion Science even suspended its normal policy of requiring two independent peer reviews because the first review the editors had obtained was so positive. Nature seems to have kept to its policies, but failed on at least one occasion to make sure Schön responded to questions raised by reviewers.
A journal’s quality control mechanisms can only be counted on to work if they’re turned on. You can’t take account of a reviewer’s insight into the importance or plausibility of a result, or into the strengths or weaknesses of a scientific argument, if you don’t get that reviewer’s insight. Nor can you really claim that a published paper is the result of rigorous peer review if the authors don’t have to engage with the reviewers’ questions.
That this relaxed attitude toward peer review was taken by editors of journals with such well-oiled operations for publicizing new papers in the mass media just amplified the risk of shoddy results being widely heralded as major breakthroughs.
By the middle of 2001, more than a dozen laboratories around the world were trying to replicate Schön’s work on organic crystals, motivated by the prospect of building on the findings in Science and Nature. Yet without access to the details of his methods, which were strangely absent from the published reports, no one was successful. It was a depressing year for scores of scientists, whose first thought when they were unable to replicate the work was to blame themselves.
Rigorous peer review might have held up publication of Schön’s papers until details of his method were spelled out. It did not.
Similarly, Schön’s colleagues who were excited about his results and helping him prepare manuscripts reporting them might have pressed him to spell out the details of his methods in his manuscripts (and theirs, to the extent that those assisting him were credited as authors). They didn’t.
At this point, the absence of a detailed account of experimental methods in a scientific paper (or at least a clear reference to where the precise methods one is using are described elsewhere) is a red flag.
One of the most cherished beliefs of scientists is that their world is “self-correcting” – that bad or fraudulent results will be shown up by other experiments and that the truth will emerge. Yet this system relies, far more than is generally realised, on trust: it was natural for his peers to question the way Schön interpreted his data, but taboo to question his integrity.
In 1830, the British mathematician Charles Babbage wrote of the distinction between truth-seekers and fraudsters in his Reflections on the Decline of Science in England, and on Some of Its Causes. The former, he said, zealously prevent bias from influencing facts, whereas the fraudster consciously allows his prejudices to interfere with observations. But what Schön was in fact doing was cleverer than simply falsifying his data, and claiming some miraculous breakthrough. By talking to colleagues, he worked out what results they hoped for – so when he fabricated results that seemed to prove their theories and hunches, they were thrilled.
Schön was, in effect, doing science backwards: working out what his conclusions should be, and then using his computer to produce the appropriate graphs. The samples that littered his workspace were, effectively, props. The data he produced were not only faked, but recycled from previous fakeries (indeed, it was this duplication of favoured graphs that would prove his Achilles’ heel).
In other words, it was not just Schön who deceived his colleagues. Their self-deception (when presented with “hard evidence” that confirmed their scientific hunches) played an important role in packaging Schön’s fraud as credible scientific results.
Indeed, while it’s important to expect that scientists maintain a high level of integrity, perhaps it is a mistake to assume it without corroboration. And scientists should not get huffy if their colleagues want to see the experiment conducted, want to examine notebooks or raw data, or want a clear description of the method so they can try to repeat the experiment. These steps are useful protection against self-deception, against which all scientists should be constantly vigilant. If a scientist has made a mistake in conducting the experiment, in analyzing the data, or in interpreting the results, he or she should want to find that out — preferably in advance of publication.
If a scientist wants to avoid the mechanisms that might detect error (or fraud), then there’s a problem.
So who is to blame? Schön, for stretching the truth to breaking point and beyond? His bosses, for pushing their scientists towards marketable discoveries and prestigious publications that would garner good publicity? The journals, for so eagerly accepting his stream of discoveries? Or his colleagues, for not raising more questions?
In the Schön case, it looks like there is ample blame to go around. The challenge is applying the lessons learned here and avoiding the same kinds of problems in the future.
Hat-tip: Ed Yong