Book review: Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World.


Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World
by Eugenie Samuel Reich
New York: Palgrave Macmillan
2009

The scientific enterprise is built on trust and accountability. Scientists are accountable both to the world they are trying to describe and to their fellow scientists, with whom they are working to build a reliable body of knowledge. And, given the magnitude of the task, they must be able to trust the other scientists engaged in this knowledge-building activity.

When scientists commit fraud, they are breaking trust with their fellow scientists and failing to be accountable to their phenomena or their scientific community. Once a fraud has been revealed, it is easy enough to flag it as pathological science and its perpetrator as a pathological scientist. The larger question, though, is how fraud is detected by the scientific community -- and what conditions allow fraud to go unnoticed.

In Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World, Eugenie Samuel Reich explores the scientific career of fraudster Jan Hendrik Schön, piecing together the mechanics of how he fooled the scientific community and considering the motivations that may have driven him. Beyond this portrait of a single pathological scientist, though, the book considers the responses of Schön's mentors, colleagues, and supervisors, of journal editors and referees, of the communities of physicists and engineers. What emerges is a picture that challenges the widely held idea that science can be counted on to be self-correcting.

Reich describes Schön's scientific training at the University of Konstanz, and the chain of events by which he secured a postdoctoral appointment at Bell Labs under the supervision of Bertram Batlogg, the eminent scientist who had done important research on semiconductors. She explores how Schön worked to fit in as a member of the research team, attentive to expectations and working hard to meet them. Unfortunately, the work Schön did to meet expectations seems to have involved preparing persuasive "results," some at Bell Labs and some on frequent visits back to his Ph.D. lab at Konstanz, without actually conducting experiments or collecting data. Pleased with what Schön was producing, his coworkers didn't imagine that his findings were a fiction.

Here, the nature of interdisciplinary collaborations -- and their vulnerability to breaches of trust -- is especially relevant. Schön worked with chemists who grew organic crystals, which he was then supposed to turn into transistors and subject to various measurements. It is not obvious that the crystal-growers could be expected to have the expertise in transistors (either the theory behind them or the mechanics of building and operating them) to double-check Schön's work. Indeed, they assumed, with the rest of Schön's colleagues at Bell Labs, that he knew what he was doing, that he actually conducted the experiments he described, and that he had experimental data to back the results he claimed to have found.

All of these assumptions turned out to be bad ones.

Yet Schön's results were well received by the members of his team at Bell Labs. Part of this was likely due to excitement at being shown plots of data demonstrating that effects they thought might be possible were actual. Another component of the warm reception may have been business. As the research arm of AT&T and then, after the 1996 spinoff, of Lucent Technologies, Bell Labs had to meet commercial goals as well as scientific goals, a pressure that had been building since the telephone monopoly was broken up in the 1980s. Thus, its researchers had an imperative to produce results that were interesting and that might possibly lead to marketable (and patentable) technologies. If Schön's organic transistors performed the way he said they did, this might open the door for innovations that would change what was possible in computers and other electronic devices. Especially as economic pressures on Bell Labs increased, those receiving Schön's results in-house clung to optimism. Schön seemed able to produce result after result, working rapidly and drawing very little in the way of resources. His work seemed to keep things moving forward towards "deliverables" -- whether published papers, patent applications, or potential real-world applications -- to bring fame (and stock value) to Bell Labs, and to demonstrate the importance of the work done by particular research units and individuals. For Schön, this track record of productivity was also a matter of securing a permanent position as a member of the technical staff at Bell Labs.

As you might imagine, Plastic Fantastic suggests some dangers of the private sector lab in a time of reorganization and shrinking resources. It's not obvious, however, that this is a problem unique to the private sector, especially in circumstances where universities, in response to shrinking resources, are encouraging their faculty to take a more entrepreneurial approach to their research activities. Needing a new big accomplishment (or many) to prove your value to the organization (and, by extension, your supervisors' value, your unit's value, etc.) could provide your average human scientist with temptation to cheat. And, given the number of people collaborating with Schön at Bell Labs and at the University of Konstanz, a lot of people had a stake in Schön's impressive list of reported achievements at Bell Labs being true. Interestingly, their investment in Schön's work seemed to manifest itself more in talking up the results (and helping tailor the language in the papers reporting them to persuade journal referees) than in actually checking to see if the results held up.

If these scientists felt a tension between Schön's promising new findings and well-tested findings that might be less exciting, it was not enough to put the brakes on Schön's deluge of results. Schön ended up bringing Bell Labs the sort of excitement most of its members would probably have chosen to avoid, if only they had seen it coming. Yet as Schön kept producing interesting results, no one seemed to entertain the notion that he might have been making them up. Might accepting this possibility have led them to scrutinize his work more carefully, helping them avoid tying the Bell Labs name and reputation to a meteoric rise followed by a sudden and disgraceful fall?

Central in Reich's account are the scientific journals and the role they played in endorsing -- indeed, hyping -- Schön's results. In an earlier post, we considered some of what Reich found as she reconstructed how Schön's manuscripts made their way through what was supposed to be a very rigorous peer review process. From that post:

Along with its British competitor Nature, Science was - and is - the most prestigious place to publish research. This was not only because of its circulation (about 130,000, compared with 3,700 for a more niche publication such as Physical Review Letters), but also because, like Nature, it has a well-organised media operation that can catapult the editor's favourite papers into the pages of national newspapers.

The media attention is sometimes justified, as the journals operate quality control via peer review, a process in which papers are pre-vetted by other experts. But Schön was a master at playing the system. He presented conclusions that he knew, from feedback at Bell Labs, experts were bound to find appealing. Editors at both journals received positive reviews and took extraordinary steps to hurry Schön's papers into print: on one occasion Science even suspended its normal policy of requiring two independent peer reviews because the first review the editors had obtained was so positive. Nature seems to have kept to its policies, but failed on at least one occasion to make sure Schön responded to questions raised by reviewers.

A journal's quality control mechanisms can only be counted on to work if they're turned on. You can't take account of the reviewer's insight into the importance or plausibility of a result, the strengths or weaknesses of a scientific argument, if you don't get that reviewer's insight. Nor can you really claim that a published paper is the result of rigorous peer review if the authors don't have to engage with the reviewers' questions.

In Plastic Fantastic, Reich walks us through the steps with several of Schön's manuscripts, noting the referee questions that Schön sidestepped. The referees seem to have trusted that the editors would require Schön to address their concerns before the papers were published, but that did not happen. Nor, for that matter, did the journals make any note of the concerns the referees raised that Schön failed to answer. Reich writes:

[I]f only the papers had acknowledged criticisms that came up during review, readers might have more easily overcome the sense of awe conveyed by the compelling data, and scrutinized the results more closely. In addition, the technical details that reviewers asked Schön to include would, if provided, almost certainly have made it easier for other scientists to follow up his work in their own laboratories. (115)

Here, it is hard not to wonder whether the priorities driving the journal editors made them especially vulnerable to a fraudster like Schön. It is obvious that the process that unfolds between submission of a manuscript and its publication is not a pure exercise of organized skepticism, but the detailed history of Schön's manuscripts suggests that organized skepticism was barely on the journal editors' radar. Even when referees were focused on the task of identifying unlikely results, likely mistakes, and pieces of the experimental procedures and theoretical framework that called for more detail and clearer explanation, the editors seemed to set these concerns aside in favor of publishing -- and publicizing -- a flashy result. Given that the editors didn't demand that Schön's manuscripts answer the referees' criticisms and questions before they were deemed worthy of publication, it's hard not to ask to what extent peer review acts as window dressing rather than a real mechanism for ensuring credibility. Perhaps Schön's submissions were instructive in demonstrating what can happen when your main screening criterion is what seems exciting or important, rather than what has lined up enough robust evidence and fit with our theoretical understanding to be likely true.

Perhaps, though, the journal editors were banking on the reputation of Bell Labs, and of Schön's coauthors like Bertram Batlogg, to ensure the scientific credibility of Schön's submissions. In the narrative that unfolds in Plastic Fantastic, Schön seemed to gain great advantage from the reputations of those standing with him, putting him in a position where his fellow scientists took his ideas and results more seriously right out of the gate. But what does it mean to take an idea or a result seriously? Surely it shouldn't mean quenching the organized skepticism with which scientists are supposed to greet all new scientific claims -- even their own. Indeed, Plastic Fantastic describes the efforts of at least some of the scientists who didn't take Schön's results as established truth, whether because they recognized theoretical problems with them, or because they could not, despite serious time and effort, manage to reproduce his experiments.

The extent to which the reputations of Bell Labs and of eminent coauthors like Batlogg seemed simultaneously to call attention to Schön's claims as important and credible while insulating those claims from rigorous prepublication examination poses an important question for the scientific community: What ought we to infer from reputation, and what must we establish by doing our own legwork? In the aftermath of the Schön debacle, it would not be surprising to find physicists and engineers a bit more leery about giving reputation too much weight. Reputation is something built up over time; accountability is something requiring constant attention even after one's reputation is established.

Yet, as Reich notes, there seems to be a taboo in the scientific community against recognizing fraud as a possibility when holding one's fellow scientist accountable for their results. Results may be difficult to reproduce for myriad other reasons. While it may be hard to make sense of particular results given scientists' understanding of a particular class of phenomena, that might just point to a place where theory lags behind experimental results. The presumption is in favor of trusting one's fellow scientist (while carefully guarding against self-deception). But the Schön case suggests that it might be prudent to shift the burden of proof here, to demand positive evidence for the credibility of the results and of the scientist who produced them.

Just how to do this is left as a challenge for practicing scientists who wish to share the labor of knowledge-building without being made to feel foolish or wasting time and resources trying to build on scientific findings that turn out later to have been fraudulent. But Reich notes that the belief in the self-correcting nature of science is fairly useless unless actual scientists do the work to hold scientists and scientific results accountable. She writes:

[W]hich scientists did the most to contribute to the resolution of the case? Paradoxically, it wasn't those who acted as if they were confident that science was self-correcting, but those who acted as if they were worried that it wasn't. Somewhere, they figured, science was going wrong. Something had been miscommunicated or misunderstood. There was missing information. There hadn't been enough cross-checks. There were problems in the experimental method or the data analysis. People who reasoned in this way were more likely to find themselves pulling on, or uncovering, or pointing out, problems in Schön's work that it appeared might have arisen through clerical errors or experimental artifacts, but that eventually turned out to be the thin ends of wedges supporting an elaborately constructed fraud. ... In contrast, those who did the most to prolong and sustain the fraud were those who acted as if the self-correcting nature of science could be trusted to come to the rescue, who were confident that future research would fill the gaps. This way of thinking provided a rationalization for publishing papers at journals; both at Science magazine, where editors expedited at least one of Schön's papers into print without always following their own review policy, and at Nature, where, on at least one occasion, technical questions about Schön's method were less of a priority than accessibility. Bell Labs managers reasoned similarly, taking the decision to put the reputation of a renowned research institution behind research that was known to be puzzling, and so crossed the line between open-mindedness to new results and leading others astray. (238-239)

Jan Hendrik Schön himself remains something of a cipher. He comes across as a person who recognized the social aspect of scientific activity -- canvassing his colleagues for thoughts about what effects would be worth exploring and for what kinds of results they would need to see to be persuaded that an effect was real -- while all but abandoning the idea that his scientific work needed to accurately represent a physical reality probed through repeatable experiments. Working largely from the impressions of his coworkers (since Schön was not a cooperative subject for this book), Reich presents Schön as a people-pleaser, someone with a knack for figuring out what his supervisors wanted to see and then delivering it to them.

Why he chose science as the realm in which to make people happy is a lingering question.

Plastic Fantastic suggests that, ultimately, Schön's house of cards collapsed sooner than it might have if only he had understood the theory of the systems he was supposed to be exploring a little bit better. Because he didn't understand that theory, he unwittingly made up results that looked interesting but that could not be explained by reference to existing theories. Of course, he had no new theory to offer that might explain the findings he was reporting while also explaining the well-established behavior of other materials. Similarly, since he was making results up rather than arriving at them through experimentation, he did not recognize the implausibility of the experimental conditions he claimed to have used. With more knowledge of the theory and with a few real experiments under his belt, Schön might have constructed more plausible lies -- perhaps plausible enough that he might have gotten away with his fraud.

After all, not getting away with it was primarily a matter of other scientists in the community voicing their skepticism.

Plastic Fantastic is an absorbing book that reads like a psychological thriller or a crime novel set in the scientific community. The challenge, for the working scientist reading this account, is to figure out what might be required of her and her community to avoid falling prey to a fraudster like Schön.

* * * * *

Author Eugenie Samuel Reich was gracious enough to answer some of my questions about Plastic Fantastic and the Schön case. I'll be posting our question-and-answer tomorrow.


It sounds like a fascinating book.

But how does it challenge the idea that science is self-correcting? It seems to me that it provides further proof that science is self-correcting.

Science didn't suffer. By publishing this book, science gets stronger. The journal editors put their reputations on the line and they suffered. But, if the peer reviewers spotted the problems, then we assume readers of the journal did also. Editors don't determine truth, the readers do.

There are good reasons why it's taboo for scientists to accuse other scientists of fraud based only on their experimental data. The only drawback is some frauds will slip through, but only for a while, until the system works the way it's supposed to.

People don't look for fraud because outright fraud is rare. Science is no exception; this is the case everywhere. Which is why grifters and conmen are able to make a consistent living, simply by being few enough that no single person or company finds it worth the cost to guard extensively against them.

And reviewers are not in the business of deciding whether a paper is correct. They decide whether it's plausible and interesting. Correctness is determined over the following years as the paper is read and referenced by others and its ideas tested and incorporated into other work.

Outright fraud is the RULE rather than the exception in "applied science" today.
Pseudo-science owns the field.
"make up a study that proves this drug is safe and we'll give you this money.."
'-yassuh, boss, rightaway boss.'
The hundredth monkey survives the test, so the results are based on only one monkey .. the 99 dead monkeys are clearly irrelevant to the purpose of the test.
This is the pseudoscience ratpack at work,
the race to the bottom of intellectual dishonesty (not to mention spiritual bankruptcy and a good deal of actual criminality.)
For the (apparently) relative few who have personal integrity & true scientific sense of inquiry, consider joining the 'union of concerned scientists'
http://www.ucsusa.org/scientific_integrity/

A great review of a really important topic. Frauds like the subject of this book, as well as Trofim Lysenko and others in their respective fields do tremendous damage to the entire enterprise of science.

Good on you for calling them out.

It's been said many times before by many others, but is worth repeating: Peer review isn't necessarily good at catching fraud. The review process, at least of an experimental paper, generally starts with the assumption that the authors (or at least the first author) actually did the work. Then, reviewers assess whether the experiment was well-designed, the context of the experiment (from citations and the author's explanation) makes sense, the data were analyzed in a valid manner, and the paper is coherently organized and clearly written. There are times when fraud may be obvious because it involves blatant contradictions, serious incoherence, or immediately apparent plagiarism, but we can't always count on that. A smart and determined liar can game the peer review system.

By Julie Stahlhut (not verified) on 31 Aug 2009 #permalink

Great review of a timely book.

Re: the comments, sure, it's plausible to say that the publication of the book is an example of how the research community is ultimately self-correcting. It would be nice if every person who brings up Alan Sokal would also note that his hoax led to important correctives in science studies (see for example the reflections in the issues of Social Text after the hoax). It would be nice if the editors at Science and Nature would fully interrogate the implications of this rupture in the same way.

Instead the assertion of "self-correction" seems to be restated as an article of faith, while those seeking to make our understanding of the practice of science (as opposed to the theory of it) more nuanced are marginalised in various rhetorical ways....

Interesting... I'll have to read some more on this.

My undergrad alma mater (Caltech) operated under a serious honor code. We're talking take home finals.
To me this makes perfect sense and was actually quite useful in training scientists and engineers. Cheating is easy in the 'real world', so teaching students to value something more than 'don't get caught' is a very good thing.

The deliverables/economic/business pressures angle is more of a concern these days than the occasional pathological fraudster. These really do push otherwise honest scientists to fraud... most often selective omission or padding of results. This is the sort of thing that pushes science down the slope.

For a long time I have been wondering if and when someone will write a book about the escapades of Jan Hendrik Schön. This book is my next read.

I do disagree with the claim that "People don't look for fraud because outright fraud is rare." By its own nature, the fraudster does everything s/he can to hide or disguise the fraud. There is no way to really know how many fraudsters there are out there in science. We only hear about the big names that were caught. There are many, I believe, small fraudsters whose work is not an attention grabber like that of Jan Hendrik Schön. Yet they either occupy, when caught, the attention of their own small community, or they simply fade away before the waves they created reach our shores. Science today is just another human business endeavour where greed is a major driving force. As such, certain scientists, their numbers probably constantly growing, will get their aspirations achieved by hook or by crook.

BTW, when the economy goes south, we better be doubly vigilant looking for those fraudsters.

It's pretty natural that we accept more easily what we expected to see and look harder at what goes against the grain. It slows down the corrective factor, but eventually someone catches on. At least one psychology study of scientific studies has shown that if the conclusion goes against received wisdom, the methods get more scrutiny and are criticized; if the conclusion goes with the majority, the methods are given a pass.

I'm not sure how one goes about solving that but I have a suggestion: checklists for reviewers of things that one should always find in the procedure and that should be asked for if absent. And maybe an annual "scary story" about science gone wrong--perhaps around Hallowe'en.

As other people have mentioned above, reviewers generally presume that the research described in the manuscript under consideration was actually performed. That is for the practical reason that few reviewers have the time and resources to devote to actually performing the described experiment. Also, normal procedure in experimental science is to keep a lab notebook--this especially holds for commercial labs, which may require the notebook to establish ownership of intellectual property (the reviewers, who for obvious reasons were not affiliated with Bell Labs, would have had no way of knowing that Schön deviated from this practice). The ethos of "presumed innocent until proven guilty" is very much in play. As I recall from earlier readings on the topic, Julie Stahlhut's comment about "blatant contradictions, serious incoherence, or immediately apparent plagiarism" seems to have come into play in this case: somebody noticed that a certain figure in one of Schön's papers looked just like a figure in another of Schön's papers that was not directly related to the first, and on investigation she found another half dozen or so instances where Schön recycled his figures. One such incident could have been an innocent mistake; repeated instances suggest that the papers are not what they claim to be.

It was reasonable of the reviewers to assume that the editors would require Schön to address their questions in some fashion, because that is standard procedure. Reputable non-GlamourMag journals (at least in my field) always operate this way, and GlamourMags are also supposed to operate this way. The reviewers do not always see those responses, because the Editor has discretion to find that the lead author's response is sufficient (or reject the paper after finding the response insufficient).

By Eric Lund (not verified) on 01 Sep 2009 #permalink

I don't know who came up with it, but a useful formulation is that science is supposed to make it hard for you to fool yourself - not only are your results supposed to check what you think against reality, but you're supposed to check your own ideas and results against themselves as well. Problems happen when honest results aren't what other pressures (money, status, desire to protect friends or institutions) will reward - when the cost of being honest is too high (or the profit in being dishonest too high). In that case, people may decide to stop questioning themselves or others, and then bad things happen.

Trust is a bank account - the process of science (the constant interrogation and testing of reality) makes it possible for others to use results without having to rederive them themselves. People who fake results, and the people who don't question the fakes, withdraw from the account - they get things they want (results, professional credit) without paying the price needed to make the results useful to others. Once the account is empty, though, people can't do research well (because they don't know which previous work is accurate), and the public can't trust us (because they don't know when we're saying what we and our bosses want to hear and when we are speaking what we believe to be true). When we decide that preserving our own and our friends' status is worth more than being honest, we are spending something that we can't easily replace, and whose consumption we don't notice until far too late.