Frontier science versus textbook science

Vacation time! While Orac is gone recharging his circuits and contemplating the linguistic tricks of limericks and jokes or the glory of black holes, he's rerunning some old stuff from his original Blogspot blog. This particular post first appeared on January 16, 2006. Enjoy!

During my usual weekly perusal of the New York Times, I was surprised to come across this rather perceptive article by Nicholas Wade in which he discusses the difference between "frontier" science and "textbook" science. No, I wasn't surprised because Nicholas Wade wrote a perceptive article, but rather because it was published in the New York Times. In it, he asks:

How then can the fraudulent claims by Dr. Hwang Woo Suk have been accepted by Science, a leading journal that rejects most papers submitted to it? How can the community of stem-cell scientists have allowed a very visible claim to have stood unchallenged in their field for 20 months? Little wonder that Richard Doerflinger, an official of the United States Conference of Catholic Bishops, ridiculed the dreams of therapeutic cloning in a statement last week, scoffing that scientists were chasing miracle cures "in pursuit of this mirage."

The contrast between the fallibility of Dr. Hwang's claims and the general solidity of scientific knowledge arises from the existence of two kinds of science - a distinction that is often blurred when new advances are reported first by scientific journals and then by the news media. There is textbook science and frontier science, and the two types carry quite different expiration dates.

Textbook science is material that has stood the test of time and can be largely relied upon. It may include findings made just a few years ago, but which have been reasonably well confirmed by other laboratories.

Dr. Hwang Woo Suk, as you may recall, is the Korean scientist who has been disgraced for publishing what are now widely believed to be fabricated results indicating that he had created a line of patient-specific stem cells, as well as for committing many ethical lapses, such as using eggs from women who worked for him and who were thus potentially susceptible to pressure from him. He has blamed the fabricated results on subordinates, but clearly he is at the very least guilty of extreme sloppiness and at worst of outright fraud. The whole scandal has been a major black eye for Korea's previously lauded efforts in stem cell research and has provoked many attacks on the peer review process that allowed Dr. Hwang's papers to be published in the journal Science, one of the most prestigious and difficult-to-crack scientific journals in the world.

Mr. Wade makes the point that research like Dr. Hwang's is what he characterizes as "frontier" science, that is, science at the very edge of what is known or possible, and he warns against overreacting:

Science from the frontiers of knowledge, on the other hand, is wild, untamed and often either wrong or irrelevant to future research. A few years after they are published, most scientific papers are never cited again.

Scientific journals try to impose order on the turbulent flow of new claims by having expert reviewers assess their merit. But even at the best journals, reviewers provide only a rough screen. Many papers slip through that later turn out to be innocently wrong. A few, like Dr. Hwang's, are found to be fraudulent.

This rough screening serves a purpose. Tightening it up, in a vain attempt to produce instant textbook science, could retard the pace of scientific advance.

I'm not sure I'd be quite so blithe about the failure of peer review in this particular case, but Mr. Wade does make a good point. Much of science at the very frontiers turns out not to be correct. However, the press all too often reports it as though it were. We in science understand the difference between settled textbook science and the sort of frontier science that makes it into journals like Science. Indeed, we often lament that the very highest tier journals, such as Nature, Science, and Cell, tend to be too enamored of publishing "sexy science," exciting or counterintuitive results that really grab the attention of scientists--in other words, "cutting edge" or frontier science. Such journals seem to pride themselves on publishing primarily such work (which is one reason why they are so widely read and cited), while more solid, less "sexy" results tend to end up in second-tier journals.

This leads to a paradox. The science published in the highest profile, most prestigious journals is almost by definition the most tentative science. Given that, it is surprising how much of what is published in such journals actually does stand the test of time, but it should not be surprising that much of it does not. However, the very prestige of such journals lends such research seemingly more authority than research published in less prestigious journals. It is often said that one Nature, Science, or Cell paper is worth five or even ten papers in more pedestrian, middle-of-the-road journals as far as improving a scientist's CV (and chances of a good job or promotion) goes. Perhaps that is because publications in such journals are viewed as an indication that the work a scientist is doing is on the cutting edge. That perception, built up over time, is likely the major reason that it is very, very difficult to get a paper accepted and published in Science, Nature, or Cell. The vast majority of submissions are rejected, many without even being sent out for peer review because an editorial decision is made that they are not "interesting" enough (something that happened to me once). However, scientists understand that papers published in the most cutting edge journals are tentative. They're interested in the papers because such work is the most likely to advance the frontiers of science, but they also know that the papers have a higher than average probability of being wrong, either in part or in whole, or of being dead ends. Wade nails it when he writes:

But the roughness of the proceedings is not prominently advertised by journal editors, except when cases of blatant fraud are detected, whereupon they proclaim that peer review cannot reasonably be expected to detect fraud. They do not protest so much when newspapers report their journals' claims as if they were certifiably true. Because of Science's authority, Dr. Hwang's claims to have cloned human embryonic cells were prominently reported and presented to the public as if they were important breakthroughs.

I would also point out that, because of the imprimatur of Science, many scientists and physicians, myself included, considered Dr. Hwang's results to be major breakthroughs. Of course, part of this could be due to wish fulfillment, given the promise of fantastic new treatments for a variety of diseases that Dr. Hwang's results and new technique seemed to offer, but that's exactly the sort of situation in which we as scientists should be the most skeptical.

Many ideas for reforming peer review have been floated, but in reality I doubt that any of them would catch a determined fraud. Science and peer review inherently depend upon trust that the investigator presenting his data for publication has not fabricated it. The only real way to detect fraud would be to put such an onerous burden on peer reviewers that it would become difficult to find qualified scientists willing to serve as reviewers unless they were paid. It would require seeing the raw data, and anyone who has done research knows just how hard it is to go through another scientist's laboratory notebook to evaluate the raw data. One proposal for reforming publication procedures and peer review that might actually help somewhat, however, is this:

But last week Dr. Kennedy announced he was considering revising the journal's publication procedures, though not with any great hope of preventing future cases of fraud. He suggested that authors would be required to state in writing their specific contributions to a report, a reform perhaps aimed at Dr. Gerald Schatten of the University of Pittsburgh. Dr. Schatten accepted senior authorship of - and thus responsibility for - one of Dr. Hwang's papers, even though Dr. Schatten had performed none of the experiments and was not in a position to vouch for them. All the work was done in Seoul.

A second proposed change is to have all authors state that they agree with an article's conclusions.

Both procedures may seem to include a certain potential for generating strife. Each author could overstate his or her contribution, arousing the wrath of all the others. Some authors may think a conclusion too timid, while others consider it an overstatement.

Medical journals, including JAMA and several surgical journals, have been doing just this for a while now, with no undue burden or generation of strife. It may not prevent fraud, but it definitely makes one feel accountable as an author. I can say from personal experience that, when I sign off on one of those statements for a paper on which I am a co-author, I want to make damned sure that I have carefully read the manuscript in its entirety and that I do indeed agree with it, at least in general.

As Wade points out:

Tightening up the reviewing system may remove some faults but will not erase the inescapable gap between textbook science and frontier science. A more effective protection against being surprised by the likes of Dr. Hwang might be for journalists to recognize that journals like Science and Nature do not, and cannot, publish scientific truths. They publish roughly screened scientific claims, which may or may not turn out to be true.

Indeed. That is the very nature of science. What is published the first time is considered tentative. It may or may not be correct. If other scientists can replicate the results or, even better, replicate them and use them as a foundation upon which to build and make new discoveries, only then does the work become less frontier science. And if the results are replicated enough times and by enough people and used as a basis for further discoveries, to the point that they are considered settled, only then can they become "textbook" science. What, alas, the public often doesn't understand is that science is a process, not a bunch of facts, and that at its cutting edge it is often quite uncertain and controversial among scientists. To a lot of scientists, Dr. Hwang's work seemed fishy, but seeing it in Science allayed many suspicions, at least until other groups could try to replicate the research. In this case, it turned out that the skeptics were right.


But sometimes peer review is too conservative. Take this example involving RNA enhancement.

But apparently they had. So Li knew he was going to have to put together a watertight case to convince other scientists. He started to realize how difficult that would be when he first submitted his work for publication to Science in August 2004. It was promptly rejected. He then submitted his paper to Nature that December; when it was rejected, he resubmitted it with new data in April 2005. He presented his findings at the annual conference of the American Association for Cancer Research in Anaheim, California, in May 2005, and the reception wasn't warm. "I got a lot of sceptical questions," Li says. Then, after an extensive delay, Nature again rejected Li's paper in December 2005. Without evidence for a mechanism, he says he was told the results weren't convincing enough.

and again

Li resubmitted the work to Science with Place's additional molecular biology results, but it was again rejected. The letter he received said that because the work "would represent a substantial paradigm shift", the evidence just wasn't strong enough. Again, editors required demonstration of a mechanism.

In my opinion, requiring proof of a mechanism before allowing publication is too stringent.

This is the problem with too high a reliance on peer review. If the conventional paradigm is wrong or incomplete, work challenging that paradigm can't get published, can't get funded, can't get done.