Negative Results

The always interesting Sharon Begley has a WSJ column today on the new scientific journals that only publish negative results.

A handful of journals that publish only negative results are gaining traction, and new ones are on the drawing boards.

"You hear stories about negative studies getting stuck in a file drawer, but rigorous analyses also support the suspicion that journals are biased in favor of positive studies," says David Lehrer of the University of Helsinki, who is spearheading the new Journal of Spurious Correlations.

"Positive" means those showing that some intervention had an effect, that some gene is linked to a disease -- or, more broadly, that one thing is connected to another in a way that can't be explained by random chance. A 1999 analysis found that the percentage of positive studies in some fields routinely tops 90%. That is statistically implausible, suggesting that negative results are being deep-sixed. As a result, "what we read in the journals may bear only the slightest resemblance" to reality, concluded Lee Sigelman of George Washington University.

Example: In the 1990s, publication bias gave the impression of a link between oral contraceptives and cervical cancer. In fact, a 2000 analysis concluded, studies finding no link were seldom published, with the result that a survey of the literature led to "a spurious statistical connection."
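The file-drawer mechanism Begley describes is easy to demonstrate numerically. The following toy simulation (my own sketch, not from the column) runs many studies of an effect that is truly zero, "publishes" only the ones that cross the usual p < 0.05 threshold, and then compares the published literature with the full set of studies:

```python
import random
import statistics

random.seed(0)

def run_study(n=200, true_effect=0.0):
    """Simulate one two-group study of an effect that is truly zero.

    Returns the observed mean difference between groups and a crude
    z-based 'significant?' flag (|z| > 1.96, roughly p < 0.05).
    """
    group_a = [random.gauss(true_effect, 1.0) for _ in range(n)]
    group_b = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    se = (statistics.stdev(group_a) ** 2 / n
          + statistics.stdev(group_b) ** 2 / n) ** 0.5
    return diff, abs(diff / se) > 1.96

all_effects, published_effects = [], []
for _ in range(2000):
    diff, significant = run_study()
    all_effects.append(diff)
    if significant:  # the file drawer: only "positive" studies get out
        published_effects.append(abs(diff))

print(f"mean effect, all studies:      {statistics.mean(all_effects):+.3f}")
print(f"mean |effect|, published only: {statistics.mean(published_effects):+.3f}")
print(f"fraction of studies published: {len(published_effects) / len(all_effects):.1%}")
```

Although the true effect is exactly zero, the "published" subset shows a clearly nonzero average effect, because the selection step keeps only the studies whose sampling noise happened to look like a real signal. A survey of that literature would find the kind of spurious statistical connection the 2000 oral-contraceptives analysis describes.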

Keeping a lid on negative results wastes time and money. In the 1980s, experiments claimed that an antibody called Rap-5 latches onto a cancer-related protein called Ras, exclusively. Scientists using Rap-5 then reported the presence of Ras in all sorts of human tumors, notes Scott Kern of Johns Hopkins University. That suggested that Ras is behind many cancers.

Oops. The antibody actually grabs other molecules, too. What scientists thought was Ras alone was a stew of compounds. In part because the glitch was published in obscure journals, researchers continued to use Rap-5 and reach erroneous conclusions, says Dr. Kern.

"If the negative results had been published earlier, scientists would have saved a lot of time and money," adds Bjorn Olsen of Harvard Medical School, a founding editor, with Christian Pfeffer, of the Journal of Negative Results in Biomedicine.

After a slow start in 2002, that journal is receiving more and better papers, says Dr. Olsen. One found that, contrary to other reports, the relative length of the bones of a woman's index finger and ring finger may not be related to her exposure to testosterone in utero. Another found that a molecule called PYY doesn't have a big influence on body weight; another, that variations in a gene that earlier studies had associated with obesity in mice and in American and Spanish women aren't linked to obesity in French men or women.

That may sound like the set-up for a joke, but studies that dispute connections between a gene and a disease are among the most important negative results in biomedicine. They undercut the simplistic idea that genes inevitably cause some condition, and show instead that how a gene acts depends on the so-called genetic background -- all of your DNA -- which affects how individual genes are activated and quieted. But you seldom see such negative results in top journals.

Although this is a fascinating trend, I'm skeptical of Begley's cynical conclusion:

Why are scientists coy about publishing negative data? In some cases, says Dr. Kern, withholding them keeps rivals doing studies that rest on an erroneous premise, thus clearing the field for the team that knows that, say, gene A doesn't really cause disease B.

Having generated my fair share of negative results (aka experimental failure), I think a more important reason is that negative results are, by definition, more ambiguous than positive results. When my Westerns didn't work, I didn't know if the negative result was real, or just a symptom of my incompetence. (Most of the time, I'm sure it was just my incompetence.)


This is very exciting; I have long thought that a repository for reproducible negative outcomes of hypotheses would be a great resource for preventing grad students and postdocs (and lab directors!) from pursuing needless experiments or duplicating the investigation of hypotheses that don't pan out.

This is a very different issue than your failed Westerns, but if your Westerns worked and showed results that failed to support a given hypothesis, that would indeed be worthy of publication and dissemination in my mind.

I'm all for journals publishing more negative results, but I think it's more important to publish results refuting previously published work. The general rule for years (at least in the basic research end of the biological sciences) has been that you couldn't publish work finding the opposite of a well known previous result unless you A) demonstrated that it was present in more/broader conditions or B) could explain how the original study came to an erroneous result. This is very difficult (for A) or impossible (for B) to do, outside of perhaps basic genetics work.

I suspect that eventually for work to be published in a major journal, you'll need your work to be partially duplicated by another lab. It would slow progress, but improve reliability, and I'd be all for it...

By Crusty Dem (not verified) on 17 Sep 2006 #permalink