The Winner's Curse and Scientific Publishing

Neal Young, John Ioannidis, and Omar Al-Ubaydli have an article in PLoS Medicine suggesting that because scientific publishing places so much emphasis on big positive results in big journals, many published results are going to be wrong. (Remember that Ioannidis published an earlier paper arguing that many results are likely wrong on purely statistical grounds.) They borrow an idea from economics called the winner's curse.

Basically, the winner's curse is the idea that in some auctions with imperfect information, the winner will tend to overpay. Applied to science, it means that when big journals -- like Science and Nature -- are both highly selective and highly publicized, the impact of their published results may be out of proportion to how likely those results are to be true.
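
To make the mechanism concrete, here is a minimal simulation of the auction version (my own sketch, not from the paper; the true value, noise level, and number of bidders are all arbitrary). Each bidder's estimate is unbiased on its own, yet the winning estimate, being the maximum, is systematically too high.

```python
# Minimal winner's-curse simulation (illustrative only; all numbers arbitrary).
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0    # true worth of the drilling rights
n_bidders = 10        # competing firms
noise_sd = 20.0       # spread of each firm's estimation error
n_auctions = 10_000   # repeat many times to average out luck

# Each firm's estimate is unbiased: the true value plus zero-mean noise.
estimates = rng.normal(true_value, noise_sd, size=(n_auctions, n_bidders))

print(f"mean of all estimates: {estimates.mean():.1f}")              # ~100.0
print(f"mean winning estimate: {estimates.max(axis=1).mean():.1f}")  # ~130, the curse
```

With these numbers the winner overestimates by roughly 30%, even though no individual bidder is biased.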

In the paper's words:

In auction theory, under certain conditions, the bidder who wins tends to have overpaid. Consider oil firms bidding for drilling rights; companies estimate the size of the reserves, and estimates differ across firms. The average of all the firms' estimates would usually approximate the true reserve size. Since the firm with the highest estimate bids the most, the auction winner systematically overestimates, sometimes so substantially as to lose money in net terms. When bidders are cognizant of the statistical processes of estimates and bids, they correct for the winner's curse by shading their bids down. This is why experienced bidders sometimes avoid the curse, as opposed to inexperienced ones. Yet in numerous studies, bidder behaviour appears consistent with the winner's curse. Indeed, the winner's curse was first proposed by oil operations researchers after they had recognised aberrant results in their own market.

An analogy can be applied to scientific publications. As with individual bidders in an auction, the average result from multiple studies yields a reasonable estimate of a "true" relationship. However, the more extreme, spectacular results (the largest treatment effects, the strongest associations, or the most unusually novel and exciting biological stories) may be preferentially published. Journals serve as intermediaries and may suffer minimal immediate consequences for errors of over- or mis-estimation, but it is the consumers of these laboratory and clinical results (other expert scientists; trainees choosing fields of endeavour; physicians and their patients; funding agencies; the media) who are "cursed" if these results are severely exaggerated -- overvalued and unrepresentative of the true outcomes of many similar experiments. For example, initial clinical studies are often unrepresentative and misleading. An empirical evaluation of the 49 most-cited papers on the effectiveness of medical interventions, published in highly visible journals in 1990-2004, showed that a quarter of the randomised trials and five of six non-randomised studies had already been contradicted or found to have been exaggerated by 2005. The delay between the reporting of an initial positive study and subsequent publication of concurrently performed but negative results is measured in years. An important role of systematic reviews may be to correct the inflated effects present in the initial studies published in famous journals, but this process may be similarly prolonged and even systematic reviews may perpetuate inflated results. (Citations removed.)

They go on to argue that the winner's curse is a result of the structure of the scientific publishing market. For one, that market is an oligopoly: the vast majority of highly cited publications appear in a very few high-profile journals. Further, these journals pride themselves on the notion that their selectivity is what makes them good, i.e., that accepting only 5% of submitted papers is what makes those papers accurate and important, rather than the actual procedures employed in the papers themselves.
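
That 5% figure is easy to caricature with a toy model (mine, not the authors'; the effect size and noise level are made up). If many labs measure the same modest effect and a journal accepts only the most extreme 5% of estimates, the published literature can overstate the effect several-fold.

```python
# Toy model of 5% selectivity (illustrative; effect and noise are invented).
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.2      # the modest "true" effect every lab is measuring
noise_sd = 0.3         # sampling error in any single study
n_studies = 100_000

observed = rng.normal(true_effect, noise_sd, n_studies)
cutoff = np.quantile(observed, 0.95)        # journal takes only the top 5%
published = observed[observed >= cutoff]

print(f"mean across all studies:   {observed.mean():.2f}")   # ~0.20
print(f"mean of published studies: {published.mean():.2f}")  # ~0.82, about 4x the truth
```

Nothing about any individual study is dishonest here; the inflation comes entirely from the selection step.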

And this structure for scientific publishing is particularly arbitrary now, given the ready availability of web publishing. Back when journals were actually printed, there were real space limitations. Now that the Internet is the primary means of disseminating papers, the argument that journals should be that selective no longer holds water.

This market structure also makes scientific publishing prone to information cascades and a herd mentality. The first person in a field publishes something, and then everyone else is inclined to follow without questioning the original work's core premises.

Young et al. go on to propose some solutions, including relaxing selectivity, encouraging the publication of negative or contrarian results, and moving decisively toward web publishing.

Definitely read the whole thing.

For my part, I have several comments:

1) The authors mention that one of the ways out of the winner's curse in auctions is that experienced bidders systematically shade their bids down. The analogy for science is that if we took articles published in more prestigious journals less seriously, we would be less prone to error.

I actually think that this happens already. There is a running office joke in most biology departments about how Nature and Science articles are never right. The joke derives partly from the observation that more than a few of them play fast and loose with controls and with descriptions of their methods. I think that most of us do read and pay attention to these papers, but we approach them with considerable skepticism. (Unfortunately, this is not often the case in the popular press -- which is half the reason I write this blog.) I would be much more likely to believe a paper published in a prestigious but more thorough journal like Cell or Neuron (those articles are legendarily long, by the way) than I would one in Nature or Science.

So in a sense, this "experienced bidder" correction is already present among scientists trying to understand a field.

2) On the other hand, information cascades and the winner's curse are particularly problematic in fields like mine -- behavioral neuroscience. In behavioral neuroscience, the experiments are long and expensive, often include fewer than 10 data points per experimental group, and are produced by a very small number of specialized labs. This means that the first person to publish on a question is often given what amounts to the right of way; it is rare for anyone to go back and replicate the exact experiment.
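
A quick, entirely hypothetical simulation shows why this combination is dangerous (the group size, effect size, and unit variance are assumptions, not numbers from any real study). With fewer than 10 subjects per group, only the runs that happen to overestimate the effect clear p < 0.05, so the first published result on a question tends to be exaggerated.

```python
# Hypothetical small-n simulation (n, effect size, and variance all assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_per_group = 8        # a typical behavioral-neuroscience group size (<10)
true_d = 0.5           # assumed true effect, in standard-deviation units
n_sims = 20_000

sig_effects = []
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_d, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:                                   # only "positive" runs publish
        sig_effects.append(treated.mean() - control.mean())

print(f"power at n=8/group: {len(sig_effects) / n_sims:.2f}")             # ~0.15
print(f"true effect: {true_d:.2f}")
print(f"mean effect among significant runs: {np.mean(sig_effects):.2f}")  # ~1.2
```

So the lab that publishes first is, almost by construction, the lab that got the luckiest draw.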

This makes comprehensive reviews that try to tie disparate results into a coherent theoretical picture both difficult and vital. It also means that behavioral neuroscience sometimes gets stuck on incorrect or incomplete theories just because they were originally well-publicized. (An example, IMHO, would be the spatial theory of hippocampal function. It explains a great deal, but over time we have realized that the story is more complicated than that.)
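
To show what a review can actually do here, this is a sketch of the standard inverse-variance (fixed-effect) pooling a systematic review might apply; every effect size and variance below is invented for illustration. The flashy but noisy first study gets the smallest weight, and the pooled estimate lands much closer to the later, more precise replications.

```python
# Fixed-effect (inverse-variance) meta-analysis; every number is invented.
import numpy as np

effects   = np.array([1.20, 0.45, 0.30, 0.15])  # flashy first study, then replications
variances = np.array([0.30, 0.10, 0.08, 0.05])  # noisier studies get larger variances

weights = 1.0 / variances                        # precise studies count for more
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled:.2f} +/- {pooled_se:.2f}")  # ~0.33, not 1.20
```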

3) This is a ringing endorsement of open access journals -- partially because they can publish more results (including negative ones) and partially because they give greater access to papers about large data sets. Hear, hear for PLoS!

4) This work emphasizes, I think rightly, the need for reproducibility of results and the importance of evaluating methodological soundness over novelty when accepting papers for publication. But the fact that this reappraisal is both necessary and happening is itself a comment on science. Science succeeds not because it is right all the time from the beginning but because it is self-correcting. This is what the pseudoscience advocates who jumped on Ioannidis's earlier paper failed to understand. The fact that scientists are willing to submit to a radical reappraisal of their methods of publication is a testament to the trustworthiness of science, not its wrongness.

5) Not to suggest that economists are free of the biases that afflict the rest of us: Nassim Taleb is arguing that Merton and Scholes should have their Nobel revoked. Merton and Scholes won the 1997 Nobel in economics for a method of pricing derivatives. Taleb argues that this method left investment companies feeling relatively insulated from risk, contributing to much of the recent market failure involving toxic securities.

Knowing very little about the subject, I cannot evaluate whether Taleb is correct. What I can say is that scientists believing in the wrong theory has never resulted -- to my knowledge -- in the need for a trillion dollar bailout.

So we have that going for us...

Hat-tip: Economist and Marginal Revolution


No trillion dollar bailouts? How about the multi-trillion dollar GW industry which has sprung up on the base of the most egregiously misrepresentational glorification of video-game computer "models" ever perpetrated?

By Brian Hall (not verified) on 05 Sep 2009

P.S. Don't you think your motto is rather simplistic?

:P

By Brian Hall (not verified) on 05 Sep 2009