Here at ScienceBlogs, we spend a lot of time debunking various kinds of unscientific falsehood (aka "woo," religious literalism, and Conservapedia). As far as I’m concerned, that’s just great. The world is always suffering from a shortage of skepticism. We need more empiricism and less certainty.
But it’s worth reminding ourselves of the obvious: peer-reviewed science is also vulnerable to bias, false assumptions, and sloppy interpretation. Data doesn’t generate itself. Over at Overcoming Bias, they’ve compiled a short list of recent examples. Here are the most damning:
A recent PLoS Medicine study looked at 111 studies of soft drinks, juice, and milk that cited funding sources. 22% had all industry funding, 47% had no industry funding, and 32% had mixed funding. … the proportion with unfavorable [to industry] conclusions was 0% for all-industry funding versus 37% for no industry funding.
In 2005, the Journal of the American Medical Association found that of medical studies since 1990 cited 1,000 times or more, 1/3 were contradicted by replications, and 1/4 had no replication attempts. Of 49 highly cited original clinical research studies, 45 claimed that the intervention was effective. Of these, 7 (16%) were contradicted by subsequent studies, 7 others (16%) had found effects that were stronger than those of subsequent studies, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged. Five of 6 highly cited nonrandomized studies had been contradicted or had found stronger effects vs 9 of 39 randomized controlled trials (P = .008).
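The excerpt doesn’t say which statistical test produced that P = .008, but a Fisher’s exact test on the underlying 2×2 table (5 of 6 nonrandomized studies versus 9 of 39 randomized trials) reproduces a value of about .008. Here is a minimal sketch under that assumption, using SciPy:

```python
# A minimal sketch, assuming (not stated in the excerpt) that the 5/6 vs. 9/39
# comparison was evaluated with Fisher's exact test on a 2x2 contingency table.
from scipy.stats import fisher_exact

# Rows: nonrandomized studies, randomized controlled trials
# Columns: [contradicted or stronger-than-later effects, held up]
table = [
    [5, 1],   # 5 of 6 highly cited nonrandomized studies
    [9, 30],  # 9 of 39 randomized controlled trials
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio ~ {odds_ratio:.1f}, p ~ {p_value:.3f}")  # p comes out near 0.008
```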
That JAMA paper is downright scary, and not because 32 percent of these highly cited studies were later contradicted. What worries me is that a medical study can be cited more than 1,000 times yet never be repeated or tested. It’s not that scientists are uniquely error-prone; it’s that the scientific process depends on repetition and confirmation to catch those mistakes. But if oft-cited studies aren’t being tested, then what is?