Is there publication bias in animal studies?

Last month, in response to some truly despicable activities by animal rights zealots, I wrote a series of posts about how animal rights activists target even researchers' children and appear to fetishize violence. This simply continued a string of posts that I've done over the years, the longest (and, in my not-so-humble opinion, the best) of which deconstructs a lot of the bad scientific arguments used by animal rights activists to claim that animal research is useless, or nearly so, as well as other arguments made by extremists. One of the key points emphasized in these responses is that, regardless of their shortcomings, animal models for many conditions provide useful data, have led to medical breakthroughs, and are better than any of the alternatives currently touted by animal rights activists. Someday, for example, cell culture and computer models may allow us to replace the use of animals for a lot of studies, but that day is not today, nor is it likely to arrive anytime soon.

Not surprisingly, then, I've had a few readers make me aware of a recently released study published in PLoS Biology, entitled Publication Bias in Reports of Animal Stroke Studies Leads to Major Overstatement of Efficacy. I actually knew about this study last week, because I'm on the PLoS press list. But the study was embargoed until Monday night, and for some reason I let Mike Adams distract me from taking on a real scientific study. On the other hand, it's always a good time to have some fun with our favorite woo-meister of all. It's just fortunate that my readers didn't let me forget about this study.

Science-based medicine depends upon preclinical studies in cell culture and animal models to determine disease mechanisms and, just as importantly, to test new therapies before testing them in humans. I've pointed out before that animal studies don't always correlate as cleanly as we would like with human studies. However, for all their imperfections, animal studies allow us to study phenomena that require three-dimensional structure, with all the different types of cells normally found in the organ in question. One example I like to use is the study of tumor angiogenesis, which requires complex interactions between the tumor cells, vascular cells, and the stroma. I'm aware of models that examine endothelial cells, fibroblasts, and tumor cells in three-dimensional coculture and that can produce some pretty cool results, but they are still just cells in dishes. They're cells in dishes using sophisticated culture systems, but cells in dishes nonetheless.

It's thus of great interest to know what the predictive capability of animal models is. In the case of this study, the authors performed a meta-analysis of animal models of acute ischemic stroke to try to estimate the effect of publication bias on the reported results. As you may be aware, publication bias is an insidious generalized form of bias that creeps into the medical literature because studies showing a positive result are more likely to be published than studies that show a negative result. Also known as "the file drawer effect" (because negative studies tend to be left in the "file drawer" rather than published), publication bias is a problem in the clinical trial literature, so much so that clinical trial registries such as ClinicalTrials.gov have been set up to make sure that the results of all human clinical trials see the light of day. The authors lay out this problem right in the introduction:

It's not surprising that positive clinical trials are more likely to be published--and published in more prestigious journals--than negative studies, because positive studies are scientifically and clinically much more interesting. They produce results that change clinical practice and, presumably, improve our medical practice. On the other hand, although it isn't always appreciated, negative trials can be very useful, too. They can lead to physicians abandoning therapies that they thought to be effective, and that can advance medical therapy as well.
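
To get a feel for how the file drawer effect inflates apparent efficacy, here is a toy simulation of my own (not from the paper; the effect size, sample size, and significance threshold are all invented for illustration). It runs many small two-arm experiments on a treatment with a modest true effect and "publishes" only the results that come out positive and statistically significant:

```python
# Toy simulation of the "file drawer effect" (illustrative only, not
# from the paper): run many small experiments, "publish" only the
# statistically significant positive ones, and compare the mean
# published effect to the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.2      # modest true treatment effect (in SD units)
n_per_arm = 20         # small experiments, as in much animal work
n_experiments = 1000

published = []
for _ in range(n_experiments):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    t_stat, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05 and t_stat > 0:   # only positive results see print
        published.append(treated.mean() - control.mean())

print(f"true effect:           {true_effect:.2f}")
print(f"mean published effect: {np.mean(published):.2f}")
# The "published" literature overstates the effect several-fold.
```

With numbers like these, the surviving studies report an average effect several times the true one, which is exactly the flavor of distortion the funnel-plot methods discussed below are designed to detect.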

What's not as well characterized is whether publication bias is a major problem in animal studies. This study presents evidence suggesting that it might be, at least in models of ischemic stroke and interventions designed to limit and ameliorate the damage done by the cessation of blood flow to a segment of the brain. Basically, the investigators examined a database of animal models of stroke and interventions and identified 16 systematic reviews of the literature on this topic that encompassed 525 unique publications. Only ten publications reported no significant effect of their interventions on the volume of dead brain tissue after a stroke, and only six were completely negative, reporting no significant findings at all. I will admit right here that I don't fully understand all the mathematics and analyses involved, but it is possible to do statistical analyses of the studies to look for patterns suggestive of publication bias, specifically an excess of imprecise studies with large effect sizes. The authors describe their findings in the abstract:

The consolidation of scientific knowledge proceeds through the interpretation and then distillation of data presented in research reports, first in review articles and then in textbooks and undergraduate courses, until truths become accepted as such both amongst "experts" and in the public understanding. Where data are collected but remain unpublished, they cannot contribute to this distillation of knowledge. If these unpublished data differ substantially from published work, conclusions may not reflect adequately the underlying biological effects being described. The existence and any impact of such "publication bias" in the laboratory sciences have not been described. Using the CAMARADES (Collaborative Approach to Meta-analysis and Review of Animal Data in Experimental Studies) database we identified 16 systematic reviews of interventions tested in animal studies of acute ischaemic stroke involving 525 unique publications. Only ten publications (2%) reported no significant effects on infarct volume and only six (1.2%) did not report at least one significant finding. Egger regression and trim-and-fill analysis suggested that publication bias was highly prevalent (present in the literature for 16 and ten interventions, respectively) in animal studies modelling stroke. Trim-and-fill analysis suggested that publication bias might account for around one-third of the efficacy reported in systematic reviews, with reported efficacy falling from 31.3% to 23.8% after adjustment for publication bias. We estimate that a further 214 experiments (in addition to the 1,359 identified through rigorous systematic review; non publication rate 14%) have been conducted but not reported. It is probable that publication bias has an important impact in other animal disease models, and more broadly in the life sciences.

The authors used two analyses. The first, a "trim and fill" analysis, looks at the bias in the data set used for a meta-analysis in order to impute the number and most probable results of unpublished experiments and thereby estimate what the meta-analytic treatment effect would be in the absence of publication bias. Given that this is an estimate based on a "fill-in" method, at best it can only be a rough one, and it's unclear how accurate its estimates are, given that there is a lot of variability between different trim-and-fill estimators and models in various meta-analyses. It also assumes that asymmetries in the funnel plot are all due to publication bias (i.e., that all or most of the missing studies are negative), which is not necessarily true. There can be other reasons why studies are not published (they couldn't pass peer review, for instance), and the unpublished studies may not all be negative. The authors also used Egger regression, which is subject to a different set of potential biases.
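
For the curious, Egger regression is simple enough to sketch. The intuition: if there are no small-study effects, each study's standardized effect (effect divided by its standard error) should scale with its precision (one over the standard error), and the regression line through those points should pass near the origin. A non-zero intercept suggests the funnel-plot asymmetry that publication bias produces. Here is a minimal illustration, emphatically not the authors' code; the per-study effects and standard errors below are invented:

```python
# Minimal sketch of Egger regression for funnel-plot asymmetry
# (illustrative only; the data below are invented, not from the paper).
import numpy as np
import statsmodels.api as sm

def egger_test(effects, std_errors):
    """Regress standardized effect on precision; a non-zero intercept
    is taken as evidence of small-study (e.g., publication) bias."""
    precision = 1.0 / std_errors
    standardized = effects / std_errors
    X = sm.add_constant(precision)          # intercept + precision term
    fit = sm.OLS(standardized, X).fit()
    return fit.params[0], fit.pvalues[0]    # intercept and its p-value

# Hypothetical per-study effects (e.g., % reduction in infarct volume)
# and standard errors: the small, imprecise studies report the biggest
# effects, the classic asymmetric-funnel pattern.
effects = np.array([35.0, 30.0, 28.0, 22.0, 18.0, 15.0, 12.0])
std_errors = np.array([12.0, 10.0, 9.0, 6.0, 4.0, 3.0, 2.0])

intercept, p = egger_test(effects, std_errors)
print(f"Egger intercept = {intercept:.2f} (p = {p:.3f})")
# A large positive intercept is the telltale sign of asymmetry.
```

Trim and fill then goes a step further, imputing mirror-image "missing" studies to restore funnel symmetry and recomputing the pooled effect; that is where the paper's adjusted figure (efficacy falling from 31.3% to 23.8%) comes from.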

Still, the numbers estimated for publication bias are not that surprising; they are in line with estimates for clinical trials. Given that a lot of animal experiments are, in essence, clinical trials that could never be done on humans for ethical reasons, similar levels of bias are to be expected. The authors produced this graph as an estimate of how much the efficacy of each intervention for ischemic stroke is overestimated:

[Figure: Estimated overstatement of efficacy for each intervention tested in animal models of acute ischemic stroke, after adjustment for publication bias.]

What is interesting about the graph above is how much the calculated overestimate of effect size varies depending on the specific intervention. This implies that the animal models used for this study are better for estimating some outcome measures than others. Despite the confounding factors, the authors conclude:

For meta-analyses of individual interventions, we do not believe that these techniques are sufficiently robust to allow the reliable reporting of a true effect size adjusted for publication bias. This is partly because most meta-analyses are too small to allow reliable reporting, but also because the true effect size may be confounded by many factors, known and unknown, and the empirical usefulness of a precise estimate of efficacy in animals is limited. However, these techniques do allow some estimation both of the presence and of the likely magnitude of publication bias, and reports of meta-analysis of animal studies should include some assessment of the likelihood that publication bias confounds their conclusions, and the possible magnitude of the bias.

So, basically, all we can conclude from this study is that, for one type of animal model and the interventions tested in it, there appears to be publication bias, the effect of which can only be very roughly estimated and which varies depending upon which intervention is studied. It is unknown whether publication bias exists for other animal models and, if so, to what extent, but it would be shocking indeed if it did not exist for at least some animal models of disease and treatment.

Animal studies are very important in science-based medicine because they provide the first test of an intervention in something other than a test tube or tissue culture plate. Positive results in animal studies often, depending upon a number of factors, lead to clinical trials. That is the entire point of studies of human disease in which a treatment is tested in an animal, as opposed to purely basic science studies, such as the creation of transgenic mice to test the effect of knocking out or overexpressing a gene product. Consequently, to minimize the chances of animal models misleading us, it is as important to reduce publication bias in animal studies as it is in human studies. However, because far more animal studies are performed than human studies, this will be difficult.

One thing that studies like this demonstrate is that nothing is sacred in science. Animal rights activists claim that scientists unquestioningly and mindlessly support the contention that animal studies represent the best models for disease that we have. Studies like this demonstrate that such is not the case. Not only that, but they demonstrate that scientists continue to seek to minimize the use of animals and to make sure that animals that are used in research are not wasted. Critical studies like this one point out the flaws in how animal research is done and suggest ways to correct those flaws and maximize the chances that the results of animal research will inform rather than mislead.

REFERENCE:

Sena, E., van der Worp, H., Bath, P., Howells, D., & Macleod, M. (2010). Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biology, 8(3). doi:10.1371/journal.pbio.1000344


I have to say I'm OK with a slight overstatement of efficacy in animal studies, as long as potential risks are not understated.

Why? Well, animal studies are often a sort of validation step, right? They are, in that respect, supposed to weed out interventions that do harm or that are unlikely to work in humans. However, as we know, humans and animal models are not identical, so just as there is the chance that something that works in animals does not work in humans, let alone in actual real-life patients, there is also the chance that something that has little effect in animals has more of one in humans.

So, as I said, I am personally OK with a slight bias towards "testing more interventions in humans" in order to find efficacious ones, as long as the safety of the treatments is established first.

/Daniel

PS: I've had similar thoughts recently about a screening method for alcohol abuse. The screening method should damn well be biased toward giving some false positives rather than false negatives, since its main purpose is to reduce the number of cases sent on to better, more definitive, but also more costly analysis.

Hey, long-time reader, first-time commenter.

I would not really view the results of this meta-analysis as evidence that animal research itself is flawed; simply that the mindset of the researchers needs to be changed. If any anti-animal-experimentation kook cites this study as evidence of a supposed flaw in animal studies, the implication is that if publication bias were eliminated, these sorts of studies would have value and be worth doing.

Also, publication bias includes the lower publication probability of studies which do not show a statistically significant effect in either direction (positive or negative).

With stroke, there is huge variability in terms of site/severity/time elapsed before getting medical care. Then you're comparing it to animals subjected to a simulated stroke the same way each time.
On top of that, a vascular neurologist told me (a biochemist who didn't think about the problem much before) recently of an interesting mismatch between some animal models of stroke and the clinical situation:
Animal experiments are often done by blocking a cerebral artery in a healthy animal, and then looking at whether an intervention reduces infarct size.
But in humans, people who get strokes have had atherosclerosis for years, and they might first get a TIA (transient ischemic attack). The brain adapts to reduced blood flow, either by changing the permeability of the blood-brain barrier or by growing or enlarging "workaround" vessels.

Nicely written post.

The extremists will only seize on the "bias" part of the study title to further their own agenda that all animal research is bad. They aren't interested in reasoned discourse about its utility and importance. Extremists will point to negative studies as an example of the pointlessness of research, rather than acknowledging that negative studies are as informative as positive ones. Their audience is the public, not scientists. Trying to appeal to them on scientific grounds is an exercise in futility.

To echo David/#2, the flaw seems to be in the dissemination, categorization, and prioritization of research results, rather than the research itself. It's an information-age problem: we have so much information that making it meaningful has become a significant challenge.

Question: How many of the antis will even be able to follow Orac's explanation of the study, much less the study itself?

As has already been stated, they're just going to lock on the word "bias" and tune out everything else.

The extremists will only seize on the "bias" part of the study title to further their own agenda that all animal research is bad.

Oh, absolutely. Well said. In addition, it does not help when reporters at mainstream popular-science outlets sensationalize their reports with egregiously broad or ambiguous titles.

Case in point: The report on this same study in Nature News, which is titled "Animal studies paint misleading picture", a sweeping generalization with rather unfortunate connotations, and which, in all probability, will become a rallying point for the committed anti-animal-experimentation folks. More here.

One reason for part of the "bias" is that it's sometimes hard to distinguish whether a non-result is real or whether it's because the scientist is not technically skilled or knowledgeable enough to perform the experiment correctly. (Of course, that bias doesn't affect just animal studies- that would be true of sufficiently complicated in vitro work as well.)

A very interesting study, thanks for pointing it out.

Phoenixwoman - There are legitimate reasons to be concerned about some animal models in experimental protocols, as well as about the usefulness of the data produced. DrugMonkey recently wrote a post about researchers wanting additional animals to be used in their studies under the erroneous idea that pushing the p-value even lower than 0.05 would make their results appear stronger. I think something that everyone can agree on is that a reduction in the number of animals used in invasive procedures to achieve the same results can only be a good thing.

This analysis makes such big assumptions that it seems to beg the question, tbh.

I'm left to trust the peer-review process that the reasoning and statistics employed are correct. But even you confess that you don't understand the statistical analyses involved.

Not that it isn't perfectly reasonable to assume publication bias exists here at the same level as in other fields.
The medical trial field just has to adopt the registry approach as a whole.
Are there some posts on the pros and cons of such registries?

I wonder if the authors considered the bias of double-blinded studies...

I have a very good friend who spent about the last two years in a research/clinical neurology fellowship where she studied strokes in a mouse model. She's moved on to the NIH, where I suspect she'll continue to study strokes somehow. I can't tell you what exactly she was studying, but I can tell you that inducing strokes in mice is hard and that after two years all her results were either negative or inconclusive. What that means in terms of her work, I honestly don't know. However, I read this and immediately thought of her.

FWIW, she's smart, scary intelligent, and my go-to person for all things neurological and beyond. When she tells me she's having a hard time and that despite best efforts her results are coming up negative, I respect that.

EMJ: I wasn't criticizing the study, just pointing out that the people most inclined to wave it about like a banner will in all likelihood be the least prepared (short of actual illiterates) to understand what it's really saying.

I'm fascinated by the concept of metacompetence -- the basic knowledge or perceptive abilities one must have in order to make accurate assessments of one's competence in, or knowledge of, a given field or skill. There are a lot of fairly clever people out there who think that just because they know a lot about word-slinging or cookery or auto mechanics, they can view the entire world simply in terms of word-slinging or cookery or auto mechanics. (That's a key reason that so many creationists are engineers, for example. PZ can do chapter and verse on his run-ins with engineers whose lack of biological metacompetence led them to think they knew more about biology than he did.)

It makes you wonder: is there publication bias in studies about publication bias?

By Ginger Yellow (not verified) on 31 Mar 2010 #permalink

Can we do without "animal rights activists will certainly say x?" You're basically making up the opinions of other people so you can be angry about them.

Most of the animal rights supporters I know are actually biology students (although one is a physicist and one is in neuro-psych), and I'm sure none of them would have any difficulty reading the study.

Of course, none of them would condone the threats made by the animal rights activists you're referring to, either. I certainly wouldn't, and I wouldn't want to know anyone who would. The more that is trotted out to universally demonize anyone who is even vaguely pro-animal rights, the clearer it becomes that there is simply zero interest in the facts surrounding people's actual beliefs and intentions.

This is incredibly disappointing coming from a source who purports to be rational and to make fact-based claims. I tend to enjoy your other articles, and I honestly don't understand why you continue frothing at the mouth over strawmen when it comes to this subject.

By Chris Whitman (not verified) on 31 Mar 2010 #permalink

Aren't we still under the authority of the Nuremberg Protocol where the world was so horrified at the Japanese medical experiments that an international treaty was forged that said any drug going into a human must first be used on an animal?

Chris Whitman @ 17: For the record, here's what I typed: "EMJ: I wasn't criticizing the study, just pointing out that the people most inclined to wave it about like a banner will in all likelihood be the least prepared (short of actual illiterates) to understand what it's really saying."

Who are the people who would be most inclined to wave it about like a banner, focusing on the word 'bias' and nothing else?

Why, the same people who are still attacking Dario Ringach after their threats hounded him out of his research field.

The same people who think trashing research labs and "freeing" white mice are wonderful activities.

The same people who stalk and harass research workers nationwide.

Chris, I'd sure like to hope that your friends that you mention aren't like the "Negotiation is Over" or ALF crowd. Your friends would, I hope, know better than to release minks into the woods.

As for the NiOers and ALFers, I'd like to sit some of these people down with Dr. Temple Grandin sometime. She's literally written the book on humane handling of animals, and her recommendations have revolutionized the meatpacking industry. But they'd probably attack her, too.

Aren't we still under the authority of the Nuremberg Protocol where the world was so horrified at the Japanese medical experiments that an international treaty was forged that said any drug going into a human must first be used on an animal?

Last time I looked, we were. Though over the last decade we've not been too good about honoring the "not torturing humans" part of the protocol.

Nice analysis, Orac. As you say, these studies of bias do rest on a lot of assumptions. Still, I wouldn't be surprised if the numbers they come up with are not far off the real ones; they seem more or less in line with what has been found in other areas of science where such analysis has been attempted.

One thing that struck me about this paper, and even more so the accompanying review by van der Worp et al., is that they indicate that a lot of the differences observed between results obtained in pre-clinical/translational animal studies and human clinical trials are due to publication bias, poor experimental design, and poor clinical trial design rather than any fundamental problems with the animal models. I've expanded a little in a comment on the Nature news article.

http://www.nature.com/news/2010/100330/full/news.2010.158.html

No doubt AR activists will use these papers, and the horrible title of the Nature news piece, to argue that animals are not good experimental models, when in fact they indicate that in many cases animals are better models than they sometimes appear to be.

Frankly, these two reviews by Macleod's group should be a call to arms to improve collaboration and communication among the different parts of the biomedical research enterprise (funders, journals, scientists, clinicians) and to take all necessary measures to ensure that we get the best out of the animal models we need to use.

It makes you wonder: is there publication bias in studies about publication bias?

I believe some researchers recently conducted a meta-analysis that attempted to answer that exact question, but they never published it because the results were inconclusive. ;p

Chris Whitman: Who are you addressing, Orac or the commenters?

By Phoenix Woman (not verified) on 01 Apr 2010 #permalink

It makes you wonder: is there publication bias in studies about publication bias?

I vaguely recall Ben Goldacre mentioning this, but whether on the blog or in the book, I don't know.

The answer was "yes", though.

Thanks, Orac. I think I got a bit too angry about this over on Derek Lowe's blog. It still annoys me, though, that there is not a way to force results out in the open.