Sniffing out bias in a sea of industry research funding.

One arena in which members of the public seem to understand their interest in good and unbiased scientific research is drug testing. Yet a significant portion of the research on new drugs and their use in treating patients is funded by drug manufacturers -- parties that have an interest in more than just generating objective answers to scientific questions. Given how much money goes to fund scientific research in which the public has a profound interest, how can we tell which reports of scientific research findings are biased?

This is the question taken up by Bruce M. Psaty in a Commentary in the Journal of the American Medical Association [1]. Our first inclination in distinguishing biased reports from unbiased ones might be to look at the magnitude of the goodies one is getting from one's private funders. But Psaty draws on his own experience to suggest that bias is a more complicated phenomenon.

Early in his academic medical career, Psaty conducted research on β-blockers that caught the eye of the pharmaceutical industry. Psaty had no desire to be a well-compensated speaker for a corporation, but he agreed to help produce a set of slides communicating the state of scientific knowledge on β-blockers.

At a meeting set up by a communications company to produce a slide set, I participated with representatives from the manufacturer and senior scientists whose work I knew well. Over lunch, we chatted about interests, projects, and families. The preliminary outline for the slide set contained a number of traditional topics, such as the effects of β-blockers on blood pressure or anginal symptoms. As we developed content, I soon found myself advocating the use of studies that featured the manufacturer's product as the best illustrations. My experiences at a pleasant luncheon and in the scientific discussions made me feel as if the other consultants and I had a kind of social duty to reciprocate both the kindness and the investment made by the sponsor in the slide set. Accordingly, I spoke out about the importance of using some of the sponsor's studies as examples. At the time, I failed to recognize that this sense of duty might be in conflict with an intention to create an unbiased presentation about the risks and benefits of β-blockers.

It turns out that I am not alone. In a study of medical residents, 61% were confident that drug company promotions did not influence their practice, but only 16% were equally confident that their colleagues were not influenced by those same drug company promotions. (1477)

This is a reminder of why it takes a community to make scientific knowledge: we have an easier time being on guard against the biases of others than against our own biases. Moreover, Psaty's experience of being biased by a pleasant lunch meeting demonstrates that it does not require buckets of money to cloud a scientist's objectivity:

The frequently expressed view that industry gifts or consulting fees are too small to influence behavior simply misses the point that, regardless of their size, they influence behavior, and a self-serving bias distorts the way that individuals perceive themselves. (1477)

We're invested not only in minimizing our hassles, but also in seeing ourselves as good guys. The problem is that being good guys to everyone might involve taking on duties that pull us in opposite directions. Here, honoring duties to the scientific community, the medical community, and the public pulls in the direction of producing an objective study, scientific paper, or slide show, while honoring duties to the sponsor pulls in the direction of reciprocating kindness and helping the sponsor get a good return on its investment.

Psaty's own experience suggests that industry money, even in small amounts, can introduce bias. But must it? It's hard to know for sure.

It is not possible to look at a disclosure about industry funding of research, consulting, or speaking and know how to interpret the disclosure or its potential effect on a published clinical trial. Indeed, a funding disclosure does not necessarily mean that any bias is present. Even if the disclosures included exact dollar amounts, the interpretation would remain difficult. (1478)

Psaty is not arguing against the disclosure of financial interests. After all, having these disclosures (e.g., included in journal articles) may help remind scientists, doctors, and the public to be critical consumers of the information presented. However, since the mere fact of a financial interest does not tell us whether a paper is biased, Psaty argues that it would be better if consumers of scientific information had a set of strategies they could use to detect actual bias, rather than just the potential for bias. He then points to the sorts of details within scientific papers that may provide a good measure of objectivity:

The CONSORT recommendations for reporting the results of clinical trials provide an excellent checklist that includes key elements of the aims, methods, results, and discussion. For instance, the methods should address approach to randomization, allocation concealment, and blinding. The results should include information on recruitment, baseline data, and loss to follow-up as well as the outcomes and adverse events. (1478)
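Before looking at how bias can creep into the framing of questions, it may help to see how such a checklist could be operationalized. Here is a minimal sketch in Python, assuming a hypothetical trial report represented as a dictionary; the item names paraphrase the CONSORT categories Psaty mentions, and the specific field values are invented for illustration:

    # Minimal sketch: screen a trial report against CONSORT-style items.
    # Item names paraphrase the categories named in Psaty's commentary;
    # the report dictionary below is a hypothetical illustration.

    CONSORT_ITEMS = [
        "randomization_method",    # how participants were assigned to arms
        "allocation_concealment",  # whether assignment was hidden at enrollment
        "blinding",                # who was blinded (participants, clinicians, assessors)
        "recruitment",             # how and when participants were recruited
        "baseline_data",           # characteristics of each group at the start
        "loss_to_follow_up",       # dropouts, reported by arm
        "outcomes",                # prespecified primary and secondary outcomes
        "adverse_events",          # harms and side effects observed
    ]

    def missing_items(report: dict) -> list[str]:
        """Return the checklist items a report fails to address."""
        return [item for item in CONSORT_ITEMS if not report.get(item)]

    # A hypothetical report that describes its methods but omits harms data:
    report = {
        "randomization_method": "computer-generated block randomization",
        "allocation_concealment": "sealed opaque envelopes",
        "blinding": "double-blind",
        "recruitment": "12 outpatient clinics, 2006-2008",
        "baseline_data": "Table 1",
        "loss_to_follow_up": "8% (placebo) vs. 14% (treatment)",
        "outcomes": "primary: systolic blood pressure at 12 weeks",
        "adverse_events": "",  # not reported -- a gap worth flagging
    }

    print(missing_items(report))  # ['adverse_events']

An item that comes back empty is not proof of bias, of course; it simply flags a place where the reader lacks the information needed to assess the findings for themselves.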

Good experimental design is a crucial ingredient in generating meaningful scientific findings, so clear information on the experimental design makes it easier for the reader to assess the findings. But bias may display itself not just in what is and isn't reported, but in how the scientific questions under study are framed:

[W]hat is the quality and the scientific merit of the hypothesis addressed by an industry-funded trial? Answers to this question, which require an understanding of the current state of the science are important because, with several notable exceptions, the various institutes of the National Institutes of Health have largely turned the evaluation of drug treatments over to industry. ... On occasion, marketing interests have shaped or dominated short-term decision-making processes. Power resides in the ability not only to pose particular questions and shape trial designs but also to obtain results and disseminate findings. ...

Do the selected hypothesis and its associated outcome address an important public health question? What comparison group was selected for study? If the trial has an active-treatment comparison group, were the control agent and dose appropriate? (1478)

What kind of difference does the precise question being studied make to the results? It might make the difference between results indicating that a new drug is safe and effective compared to a placebo and results indicating that the new drug is safe and effective compared to other drugs on the market -- including those that have gone off patent. The latter comparison may be the more relevant one from the point of view of patients and health care providers.
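To make the arithmetic concrete, here is a small sketch with invented event rates (all numbers hypothetical) showing how the choice of comparison group can change the story:

    # Toy numbers, invented for illustration: 10-year risk of death from
    # heart attack under three conditions.
    placebo_risk = 0.10    # 10% with no treatment
    new_drug_risk = 0.075  # 7.5% with the new drug
    old_drug_risk = 0.075  # 7.5% with an off-patent competitor

    def relative_risk_reduction(control: float, treatment: float) -> float:
        """Fractional reduction in risk relative to the control group."""
        return (control - treatment) / control

    # Against placebo, the new drug looks impressive:
    print(relative_risk_reduction(placebo_risk, new_drug_risk))  # 0.25, i.e. 25%

    # Against the existing off-patent drug, it offers no advantage at all:
    print(relative_risk_reduction(old_drug_risk, new_drug_risk))  # 0.0

The same drug yields a headline-worthy 25% relative risk reduction or a null result, depending entirely on which question the trial was designed to ask.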

Beyond the setup of the study and the completeness of the results (including the information needed to judge their statistical power), Psaty argues that the discussion section of a research report also warrants scrutiny:

What is the fit between the results and the discussion points? How do the investigators weigh the balance of risks and benefits? ...

In an effort to dismiss safety findings, do the investigators treat adverse effects and complications with the same intense skepticism that is usually reserved for findings of efficacy? (1478)

The discussion, in other words, ought to engage with the results actually reported, whether or not those results lived up to the researchers' expectations. Scientists are supposed to be cheerleaders for the facts as they have been established (through careful scientific labor, with an assist from the scientific community in judging the credibility of the results and their interpretation). They are not supposed to be cheerleaders for their corporate sponsors or for the outcomes those sponsors hoped the research would produce.

Could Psaty's proposals work? He makes a persuasive case that the current conditions under which biomedical research is funded mean that many, many researchers have financial interests to disclose. Setting aside as questionable all the research findings generated by researchers who have received any funding from drug companies or other corporate sponsors is impractical. But assuming that all well-intentioned researchers produce work that is free from biasing influence from such sponsorship is naïve.

So Psaty's suggestion that the papers themselves be examined for signs of bias seems reasonable. But this kind of evaluation requires effort. It is harder than looking at the disclosures of financial interests listed at the end of a paper and toting up the dollar values of those interests.

Moreover, such thoroughgoing scrutiny is not foolproof. For one thing, Psaty's criteria for sniffing out bias assume that the paper under evaluation reports accurate results, rather than results that were fabricated, falsified, or otherwise consciously intended to mislead. A skilled cheat who is making up results can also make up experimental conditions that ensure that the interpretation of the fake results is fair and balanced.

In theory, at least, scientists want to become aware of their own biases so that these do not interfere with the project of building a body of reliable knowledge about the world. And given the extent to which scientists rest their own knowledge-building activities on the reports of other scientists, they have an interest in identifying which of these reports are fairly free of bias and which are not.

The practical question becomes what Psaty's strategy for sorting the reliable reports from the biased ones does to the amount of time it takes for scientists to keep up with the literature. I have to wonder how corporate sponsors of biomedical research would regard grant proposals allocating more of the researchers' time for literature searches and careful evaluation of the literature.
______
[1] Bruce M. Psaty, "Conflict of Interest, Disclosure, and Trial Reports," Journal of the American Medical Association (April 8, 2009), Vol. 301, No. 14, 1477-1479.


Of course, the "strategies" for detecting bias are the same strategies one should apply whether there is industry sponsorship or not. Everybody is liable to bias.

If I read a paper that tells me (for instance) that treatment X reduces the risk of death from heart attack in the next 10 years by 25%, I sort of know how to interpret that. If, at the end of the paper, I see that industry funding was involved, how should I revise my interpretation? That the reduction is only 15%? 0%? Unknown? I don't know what to think. I think disclosure is overrated, since we don't know how to weight it in our interpretation of a study's findings.

All the industry-funded science I have experience with is similarly afflicted with bias. Many in the upper echelon of management treat science like a vending machine: money in, good results out.

This is another example of the problem of audit: you want to assume, for the *most part*, that the work you're auditing is legitimate, or the audit process becomes too resource-intensive. You can verify some studies by doing them again, but unless you're going to create a new branch of scientists whose sole responsibility is verification of existing results, you're going to be hard-pressed to have the resources to do it.

And, for the most part, *you don't want to*... the overwhelming majority of the scientific literature is legitimate, and the community works just fine as a self-policing structure; why spend money trying to verify results that you already reasonably expect are legitimate when you could spend that money coming up with new, interesting, valuable observations?

The trick is to have varying types of audit mechanisms and impose them randomly on studies. When peer review works well, this is actually what happens - one reviewer is going to ask for some sort of clarification that another may not, and as the scientist making the submission you can never be sure who is going to be reviewing your paper, or what sorts of questions they may ask.

IMO, the biggest problem with science fraud (or bias) isn't inside the scientific community, but outside it. People who aren't scientists don't understand how the system of checks and balances works; there are no public records that show how well it works, and the public is only aware of the gross exceptions. If you hear of a massive fraud in a study once a year, and you don't know how many legitimate studies are vetted in a year (or how many illegitimate studies are caught by the existing audit processes), you lose a disproportionately large amount of confidence in the system with each fraud incident.

Just like airline travel: everyone remembers the crashes and how many people were killed; nobody really thinks about the millions of man-hours of safe flight.

The hallmark of scientific veracity is replication. A study by Ioannidis in JAMA in 2005 found that 80% of claims from RCTs replicate when tested again; by contrast, only 20% of claims coming from observational studies replicate. Replication for industry RCTs is likely higher still, as two studies are typically required to clear the FDA. Also, to get a drug approved, industry data and methods are sent to the FDA and checked/re-run. Many claims coming from observational studies have now been tested in RCTs; my informal count is that only 1 in 30 such claims replicated.