A big pain for biomedicine: anesthesiologist commits massive research fraud.

The headlines bring news of another scientist (this time a physician-scientist) caught committing fraud, rather than science. This story is of interest in part because of the scale of the deception -- not a paper or two, but perhaps dozens -- and in part because the scientist's area of research, the treatment of pain, strikes a nerve with many non-scientists whose medical treatment may have been (mis-)informed by the fraudulent results.

From Anesthesiology News:

Scott S. Reuben, MD, of Baystate Medical Center in Springfield, Mass., a pioneer in the area of multimodal analgesia, is said to have fabricated his results in at least 21, and perhaps many more, articles dating back to 1996. The confirmed articles were published in Anesthesiology, Anesthesia and Analgesia, the Journal of Clinical Anesthesia and other titles, which have retracted the papers or will soon do so, according to people familiar with the scandal (see list). The journals stressed that Dr. Reuben's co-authors on those papers have not been accused of wrongdoing.

In addition to allegedly falsifying data, Dr. Reuben seems to have committed publishing forgery. Evan Ekman, MD, an orthopedic surgeon in Columbia, S.C., said his name appeared as a co-author on at least two of the retracted papers, despite his having had no hand in the manuscripts. "My names were forgeries on the documents," Dr. Ekman told Anesthesiology News.

Fabrication, you'll recall, is making up data rather than actually collecting them. Science is an activity that attempts to build a body of reliable knowledge about the world -- which means that its claims need to be supported with actual data, not data from someone's imagination.

Apparently, Dr. Reuben compounded his lies about the data he collected (or in this case, didn't collect) by lying about which other scientists were responsible for the work he reported. In theory, all the authors of a scientific paper are putting up their credibility to warrant the credibility of the results reported in the paper. In the case of an "author" who was put on the paper without his knowledge, his name (and track record within his scientific community) may make the reported results look more credible -- but since he didn't decide to be an author (and maybe didn't even participate in the research), this added credibility is illusory.

I'd be interested to see, for each of Reuben's co-authors on the retracted papers:

  • the portion of the reported results for which the author feels personally comfortable vouching
  • whether the author was aware of being an author when the manuscript was prepared and submitted
  • whether the author actually exists

I'm guessing that if you're willing to make up data out of whole cloth, inventing co-authors isn't a tremendous leap of depravity.

The retractions came after an internal investigation by Baystate turned up evidence of widespread fraud in Dr. Reuben's research. Jane Albert, a spokeswoman for Baystate, said the inquiry was undertaken after an internal reviewer at the medical center had raised questions last year. Ms. Albert said the hospital's investigation raised "no allegations concerning any patient care. This was focused on academic integrity." ...

Rumors of a problem with Dr. Reuben's research have been circulating among academic anesthesiologists for a year, according to people familiar with the matter.

"Interestingly, when you look at Scott's output over the last 15 years, he never had a negative study," said one colleague, who spoke on the condition of anonymity. "In fact, they were all very robust results--where others had failed to show much difference. I just don't understand why anyone would do this or how anyone could pull this off for so long."

The titles of Reuben's papers suggest to me that he was conducting clinical (rather than laboratory) research. I'm guessing this makes "magic hands" an unlikely explanation for his uniquely good results -- in the lab, you might set up your apparatus and prepare your reagents better than anyone else, but you cannot control human patients with anything like that precision. Unless the design of his clinical trials was significantly different from those used by other researchers in the field, you'd expect that at least some of those other researchers would be able to find similar results. Indeed, since his published papers presumably laid out his experimental design, you'd expect that other researchers in the field would try to use that very design to find similar results.

But only Reuben seemed to be obtaining these robust results. Since we're talking about research that was used to inform treatment decisions -- where many different physicians were administering the treatment -- this should have set off alarm bells. It did set off alarm bells, of course, but the question is why it took so long. And I'm not sure there's a neat answer to the question of how long the scientific community ought to wait for one researcher's "robust result" to be replicated by others. This is one of those situations where scientists need to work out how to balance their excitement about potentially useful new results, and their default trust of the member of their professional community who produced the results, with their skepticism.

What's particularly surprising given the dimensions of the case, Ms. [Josephine] Johnston [an attorney specializing in research integrity at the Hastings Center] said, is that Dr. Reuben's research managed to raise no alarms among peer reviewers. However, she added, "the peer review system can only do so much. Trust is a major component of the academic world. It's backed up by the implication that your reputation will be destroyed if you violate that trust."

It seems safe to assume, at this point, that Dr. Reuben's reputation has been destroyed.

As for what peer review ought to have uncovered, we should recall that what actually occurs in peer review may not match what non-scientists imagine. Peer reviewers are not, in most cases, trying to replicate the experiments reported, nor are they likely to reread all the references cited. Rather, they're looking at the manuscript to see whether it makes a coherent scientific argument, whether the data included support that argument, whether the interpretation of the results makes sense against the background of other knowledge in the area, and whether there are any obvious mistakes.

Blogging about this case at Respectful Insolence, Orac also wonders how Reuben was able to make up data in so many papers for so many years:

Whenever I see an example of fraud like this, I wonder: How could he get away with, in essence, making it up for so long? Clinical trials are complex; they inevitably involve statisticians who analyze the data. Many journals these days will not even consider publishing the results of a clinical trial without a biostatistician listed among the authors.

You might expect that a problem with the statistical analysis of the data would be the kind of thing that would jump out at peer reviewers. However, that assumes that the folks reviewing manuscripts for anesthesiology journals are biostatisticians (rather than, say, anesthesiologists). I don't know if that's a good assumption. Also, I don't think we can rule out the possibility that each of these bad papers had a biostatistician listed among the authors. Whether the biostatistician listed among the authors knew that he or she was an author is another question.

So, how did Reuben's fabrication come to light? According to the New York Times:

Dr. Reuben's activities were spotted by Baystate after questions were raised about two study abstracts that he filed last spring, Ms. Albert said. The health system determined that he had not received approval to conduct human research, Ms. Albert said.

This, of course, raises another question: what was the status of those clinical trials whose "results" were reported in the twenty-odd retracted papers? Did these trials with human subjects not happen at all? Were they conducted as described, except that real data were not collected and/or reported from them? Given the logistics of conducting clinical trials, shouldn't there have been some earlier clues that something was not on the level in Reuben's research?

And, given that lack of IRB approval for two abstracts provided the smoking gun that helped expose Reuben's fraud, I have to wonder whether the earlier studies Reuben published had protocols submitted to, and approved by, Baystate Medical Center's IRB. I hope that IRB is now re-examining the level of oversight it brings to research conducted at Baystate Medical Center. Even though the Baystate Medical Center investigation of Reuben "was focused on academic integrity" and didn't raise "allegations concerning any patient care," it's reasonable to worry that the participants in Reuben's clinical trials (if such participants were not also fabricated) were not actually undertaking the risks they did in the service of the production of reliable knowledge about pain treatment. That constitutes a harm to them.

Where did Reuben get the funding to conduct his research? It may not surprise you to learn that a good portion of this funding came from a pharmaceutical company. Anesthesiology News reports:

A cornerstone of Dr. Reuben's approach has been the use of the selective cyclooxygenase-2 inhibitor celecoxib (Celebrex) and the neuropathic pain agent pregabalin (Lyrica), both manufactured by Pfizer. Dr. Reuben has received research grants from the company and is a member of its speakers' bureau. However, a source told Anesthesiology News that Pfizer recently alerted its speakers to remove any reference to Dr. Reuben's data from their presentations. Pfizer was unable to comment by the time this article went to press. The company has not been accused of wrongdoing in the matter.

The New York Times notes:

The drug giant Pfizer underwrote much of Dr. Reuben's research from 2002 to 2007. Many of his trials found that Celebrex and Lyrica, Pfizer drugs, were effective against postoperative pain.

"Independent clinical research advances disease treatments and improves the lives of patients," said Raymond F. Kerins Jr., a Pfizer spokesman. "As part of such research, we count on independent researchers to be truthful and motivated by a desire to advance care for patients. It is very disappointing to learn about Dr. Scott Reuben's alleged actions."

Drug companies routinely hire community physicians to conduct studies of already-approved medicines. In some cases, prosecutors have charged companies with underwriting studies of little scientific merit in hopes of persuading doctors to prescribe the medicines more often.

"When researchers are beholden to companies for much of their income, there is an incredible tendency to get results that are favorable to the company," said Dr. Jerome Kassirer, a former editor of The New England Journal of Medicine and the author of a book about conflicts of interest.

This is one of those situations that makes the slogan "trust, but verify" seem like really good advice.

Even without imputing evil intent, there is reason to believe pharmaceutical companies aren't interested solely in the truth about how their drugs perform in clinical trials. They want the compounds they have developed (and patented) to work, and to be purchased. This is how you stay in business. Pfizer is not a disinterested party here.

But there's a big difference (at least from a public relations point of view) between subtle bias -- the sort of thing you might plausibly dispute is even operating -- and a guy working on your funding who makes up results out of whole cloth to support the efficacy of your product. Pfizer wants as much distance from Reuben as possible, stat. I'm guessing that some people at Pfizer who help hire community physicians to conduct independent drug trials are probably re-examining the level of oversight -- or at least of vetting of the researchers -- they should bring to such arrangements in the future.

I'd be interested to know when the scrubbing of Reuben's data from Pfizer speakers' presentations was initiated. Was it prompted by the launch of the Baystate investigation? By the retraction of the papers by the journals at the conclusion of the investigation? Or did Pfizer notice something fishy and alert Baystate to a potential problem?

For all that the influence of pharmaceutical companies in research raises suspicions, I wonder, too, to what extent Reuben's results (now revealed to be fraudulent) set the direction for further drug development at Pfizer and its competitors.

Of course, the impact of Reuben's fraud is bound to be felt well beyond Pfizer's R&D department. The New York Times reports:

Dr. Steve Shafer, the editor in chief of Anesthesia & Analgesia, which published many of the papers, said he was considering withdrawing any study in which Dr. Reuben served a pivotal role.

"He was one of the most prolific investigators in the area of postoperative pain management," Dr. Shafer said. His fraud "sets back our knowledge in the field tremendously."

Anesthesiology News quotes Dr. Shafer further:

"We are left with a large hole in our understanding of this field. There are substantial tendrils from this body of work that reach throughout the discipline of postoperative pain management," Dr. Shafer said. "Those tendrils mean that almost every aspect will need to be carefully thought through. What do we still believe to be true? Do the conclusions hold up to scrutiny?"

Dr. Shafer said that although he still believes "philosophically" in multimodal analgesia, he can no longer be absolutely certain of its benefits without confirmation from future studies.

There is a reason, dear readers, that philosophers don't have the authority to prescribe drugs. Believing in something philosophically is well and good, but science-based medicine requires that beliefs on which treatment decisions are based be grounded in empirical evidence.

Now, those working in the field of anesthesiology and relying on the goodness of results reported in that field's body of scientific literature have a problem. Not only have more than 20 studies that they took as reliable been revealed to be untrustworthy, but there also seems to be good reason to regard the scientist who generated these studies as untrustworthy -- making his other publications in the literature suspect as well. (Depending on what kind of involvement one judges likely from Reuben's co-authors in his fraudulent papers, these co-authors might also be judged presumptively untrustworthy, leaving an even bigger hole in the reliable literature.)

This knowledge-hole is non-trivial. It is a problem for researchers in the field who trusted the work reported by Reuben and his co-authors and who may have spent time trying to build on it. It is a problem for health care providers who based their treatment of patients on the fraudulent results. It is a problem for those charged with teaching medical students about pain management. The fraudulent research may even have informed how hospitals, insurance companies, and regulatory agencies (like the DEA) conceived of proper pain management.

To recover from this fraud, the scientists in this field need to track down money for new research to generate reliable (rather than fraudulent) knowledge. The money spent to fund Reuben's original research is now long gone. Will federal funding agencies be able to provide these funds? Will drug companies? Or will we have to live with less knowledge about pain treatment than we thought we had?

Finally, the New York Times provides the obligatory quote from Reuben's attorney:

Paul Cirel, a lawyer for Dr. Reuben, said that he could not discuss the case because Baystate had investigated it as part of a confidential peer-review process. Baystate officials "were aware of extenuating circumstances," Mr. Cirel said.

I wonder what could count as an "extenuating circumstance" in a case like this, where you've fabricated the results of more than 20 publications. "My client was sick the day of the mandatory ethics lecture," perhaps?


> Since we're talking about research that was used to
> inform treatment decisions -- where many different
> physicians were administering the treatment -- this
> should have set off alarm bells. It did set off
> alarm bells, of course, but the question is why it
> took so long.

For the same reason Madoff got away with his Ponzi scheme for so long: the audit mechanism of peer review (as you note here) is intended to catch and prevent bad science, not outright fraud. Generally, the audit system assumes that people who get into the loop (medical doctors, former CEOs of stock markets, etc.) have passed basic trustworthiness checks, and what is in place is there to prevent error or self-delusionary types of fraud ("this data actually doesn't mean what you think it means").

When you think about it, this really *is* the proper role of peer review, for the simple reason that verifying every experimental result is not only expensive and time consuming, it's completely unnecessary for the staggeringly high percentage of science publications that *are* honest.

IRBs form another part of the overall audit mechanism for science practitioners, but again, there are holes in responsibility that are simply too costly to fill in.

From a security/audit analysis standpoint, the real problem is that we have research that is funded in whole or in part by agencies that have a vested interest in the outcome. I am *not*, by the way, blaming the agencies directly here, as again the vast majority of cases where this sort of thing breaks down are not Big Pharma Conspiracy moments, much as conspiracy theorists would like some people to believe. Instead, it's simply that the system of checks, balances, and audits isn't robust enough to cover possible malicious attackers.

There are three possible ways to fix this problem, two specific to the problem space and one more general. One is to have an oversight mechanism that is deliberately constructed to handle this sort of funded research. IRBs and peer review have enough on their plate, and they don't have the proper mindset to catch this sort of thing anyway (of course, you'll note, this still wouldn't solve the problem of researchers who falsify results for non-economic incentive reasons). The second is to disallow direct commercial institutional funding of research... if Pfizer wants to fund pharmaceutical research, let them dump funds into a general fund and let some oversight body disburse those funds to researchers. This has lots of advantages, but will probably cut down significantly on how much institutions are willing to contribute to research funding (and again, like the previous method, this won't and can't catch outright fraud).

The last method is to have an organizational structure that is dedicated to scientific policing, the same way a state's board of medicine polices its doctors. Of course, you can have problems with these too (there are many stories of possible corruption with medical or engineering licensing boards), but as a mechanism they work to reduce this sort of fraud because they can proactively audit researchers and revoke their license to practice, as it were, if a researcher is caught. Such an organization still wouldn't prevent fraud from happening outright, but a properly designed auditing scheme would certainly cut down on the length of time that someone could get away with it.

Of course, every scientist would loathe such an organization.

Since we actually run, analyse and publish clinical trials I'd like to add my two cents.

1. I have never heard of a journal requiring a biostatistician to be an author. It's a good idea, though. You certainly need somebody who knows what they are doing.

2. I imagine it would be substantially easier to make up the results of a clinical trial and get them published (particularly since, when he's making it up, it's always a positive finding) than it is to actually run a clinical trial and get it published.

3. There are ways for biostatisticians to catch made-up data, but you have to go looking for them (a toy sketch of one such check follows this comment). This type of checking is not part of the normal process for analysing a clinical trial.

4. Falsified data is not detectable at peer review unless the fraudulent bastards have made an obvious mistake.

By antipodean (not verified) on 12 Mar 2009 #permalink
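On antipodean's third point, here is a minimal sketch of what "going looking" for made-up data can involve. One classic screen is terminal-digit analysis: in genuinely measured data, the last digit of recorded values is often close to uniformly distributed, while hand-invented numbers tend to over-use certain digits. Everything below is illustrative -- the sample "scores" are invented, not from any real paper, and a flagged result is grounds for a closer look, not proof of fraud.

    # A toy terminal-digit screen (Python, assuming scipy is available).
    from collections import Counter
    from scipy.stats import chisquare

    def terminal_digit_test(values):
        """Chi-square test of whether the last digits of the recorded
        values are uniformly distributed over 0-9."""
        last_digits = [int(str(v).replace('.', '')[-1]) for v in values]
        counts = Counter(last_digits)
        observed = [counts.get(d, 0) for d in range(10)]
        return chisquare(observed)  # expects uniform counts by default

    # Hand-typed "measurements" tend to over-use round-looking digits:
    scores = [7.5, 6.5, 8.5, 7.0, 7.5, 6.0, 8.0, 7.5, 6.5, 7.0] * 5
    stat, p = terminal_digit_test(scores)
    print(f"chi-square = {stat:.1f}, p = {p:.4f}")  # tiny p: worth a closer look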

Janet -- this is a very long essay, but what is your point?

What do you think should be changed, so that there are no longer bad people in this world?

By Neuro-conservative (not verified) on 12 Mar 2009 #permalink

This might be a case where having repositories for raw data, available upon publication, would help -- at minimum, he would have had to make up a lot of data to get to the point where he could fool people, and someone who was suspicious earlier could potentially have figured out what was going on.

The problem isn't that people do bad things, because you can't stop them from doing harmful things without stopping them from doing anything (meaningful). Part of the job of institutions and groups of scientists is to minimize the effect of people committing mischief -- if someone falsifying research can essentially destroy a field, it might be a good idea for practitioners of that field to decide how to prevent it from happening (or, rather, to prevent it from carrying on long enough to destroy the field). Having available data that can more easily be checked might be one such way. It seems like a good idea to ask how one might minimize damage from fraudulent research, unless you believe science to be an expendable endeavor.

Of course, doing research that no one is likely to replicate, or to be able to replicate, might in itself be problematic -- or, at minimum, depending on such research for anything important might be. (Total synthesis of large natural products might be one such area: while one could in theory redo a synthesis, the money it would take is unlikely to come from any known funding agency, and the intermediates are likely to be inaccessible without many-step syntheses of their own -- thus the yields and data on intermediates are effectively unverifiable.) If no one can check what anyone does, it becomes possible (and eventually likely) that someone will commit fraud in that area, and the knowledge in that field becomes much more questionable than it seems.

By Robert Bird (not verified) on 13 Mar 2009 #permalink

As someone who had major abdominal surgery last Saturday and is currently experiencing awful postoperative pain...it just makes me wonder whether this man's fraud is causing me literal pain (to go along with the metaphorical pain of reading about scientific fraud).

Like Pogo said, the enemy is us.

There is no random audit procedure by universities, drug companies or journals.

This is hardly the worst episode. See Darsee, whose co-authors included Eugene Braunwald.

The solution is trivial. Journals should declare that they will randomly audit one paper per year. It works for the IRS.

editor IJOEH.com

By David Egilman MD (not verified) on 14 Mar 2009 #permalink
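For what it's worth, the arithmetic behind Dr. Egilman's random-audit proposal is easy to sketch. If each published paper independently faces probability p of being audited, a fabricator of k papers escapes every audit with probability (1 - p)^k, so even modest audit rates compound quickly over a publication record. The rates below are hypothetical; 21 matches the number of confirmed retractions mentioned above.

    # Back-of-the-envelope odds for the random-audit proposal (Python).
    k = 21  # roughly the number of confirmed Reuben retractions
    for p in (0.01, 0.05, 0.10):  # hypothetical per-paper audit rates
        escape = (1 - p) ** k  # probability of never being audited
        print(f"audit rate {p:.0%}: chance of escaping every audit = {escape:.0%}")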

You might expect that a problem with the statistical analysis of the data would be the kind of thing that would jump out at peer reviewers. However, that assumes that the folks reviewing manuscripts for anesthesiology journals are biostatisticians (rather than, say, anesthesiologists). I don't know if that's a good assumption

It's not a good assumption for many specialist clinical journals. They will often only get statistical review if the analysis is non-standard or obviously wrong. Also, the statistical review may not be from a card-carrying statistician, though this is more likely to cause false rejection than false acceptance.

Also, there's no reason why imaginary data is harder to analyze than real data. All you need to do is to look at a successful paper and make up variations on the numbers. If you are smart, you'll use a random number generator so as to avoid creating patterns.

I picked one of the papers from PubMed to look at the statistics. There's nothing suspicious about it as far as I can see. The statistical analysis is a bit 1980s, but that happens a lot. I would have asked for more detail on the p-value computations (the paper gave counts for subtypes of problems but didn't say how these were combined), and probably asked for confidence intervals. It wouldn't have made me suspect fraud, though.

-thomas
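To make thomas's point about random number generators concrete, here is a sketch of why a careful fabricator is so hard to catch: numbers drawn from an RNG carry none of the digit-preference fingerprints that screens like the one sketched above look for. The mean and standard deviation below are invented for illustration, not taken from any Reuben paper.

    # Fabricated-by-RNG "pain scores" defeat simple digit screens
    # (Python, assuming numpy and scipy are available).
    from collections import Counter
    from scipy.stats import chisquare
    import numpy as np

    rng = np.random.default_rng(0)
    # "Variations on the numbers": plausible summary statistics,
    # freshly sampled noise underneath.
    scores = np.round(rng.normal(loc=6.8, scale=1.2, size=200), 1)

    last_digits = [int(str(v).replace('.', '')[-1]) for v in scores]
    observed = [Counter(last_digits).get(d, 0) for d in range(10)]
    stat, p = chisquare(observed)
    print(f"p = {p:.2f}")  # typically unremarkable: the screen finds nothing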