In the current issue of The Scientist, there’s a pair of interesting pieces about how professional life goes on (or doesn’t) for scientists found guilty of misconduct by the U.S. Office of Research Integrity (ORI).
Alison McCook’s article, Life After Fraud, includes interviews with three scientists against whom the ORI has made formal rulings of misconduct. A big concern voiced by each of these scientists is that after the period of their debarment from eligibility to receive federal grants or to serve on a Public Health Service (PHS) committee has expired, the traces of their punishment persist online. McCook writes:
Each year, the U.S. Office of Research Integrity (ORI) investigates dozens of charges of scientific misconduct. And each year, the ORI adds a handful of names to a list of researchers found guilty of falsifying figures, fabricating data, or committing other academic infractions. As of April 1, 2009, this Administrative Actions list, presented on the ORI Web site, carried 38 names. These people are barred from receiving federal funds and/or serving on a Public Health Service committee, typically for a period of 3-5 years. Once the debarment term is up, the name disappears from the list. In theory, the punishment–and the shame–of the ordeal is over.
However, any time the ORI makes a formal ruling of misconduct, that information ends up on the Internet. The ORI’s newsletter and annual reports, which used to be hard copies sent to federally funded schools, are now all electronic. The NIH Guide, a weekly report that lists findings of misconduct to help grant reviewers flag scientists who apply for federal funds before their exclusion period is over, is online, too. And the Federal Register, the official publication of every federal agency, is available as a daily email digest. So Google anyone’s name who has ever been penalized by the ORI, and even if their debarment was lifted more than a decade ago, even if they signed a document stating they accepted the ORI’s decision well before the Internet became such a staple of daily life, the description of the finding against them–and the penalty they received–will pop up. In some cases, it’s the first article that appears.
Not surprisingly, each of the three scientists interviewed for the article uses an assumed name, so as not to give more Google-juice to past ignominy.
Biologist “Daniel Page” was charged with plagiarism, falsification of qualifications, and breach of confidentiality:
Page was researching a steroid hormone that appeared to improve the immune response to viral and bacterial infections. A startup company, which we’ll call Vaxeen (the real company did not respond to requests for comment), approached him to see if he could provide a wet research lab to do pharmaceutical work in animals, to test the hormone’s ability to boost vaccine efficacy. Page agreed, and signed a contract that guaranteed him $280,000 over 2 years in funding, with the promise of more. Page decided he wanted to do a broader project, so he began writing an NIH grant with a Vaxeen scientist (whom he declines to name). This scientist provided Page with some preliminary data from the company, which Page added to the application without attribution, since the Vaxeen scientist would be represented in the list of coauthors.
Just days before Page planned to submit the grant, however, the Vaxeen scientist told him that the company did not want his name on the application. Page took him off the author list, but forgot to remove the company’s results from the preliminary data section. “Absolutely, if I had thought of it, I would have put [that data] in the background of the grant, and attributed it to the company,” he says. “Rarely a week goes by where I don’t think about this.”
Over time, Page pieced together what happened next. A reviewer of his grant who had also worked with Vaxeen (and declined to be named in this story) recognized the data, recused himself from the review, then likely contacted the company (the reviewer can’t recall if he contacted Vaxeen or not). However the company found out, it then asked Page for a copy of his written permission to use the data. Page said he thought he had received permission from his collaborator, without needing a written agreement. “Absolutely, I was caught,” he says. OSU investigated–going through his files, computer hard drive, and all communications with the company–and concluded that Page “had committed scientific misconduct under federal and university guidelines,” according to a university statement issued at the time.
It’s worth noting that, if Page’s reconstruction is correct, he was not the only one who breached confidentiality (by using proprietary data from Vaxeen in his grant application) — the reviewer who communicated with Vaxeen about his grant application also breached the confidentiality of the review process.
That aside, this case looks like one of sloppy mistakes rather than intentional evildoing. But as Page himself acknowledges, the mistakes were significant ones. Presenting preliminary data that was not his to present, and failing to acknowledge the source of that data (including the scientist or scientists responsible for generating it), could be expected to give reviewers the impression that Page himself had generated the data. It was probably also a bad idea not to have a clear agreement with Vaxeen, as well as with the particular Vaxeen scientist who was initially involved in the grant application, about the parameters of their collaboration.
To their credit, Page’s colleagues at Ohio State University encouraged him to move forward in the aftermath of the ORI judgment against him. Since then, he has received tenure, published many articles, and had his work widely cited.
Researcher “Gerry Levick” was found to have misrepresented his qualifications and expertise on a grant application:
The charges stemmed from the wording Levick used on an NIH training grant application he submitted in 1988, when he was 39. He no longer has the original application, and sometimes struggles to remember exactly what he wrote. First, the agency alleges he claimed he had an MD degree from the University of Manchester–Levick admits that he wrote that on his application, but his real degree was an MBChB, a Bachelor of Medicine and Bachelor of Surgery that, in the United Kingdom, represents a combined undergraduate and graduate degree that serves as the initial step students take who want to become doctors. (Levick eventually obtained an MD from a university in Sri Lanka.) He says he wanted to simplify the process since this type of degree doesn’t exist in the United States. Second, Levick said he was based at Harvard Medical School (HMS), when his real affiliation was, according to Levick, “the Child Study Unit” at Children’s Hospital Boston, a teaching hospital of HMS. His funding, he says, came from the Research Foundation of Harvard University. When describing his role at Harvard, “I think I said that I was, uh, associated. I think the word was associated, or a fellow. I actually don’t remember.” (A Harvard Medical School spokesperson confessed that there are many groups associated with Harvard, but he had never heard of the Research Foundation.) The final charge is that he falsely claimed to have 13 patents–Levick says he wrote 13 patents “and technologies,” representing new tools modeled on older inventions.
“They were looking to see if the t’s were crossed, the i’s were dotted. And admittedly sometimes they weren’t,” he says. “Maybe I wasn’t careful enough, maybe I was. But the sum and substance of this stuff has no merit.”
Levick is steamed because, in his judgment, the investigation of his old grant applications stemmed from animosity between himself and another member of the board of trustees at New York Chiropractic College (NYCC). His enemy, Levick suggests, was less interested in matters of scientific integrity than in finding some weapon to undercut Levick’s power on the board.
Academe is as good a place as any to make enemies. It strikes me that this is a fine reason to be scrupulously honest rather than sloppy — you don’t want to leave ammunition lying around.
Levick also views himself as never having admitted guilt in the case against him. Yet he accepted a deal from the ORI:
He contacted the ORI. “They said ‘well, we can make you an offer.’” If he signed a voluntary exclusion agreement, he would forego federal funds for 3 years, ending in 1997. “And I said, ‘and that’s the end of that?’ They said ‘yeah.’ I did ask them whether this would appear anyplace. And they said ‘no.’ And I said ‘okay.’” …
Once he realized that his misconduct was on the Web for all to see, he wrote to then-head of the NIH, Elias Zerhouni, asking if he could take down the information, considering that he was “unemployable as a result.” One month later, Levick received an email from a representative from the NIH’s Office of the Director. It simply said:
You recently contacted Dr. Zerhouni via e-mail concerning the voluntary exclusion you signed ten years ago. Your concern relates to access to this information on the web. I assume that you refer to the citation in the [date omitted] NIH Guide for Grants and Contracts. If so, please understand that this is a publication and therefore is not subject to redacting. In addition, it is clear from the announcement that the exclusion was for a term of three years and is no longer in effect.
I do not have the text of Levick’s voluntary exclusion agreement in front of me, but I’m willing to bet it contained something that read an awful lot like an admission that he had misrepresented his credentials in that grant proposal. This is generally the nature of a deal: you get something, but you also have to give something.
Moreover, there is a significant difference between reaching the end of one’s period of exclusion from eligibility for federal grants and having the fact of that (expired) exclusion hidden from the rest of the scientific community. More on this in a bit.
Finally, there is the case of “John Franklin,” whose ORI judgment popped up when a date Googled him:
Had he known that the details would be so permanently fixed on the Internet, however, he says he never would have signed the document accepting the ORI’s ruling of misconduct. But it was the mid-1990s, before the Web became such a fixture itself. …
The problems began for him while he was an associate professor at Harvard Medical School, working on a technology to diagnose cancer from blood plasma. He found that nuclear magnetic resonance (NMR) scans of blood lipids appeared to spot tumors before X-rays, and months or years before people showed clinical signs. Franklin published his findings in the New England Journal of Medicine; however, soon after, the journal published research by independent groups that were unable to confirm his results.
Franklin says that elevated levels of blood fat or improper handling or preparation of samples can influence NMR scans and lead to false results. Franklin was consistently able to make the diagnostic work, and he denies ever fudging any data along the way to improve its performance. However, a company that licensed the technology based on Franklin’s initial data failed to show in its own research that the technique worked. …
After the negative results, the company had to fold. It sued Franklin and Beth Israel Hospital, where Franklin was working; a sheriff came to Franklin’s house on a weekend to serve him papers; and the school launched an investigation, consisting of two informal 1-hour meetings with a committee of administrators and professors.
Before HMS could make a ruling, Franklin left voluntarily. He had lost his funding from the company when it went under, and “I figured, with a heart attack and everything, I needed a change of life.” HMS’s official statement noted that the school investigated the allegations but Franklin resigned “prior to the completion of the institutional proceedings.” The NEJM never corrected or retracted Franklin’s paper, nor issued an expression of concern about it.
Eventually, Franklin received a letter from the ORI saying it was conducting its own hearing about his NMR data, inviting him to attend and defend himself. Franklin says he could not bring the data with him, though, as Beth Israel was holding onto it due to the ongoing lawsuit with the company, so he didn’t attend. Then, when he received a letter about the agency’s concerns related to a grant application about NMR, “I wasn’t surprised.” Even though he says he did nothing wrong, he signed the letter, essentially accepting the agency’s ruling and penalty. “I did believe then that the public record would be expunged in three years,” he says. “I probably wouldn’t have signed if I had known it would come up on the Internet 10 years later.”
We’ve discussed how difficult it can be to replicate experimental results even when all the scientists involved are scrupulously honest. In light of these challenges, proving that a scientist falsified experimental results — or making an airtight case that you did not falsify your results — can be brutally hard. It cannot have helped that Franklin did not have access to the data that might have helped him defend himself.
However, once again, if you are committed to your claim that you are innocent of wrongdoing, it is a really bad idea to sign a piece of paper admitting to that wrongdoing. Signing a paper that says you did X if you did not do X is a lie. Even if you are party to that lie in order to end an investigation and walk away with what you’re betting is a milder punishment, it’s still a lie. Taking the deal and then later swearing up and down, “But I didn’t really do it!” marks you as someone willing to set the truth aside when it is convenient to do so.
That’s not a quality scientists tend to admire.
In light of the fact that the record of an ORI debarment exists (and can be turned up with a Google search) long after the debarment period has ended, Richard Gallagher argues that ORI findings of scientific misconduct should be omitted from archived publications online. He writes:
For the sake of fairness, these sentences must be implemented precisely as intended. This means that at the end of the exclusion period, researchers should be able to participate again as full members of the scientific community. But they can’t.
Misconduct findings against a researcher appear on the Web–indeed, in multiple places on the Web. And the omnipresence of the Web search means that reprimands are being dragged up again and again and again. However minor the misdemeanor, the researcher’s reputation is permanently tarnished, and his or her career is invariably ruined, just as surely as if the punishment were a lifetime ban.
Both the NIH Guide and The Federal Register publish findings of scientific misconduct, and are archived online. As long as this continues, the problem will persist. The director of the division of investigative oversight at ORI has stated his regret at the “collateral damage” caused by the policy (see page 32). But this is not collateral damage; it is a serious miscarriage of justice against researchers and a stain on the integrity of the system, and therefore of science.
It reminds me of the system present in US prisons, in which even after “serving their time,” prisoners will still have trouble finding work because of their criminal records. But is it fair to compare felons to scientists who have, for instance, fudged their affiliations on a grant application when they were young and naïve?
Here, of course, we encounter the tension between the rights of the individual scientist and the rights of the scientific community. An oversight or mistake in judgment that may strike the individual scientist making it as no big deal (at least at the time) can have significant consequences for the scientific community in terms of time wasted (e.g., trying to reproduce reported results) and damaged trust.
The damaged trust is not a minor thing. Given that the scientific knowledge-building enterprise relies on conditions where scientists can trust their fellow scientists to make honest reports (whether in the literature, in grant proposals, or in less formal scientific communications), discovering a fellow scientist whose relationship with the truth is more casual is a very big deal. Flagging liars is like tagging a faulty measuring device. It doesn’t mean you throw them out, but you do need to go to some lengths to reestablish their reliability.
To the extent that an individual scientist is committed to the shared project of building a reliable body of scientific knowledge, he or she ought to understand that after a breach, one is not entitled to a full restoration of the community’s trust. Rather, that trust must be earned back. One step in earning back trust is to acknowledge the harm the community suffered (or at least risked) from the dishonesty. Admitting that you blew it, that you are sorry, and that others have a right to be upset about it is a necessary preliminary to making a credible claim that you won’t make the same mistake again.
On the other hand, protesting that your screw-ups really weren’t important, or that your enemies have blown them out of proportion, might be an indication that you still don’t really get why your scientific colleagues are unhappy about your behavior. In such a circumstance, although you may have regained your eligibility to receive federal grant money, you may still have some work left to do to demonstrate that you are a trustworthy member of the scientific community.
It’s true that scientific training seems to go on forever, but that shouldn’t mean that early career scientists are infantilized. They are, by and large, legal adults, and they ought to be striving to make decisions as adults — which means considering the potential effects of their actions and accepting the consequences of them. I’m disinclined, therefore, to view ORI judgments of scientific misconduct as akin to juvenile criminal records that are truly expunged to reflect the transient nature of the youthful offender’s transgressions. Scientists ought to have better judgment than fifteen-year-olds. Occasionally they don’t. If they want to stay a part of the scientific community that their bad choices may have harmed, they have to be prepared to make real restitution. This may include having to meet a higher burden of proof to make up for having misled one’s fellow scientists at some earlier point in time. It may be a pain, but it’s not impossible.
Indeed, I’m inclined to think that early career lapses in judgment ought not to be buried precisely because public knowledge of the problem gives the scientific community some responsibility for providing guidance to the promising young scientist who messed up. Acknowledging your mistakes sets up a context in which it may be easier to ask other folks for help in avoiding similar mistakes in the future. (Ideally, scientists would be able to ask each other for such advice as a matter of course, but there are plenty of instances where it feels like asking a question would be exposing a weakness — something that can feel very dangerous, especially to an early career scientist.)
Besides, there’s a practical difficulty in burying the pixel trail of a scientist’s misconduct. It’s almost always the case that other members of the scientific community are involved in alleging, detecting, investigating, or adjudicating. They know something is up. Keeping the official findings secret leaves the other concerned members of the scientific community hanging, unsure whether the ORI has done anything about the allegations (which can breed suspicion that scientists are getting away with misconduct left and right). It can also make the rumor mill seem preferable to a total lack of information on scientific colleagues prone to dishonesty toward other scientists.
Given the amount of information available online, it’s unlikely that scientists who have been caught in misconduct can fly completely under the radar. But even before the internet, there was no guarantee such a secret would stay secret. Searchable online information imposes a certain level of transparency. But if this is transparency following upon actions that deceived one’s scientific community, it might be the start of effective remediation. Admitting that you have broken trust may be the first real step in earning that trust back.