Why do scientists lie? (More reminiscing about Luk Van Parijs.)

Yesterday, I recalled MIT's dismissal of one of its biology professors for fabrication and falsification, both "high crimes" in the world of science. Getting caught doing these is Very Bad for a scientist -- which makes the story of Luk Van Parijs all the more puzzling.

As the story unfolded a year ago, the details of the investigation suggested that at least some of Van Parijs's lies may have been about details that didn't matter so much -- which means he was taking a very big risk for very little return. Here's what I wrote then:

The conduct of fired MIT biology professor Luk Van Parijs, as reconstructed in the investigation of his work of the last eight years or so, gets curiouser and curiouser. From the October 29th Boston Globe, a follow-up story by Bombardieri and Cook tells us that problems have surfaced not only with the research Van Parijs did at MIT, but also with papers he authored about research he did at Brigham and Women's Hospital while a graduate student. But the twist here is that it's not entirely clear how his fraud in these cases would have helped him. From the Globe article:

The new revelation deepens the mystery about a rising star who was popular with students and colleagues and appeared to be a gifted biologist. In both of the new cases, it appears that Van Parijs said he had done work that he had not done, work that would have been a small part of the overall experiment.

In one case, the data in question would not have affected the conclusion, said Dr. Abul Abbas, who directed the Brigham laboratory where Van Parijs worked and was the senior author on both papers. For the second paper, the questionable data may have affected the outcome, Abbas said.

So, it seems we have a guy fabricating or falsifying data that might not even change the conclusion of the papers for which these "data" were created.


I can think of a couple of plausible explanations here. One might be that he felt he needed more data to wave around to strengthen the impact of the actually good data he collected. (Replication is good, and more is better.) Another is that possibly some of the "good" data wasn't all that good either, but it was more convincingly faked. This might not be that crazy an idea. Suspicions about Van Parijs's work from his Brigham and Women's years are tied to some plots that look more similar than they should:

In the two papers, Van Parijs was investigating the function of T cells, which are part of the immune system. Van Parijs ran samples of the cells through a device known as a flow cytometer, which sorts the cells by the characteristics in which the scientists are interested. This produces plots, essentially diagrams with large numbers of dots, with each dot representing a cell.

In both papers, there are plots that appear to be almost identical, even though the paper says they are sets of cells from different mice. Using only one mouse would have saved time. The plots are not exact copies, though, which Abbas told the Globe has made him more concerned, because if the data are fraudulent, it implies they were done intentionally. Changing data or inventing it is considered a very serious offense, regardless of the effect the act has on the conclusions made in a research paper, scientists said.

(Bold emphasis added.)
This is the kind of fakery that was bound to be caught -- with a little reflection, Van Parijs could surely have turned out a better faked plot.

The other possibility here, which seems very weird, is that the fabrication and falsification were not done with the intention of producing "better" results, nor of affecting the reported results at all. But this would make Van Parijs ... a scientist who is lying to other scientists just because he can? Is this the scientific equivalent of torturing cats before moving on to your first murder of a human?

Perhaps. Again from the Globe article:

It is not unusual to see cases of fraud involving data that are tangential to the main point of a research paper, as is alleged in some of Van Parijs's work, according to C.K. Gunsalus, a special counsel at the University of Illinois at Urbana-Champaign and a specialist on research integrity.

"It is very common, and there is also a common defense, which is 'I have a PhD and I wouldn't have done something so stupid,'" said Gunsalus. Often, she said, this defense is successful. She also said that it was common to see a pattern of escalation, with small infractions building over time to larger ones.

(Again, the bold emphasis is mine.)

Do scientists who lie about insignificant things lose their taste for gathering real data, or do they get a taste for putting one over on other scientists? Either way, it seems clear that, in a field that is all about figuring out how things really work, telling lies is a Very Bad Thing. At this point, I'd imagine, citing a paper on which Van Parijs is an author would add next to nothing, in terms of evidential support, to anyone else's serious scientific work -- this despite the fact that Van Parijs's postdoctoral advisor, David Baltimore, told the Globe that "he knows from work that his lab has done following up on Van Parijs's research that a lot of what he did is, in fact, verifiable." (That "knowledge" rests on the assumption, of course, that members of the lab are doing legitimate experiments and analyses of these ... because surely Van Parijs was the only one who would ever dare to do otherwise.)

By the way, C.K. Gunsalus is the authority to consult on scientific integrity (and lack thereof) in university research settings. Despite some quite reasonable worries people have expressed (like YoungFemaleScientist in this excellent post) about folks accused of scientific misconduct being ruined forever even if the charges turn out to be baseless, Gunsalus has argued that, more frequently, the lack of real penalties allows the cheats to stay in the system and cheat again. There's a fairly high recidivism rate on cheating in science, according to Gunsalus; it's hardly ever the case that someone is caught for misconduct without having a history of similar deeds. (And that seems to be how the Van Parijs case is shaping up.) And what's the message to the rest of the scientific community if someone is caught fabricating and falsifying data, but is only given a slap on the wrist because it didn't affect the conclusions (or maybe it did, but other labs have "validated" the results)? The message is that lying isn't really a big deal.

Do you see now why some of us get worked up about dishonesty that seems insignificant in the grand scheme of things?

Gunsalus has a downloadable offprint that bundles two of her best articles: "How to Blow the Whistle and Still Have a Career Afterwards" and "Preventing the Need for Whistleblowing: Practical Advice for University Administrators." Both are beautifully written and full of practical advice. If you're a scientist or a science student (or a university administrator), you need to read them!


Humans and human minds are complicated systems, and I would hypothesize that the vast majority of the "small" cheating that starts it all isn't rational at all. The small things that make you wonder, "What would they gain?" probably don't have a rational answer.

Yeah, I'm sure that some scientific dishonesty is done in a cynical and planned manner calculated to further one's career. But I suspect that more of it creeps up on one like alcoholism, a gambling addiction, an eating disorder, or a compulsive voyeurism disorder.

I have no conclusions or policy suggestions to take away from all of this; it's just something I suspect is at least sometimes, perhaps often, true.


And here I worry whether my data are 100% solid and OK, whether I have done enough samples, and whether my conclusions can really be drawn from my data...

I guess some people are just pathological liars. Perhaps lazy? But I do not understand how one can be a scientist without a "science fever" of working endless days and nights in order to finally get to see the data one day at 3am....

Given that storage costs are getting to be a non-issue, why is it not yet de rigueur to supplement all research articles with a link to the raw data?

That won't stop people making up data -- but it's more work to, say, measure out dilutions from your positive control so as to generate a scint counter printout or Western that looks the way you want it to, than it is to just invent numbers and claim you got them from the printout or blot.

Open data would also do away with other niggling problems, such as "representative data are shown" translating to "the only one that worked is shown".

It occurs to me that what might be gained isn't so much intrinsic to the work itself (that is, stronger results, or results more favorable to whatever you want them to be), but instead small considerations that can loom large when it's time to make a decision. "Do I stick around the lab for the three hours while this round of the experiment finishes, and collect accurate results, or do I go catch a movie and report what 'everyone knows' the results will be? After all, it's not like it's going to affect the results much...."

That is to say, it seems like cheating would be more prevalent when nothing much is at stake. If you know it's not going to adversely affect someone else's work, and it isn't really a big part of the experiment anyway, what's really so bad about fudging a little here and there? And if everyone knows what the results are going to be ahead of time, and the experiment seems like a formality more than anything else, why bother doing it when you can use the "Data Enrichment Method?"

(Do note that I am not in any way advocating these points of view. It is absolutely important to do what you say you're going to do, and do it well, especially when performing scientific experiments. What I'm trying to do is illustrate how easy it is to think what you're doing isn't really wrong.)

And, of course, once you've gotten used to being a little lazy, you might find yourself tempted to do so on a larger scale.

"Whoops! I forgot to collect a crucial piece of data for this experiment. Doing it over would take forever, and admitting I messed up could cost me my funding. This research is really important to me, and the university needs the money, and it's not like we're doing anything with human subjects... I'll just approximate, just this once. Nobody noticed before, and if I do really really well on the next part of the experiment, I can just correct for the error. It would hurt a lot more people if I didn't continue..."

Of course, once you've messed up that badly, you've no choice but to go with it. If you've got fabricated elements in one experiment, well, your next experiment should line up with it (otherwise you'll be found out). Even if you wanted to stop, it would just make all your previous efforts count for nothing. Besides, you're not one of those awful scientists wanting to pull one over on people, or making up work that's completely mind-blowing. That would be hubris. You're trying to give results that are as accurate and un-harmful as possible, with whatever modifications are necessary to save your own job.

While I wouldn't go so far as to put this kind of behavior in the "good people sometimes make bad decisions" pool (the first steps, perhaps, but that's debatable), it does seem that this sort of behavior -- unethical behavior apparently done without intent to alter results or aggrandize one's own prowess -- is of a slightly different bent than what we 'usually' see in terms of falsification.

Your befuddlement brings that to light for me, Dr. Free-Ride. You're a good scientist, and it seems that, to you, the only possible intent for such widespread dishonesty is a malicious or selfish one (after all, why do something so bad if you're not going to get anything out of it?). While this seems selfish to me, it seems, if such a thing is possible, modestly selfish. He may have been lazy and cowardly, but he wasn't trying to make himself a superstar, and he seems to have tried to make his misdeeds harm as few others as possible. I don't think that means he should be punished any less harshly, but I do think that it [may have been - I'm not inside his head, so I don't know for certain] isn't as strange as it first seems.

By periphrasis (not verified) on 31 Oct 2006 #permalink

Read a newspaper, watch television. "The race is not to the swift" and all that. There is nothing unusual about cheating, even cheating which brings many to immediate harm. Scientists are human. If you prick them, they bleed. If you reward cheating, they will cheat.

Many papers do include raw data, in the form of online supplementary material. But since many of the data reduction software packages are copyrighted and/or subject to other IP regulations, they often can't be distributed.

So even if a reviewer shows that the raw data, crunched in a standard, publicly available way, do not give the published results, an editor who is really keen on a story can brush off that review by stating that the unavailable data reduction method might be smarter.