Scientists behaving badly . . .

Steven Levitt writes:

My view is that the emails [extracted by a hacker from the climatic research unit at the University of East Anglia] aren't that damaging. Is it surprising that scientists would try to keep work that disagrees with their findings out of journals? When I told my father that I was sending my work saying car seats are not that effective to medical journals, he laughed and said they would never publish it because of the result, no matter how well done the analysis was. (As is so often the case, he was right, and I eventually published it in an economics journal.)

Within the field of economics, academics work behind the scenes constantly trying to undermine each other. I've seen economists do far worse things than pulling tricks in figures. When economists get mixed up in public policy, things get messier. So it is not at all surprising to me that climate scientists would behave the same way.

I have a couple of comments, not about the global-warming emails--I haven't looked into this at all--but regarding Levitt's comments about scientists and their behavior:

1. Scientists are people and, as such, are varied and flawed. I get particularly annoyed with scientists who ignore criticisms that they can't refute. The give and take of evidence and argument is key to scientific progress.

2. Levitt writes about scientists who "try to keep work that disagrees with their findings out of journals." This is or is not ethical behavior, depending on how it's done. If I review a paper for a journal and find that it has serious errors or, more generally, that it adds nothing to the literature, then I should recommend rejection--even if the article claims to have findings that disagree with my own work. Sure, I should bend over backwards and all that, but at some point, crap is crap. If the journal editor doesn't trust my independent judgment, that's fine, he or she should get additional reviewers. On occasion I've served as an outside "tiebreaker" referee for journals on controversial articles outside of my subfield.

Anyway, my point is that "trying to keep work out of journals" is ok if done through the usual editorial process, not so ok if done by calling the journal editor from a pay phone at 3am or whatever.

I wonder if Levitt is bringing up this particular example because he served as a referee for a special issue of a journal that he later criticized. So he's particularly aware of issues of peer review.

3. I'm not quite sure how to interpret the overall flow of Levitt's remarks. On one hand, I can't disagree with the descriptive implications: Some scientists behave badly. I don't know enough about economics to verify his claim that academics in that field "constantly trying to undermine each other . . . do far worse things than pulling tricks in figures"--but I'll take Levitt's word for it.

But I'm disturbed by the possible normative implications of Levitt's statement. It's certainly not the case that everybody does it! I'm a scientist, and, no, I don't "pull tricks in figures" or anything like this. I don't know what percentage of scientists we're talking about here, but I don't think this is what the best scientists do. And I certainly don't think it's ok to do so.

What I'm saying is, I think Levitt is doing a big service by publicly recognizing that scientists sometimes--often?--engage in unethical behavior such as hiding data. But I'm unhappy with the sense of amused, world-weary tolerance that I get from reading his comment.

Anyway, I had a similar reaction a few years ago when reading a novel about scientific misconduct. The implication of the novel was that scientific lying and cheating wasn't so bad, these guys are under a lot of pressure and they do what they can, etc. etc.--but I didn't buy it. For the reasons given here, I think scientists who are brilliant are less likely to cheat.

4. Regarding Levitt's specific example--his article on car seats that was rejected by medical journals--I wonder if he's being too quick to assume that the journals were trying to keep his work out because it disagreed with previous findings.

As a scientist whose papers have been rejected by top journals in many different fields, I think I can offer a useful perspective here.

Much of what makes a paper acceptable is style. As a statistician, I've mastered the Journal of the American Statistical Association style and have published lots of papers there. But I've never successfully published a paper in political science or economics without having a collaborator in that field. There are just certain things that a journal expects to see. It may be comforting to think that a journal will not publish something "because of the result," but my impression is that most journals like a bit of controversy--as long as it is presented in their style. I'm not surprised that, with his training, Levitt had more success publishing his public health work in econ journals.

P.S. Just to repeat, I'm speaking in general terms about scientific misbehavior, things such as, in Levitt's words, "pulling tricks in figures" or "far worse things." I'm not making a claim that the scientists at the University of East Anglia were doing this, or were not doing this, or whatever. I don't think I have anything particularly useful to add on that; you can follow the links in Freakonomics to see more on that particular example.


The best scientists I have known took Popper seriously and sincerely tried to falsify their own theories. That's really hard for human beings to do, given that we're all about self-affirmation. But it's the bar that the scientific method sets. And it can be done. I remember as a graduate student sitting in a seminar led by Hans Bethe and Willie Fowler and watching them debate a point of stellar dynamics. They were patient and gentle with every suggestion and idea except their own - a lesson I never forgot.
I'm really surprised this aspect of how one should practice science hasn't come up in the climate email dialog - why aren't these people the first to poke holes in their own claims?

Nick:

My guess is that these guys have been busy poking holes in their own claims, but what they're strategizing about is how to deal with what they consider, rightly or wrongly, to be misleading reports in the news media.

The climate scientists who've been busy beating up on their own claims have done a really good job of hiding those activities from the public - maybe they should be put in charge of email security for their peers.

Seriously - the quality of science on both sides of the climate debate is profoundly compromised by a combination of confirmation bias and groupthink. Anyone who publicly exposes the slightest ambiguity is taken out and shot by their respective camps. If one takes F. Scott Fitzgerald's definition of first-rate intelligence (the ability to keep two opposed ideas in one's head at the same time and still function) as the metric, researchers on both sides of the climate debate apparently think there is no one in the political class or general public who measures up.

For example, most climate researchers I know privately admit that it's a gross oversimplification to concentrate attention on a single number (e.g. 350.org), but they justify it because a) the media and public are incapable of grasping the complexity of climate change, and b) the "other side" oversimplifies too. But there is a price to be paid for hiding the complexity and ambiguity. Public confidence in climate scientists will suffer when new data require new targets, just as confidence in public health researchers did when new data required new guidelines (cf. hormone replacement, prostate and breast cancer testing).