Cleaning up scientific competition: an interview with Sean Cutler (part 1).

Sean Cutler is an assistant professor of plant cell biology at the University of California, Riverside and the corresponding author of a paper in Science published online at the end of April. Beyond its scientific content, this paper is interesting because of its long list of authors, and the way they ended up as coauthors on this work. As described by John Tierney,

Dr. Cutler ... knew that the rush to be first in this area had previously led to some dubious publications (including papers that were subsequently retracted). So he took the unusual approach of identifying his rivals (by determining which researchers had ordered the same genetic strains from a public source) and then contacting them. He told me:

Instead of competing with my competitors, I invited them to contribute data to my paper so that no one got scooped. I figured out who might have data relating to my work (and who could get scooped) using public resources and then sent them an email. Now that I have done this, I am thinking: Why the hell isn't everyone doing this? Why do we waste taxpayer money on ego battles between rival scientists? Usually in science you get first place or you get nothing, but that is a really inefficient model when you think about it, especially in terms of the consequences for people's careers and training, which the public pays for.

Cutler doesn't argue for the end of competition between scientists. Rather, as he explained in a subsequent TierneyLab post, he thinks it's important for scientists to compete ethically -- to play by the rules as they do their research and submit their findings for publication, rather than getting across the finish line first by sabotaging the other scientists in the running.

As you might imagine, I'm on board with Cutler's view of how scientists should approach competition. And I am delighted that he agreed to let me interview him for my blog. (On account of its length, I'm breaking the interview up into two posts.)

JS: Can you explain your experiment in cooperative science, and what motivated you to do it?

SC: Well, first off, the idea that it was an experiment is a tad misleading. It was a personal experiment in how I wanted to live my life -- not a scientific experiment. Perhaps I can publish it in the Journal of Irreproducible Results one day, if I am so lucky.

The background is this. After several years of work I found myself sitting on a major discovery in one of the most competitive fields in plant biology. "Competitive" in science is usually code for "cut throat", and can be associated with scientists who abuse their power to get ahead unfairly. I thought to myself -- what is the one thing that those "cut throat" types would not do in my situation? -- because I really do not want to end up like them. Contacting the people I might scoop seemed like an interesting approach. My colleague, ethicist and friend Coleen Macnamara thought it was a great idea, which was encouraging. I sent emails out to people who I determined were sitting on the same jackpot discovery as me, though I gathered that they didn't realize it. That got the ball rolling.

JS: What's your take on why competition, rather than cooperation, has come to be seen as the right way to do science?

SC: Just read Jim Watson's The Double Helix or watch the movie And the Band Played On. There are too many narratives of success coupled to unethical behavior. These campfire stories are deeply ingrained in the psyche of science and scientists. I have heard one particularly troubling message over and over again in my career: "you can do whatever you want, as long as you publish great work". Sadly, I have heard of graduate students getting that advice from their mentors! That said, I am not against competition or individuals working in isolation. What we need to do is supplant the current narrative with a better one. We also need to be militant about calling the jerks on their behavior whenever we see it -- I seem to have a unique talent for this.

JS: You've said that you're aware of scientific competition that has relied on unethical strategies -- peer reviewers making unfair use of confidential information from manuscripts submitted to journals, scientists ignoring other scientists' requests for published materials, etc. Do you think this kind of ethical breach is more prevalent than outright fabrication, falsification, and plagiarism? Do you think, because of its prevalence, it might be more serious?

SC: Yes, in my opinion it is much more serious and more damaging to the culture of science -- perhaps not to the public perception of science, but to the environment that scientists operate in. I would wager that this stuff is 100- to 1000-fold more common than fraud -- but I couldn't say for sure (that doesn't stop me from guessing, though!). I hear about the behavioral breaches very often, but I rarely hear of outright fraud. You risk your career if you fake data, but there are really no serious consequences for being unethical in the behavioral sphere. Sure, you may be known as a "shark", but the praise that comes from your fantastic papers seems to make that tolerable. That is why the institutional structure of rewards and consequences needs tweaking.

JS: Do you think that the institutional structure protects those who commit "ethical crimes"?

SC: Perhaps tacitly, and sometimes explicitly. I think science would benefit from more transparency, so that jerk scientists just cannot get away with this stuff. For example, anonymous review probably does more to protect "sharks" than to protect honest reviewers, which is sad. I would wager that if one were granted access to the reviewer data held by the journals, some interesting patterns would arise and a lot of horribly unethical scientists would be exposed. I would love to see someone use journal reviewer data for this purpose -- that would be awesome. Google, are you listening? The irony is, if you proposed that to a journal, they would say "but the process is anonymous," to which I would respond: yes -- you are using anonymity to protect jerks! Even a coded data set or an unpublished analysis to prove the point would be awesome -- no need to expose people, just show that there are unambiguous problems at work and that they need to be fixed.

JS: This kind of ethical breach, as you've also noted, typically violates journal policies. Why do you think enforcement of these policies has been so lax? Are there practical ways for journals (and granting agencies) to enforce such policies more rigorously? Are written policies about consequences for violating reviewer and author policies sufficient?

SC: I wish I knew, but let me give a cynical answer. Look who is making the rules: successful scientists who usually got to where they are by "winning" in competitive fields. It is like asking company executives, before the SEC mandated it, whether they wanted a rule requiring them to disclose their personal sales of their company's stock. Why would they agree to that? It has to come from outside the system, because insider trading is a proven method for acquiring wealth and no one wants to give up a proven strategy. Don't get me wrong, it is still a minority of scientists who are unethical jerks -- but all it takes is a couple of jerks in the room to say "we could never tackle this problem" and they can shut the discussion down.

Additionally, there may not be a clear sense at the journals of the scope of the problem, and this only helps to slow progress. One high-level person at a journal told me, and I am paraphrasing here, "breaches of the reviewer agreement are thankfully rare." I almost fell off my chair when I read that, and I wrote a rather nasty email in response (which I regret doing -- I make mistakes too). That is what we are up against. Many of the people with the most power think that there is not a significant problem.

Having said all of that, my shenanigans enabled me to get Bruce Alberts on the phone for a lengthy discussion, and he assured me that this will change, and that Science Magazine will make this topic an important component of its upcoming discussion about ethical policies at the journal. The historical focus on this topic has been on "data ethics", not behavioral ethics, but I think things will change. The challenge is for people to speak up and say, "Yes, there is a problem -- and we need to fix it". If Science Magazine institutes a policy, the rest of the journals will follow suit. This won't fix the problem that jerks will always exist, but at least it changes the ground rules and creates clear consequences for what happens when you get caught winning dishonestly.

* * * * *
In part 2, we wrestle with some of the details that complicate the task of helping scientists compete more ethically. Also, Cutler gives a tantalizing hint of a miraculous solution.


I look forward to hearing the details of how he specifically approached the other researchers -- how did he explain his results without risking being scooped himself?

I am also a little skeptical of the overall narrative you are spinning. Note that Cutler's lab got first and last authorship on the Science paper. For all intents and purposes, the other guys were scooped.

By Neuro-conservative on 18 May 2009

Every scientist has stories about how she's been unfairly fucked by scurrilous reviewers, but is there any real evidence that reviewer malfeasance is a systemic problem?

JS said: "...enforcement of these policies..." And right there you've got it. It is simple and easy to write management policy (e.g., "There is no bullying in schools."). It is another, and far more costly, task to implement and enforce that policy (e.g., full-time close supervision). Often, it becomes a budgetary and staffing nightmare to find the human hours needed to implement and enforce. Then there are the pseudo-legal arguments that follow, the appeals process...

All that said, what a fine move by Sean Cutler to turn competitors into collaborators!

I notice that Dr. Cutler successfully scooped all of his rivals by including them on the paper. They are co-authors and he is the corresponding author being interviewed (even if by blogs :). I wonder if he'd have been so sanguine about this procedure if one of his rivals had contacted him first saying they'd like to publish his data and offered him a co-authorship for his trouble.

I think that this person is interesting and seems good, but extending this approach more broadly raises several dangers. One risk is that it might promote a worse evil, ultimately leading to sloppy, unconfirmed work or even deliberate fraud.

One issue is authorship and the desired requirement of substantive contribution. If you did not substantially contribute to the intellectual genesis of the actual results, the required intellectual analysis of the results presented, or the experimental production of major results, what right do you have to claim credit for the research?

This inattention to authorship has promoted some scientific frauds and large errors. The discovery of error or fraud was delayed because others were unduly impressed that senior people were co-authors. However, those senior authors did not contribute to the work, or could not actually evaluate it well enough to notice whether it had even been done.

We already have too many cases where people who do not contribute extensively to a paper get undeserved credit. Depending on the context in which you do research, politically important collaborators, senior faculty, group leaders, division directors, department chairs, etc. expect or are given pro forma authorships without actually participating.

Are we supposed to extend this evil by including all members of a research community, so that no one objects and everyone gets "partial credit" for all work in the field?

This provides a powerful disincentive for anyone to critically examine or verify the work. An advantage of competition to publish is that diverse people are often working toward the same end. Knowing that others are running similar experiments might make the people doing the work more careful in some ways (and, hopefully, more fraud-averse as well). The cooperative model, by contrast, might tend to let "idea people" publish great ideas first with only poor supporting evidence. By co-opting a lot of co-authors thinking along somewhat similar lines, it lends more authority to poorly tested or untested ideas than they deserve.