Science's neighborhood watch

The commenters here at ScienceBlogs are da bomb! Just look at the insight they contributed to my previous post on fakery in science. Indeed, let's use some of that insight to see if we can get a little bit further on the matter of how to discourage scientists from making it up rather than, you know, actually doing good science.

Three main strategies have emerged from the comments so far:

  • Make the potential payoff of cheating very low compared to the work involved in getting away with it and the penalty you'll face if caught (thus, making just doing good science the most cost-effective strategy).
  • Clear out the deadwood in the community of science (who are not cheating to get Nobel prizes but instead to get tenure so they can really slack off).
  • Make academic integrity and intellectual honesty important from the very beginning of scientific training (in college or earlier), so scientists know how to "get the job done" without cheating.

I like all of these, and I think it's worth considering whether there are useful ways to combine them with one of the fraud-busting strategies mentioned in the previous post, namely, ratting out your collaborator/colleague/underling/boss if you see them breaking the rules. I'm not advocating a McCarthyite witch hunt for fakers, but something more along the lines of a neighborhood watch for the community of science.

One of the motivations for faking I mentioned earlier is that it seems to get you all the rewards of scientific success without the bothersome wait for the research to work. At Evolving Thoughts, John Wilkins has a lovely discussion of the decision process of the potential cheater trying to make a rational calculation about whether to roll up his sleeves and do science or, instead, to commit fraud:

This explains why researchers might choose to fabricate data. It's a way of getting an edge over your competitors, no matter what the strategy. It reduces the resources and time required, both economic factors in science, and maximises the chances of getting credit. The risk of discovery is relatively low. We're just asking for trouble.

In the early days of science, scientists were "gentlemen" who valued their "standing" amongst the scientific community. Things haven't changed much (despite shifts in class and the inclusion of other genders and cultures): scientists still need to jealously guard their standing in their community, and fight to attain it (mutatis mutandis, this also applies to any profession, particularly the academic ones). This is what keeps science honest, and enables progress to be made.

But aspects of science have become corrupted by other influences. The baleful influence of pharmaceutical companies and manufacturers of medical equipment, of tobacco funding and lobby groups in Congress and Parliament, of national institutions like educational bodies, of commercialisation and the intellectual property rights of host institutions, of defence involvement in research, and so on, have made simple peer review less useful. Often the reviewers are people chosen for an editorial outcome, like rejection or publication. Moreover, the sheer number of journals, print and electronic, and the size of the scientific community itself, make oversight hard.

But in my view the single greatest corrupting influence in science is the scarcity of funding from governments and the increase in paperwork and red tape. About fifty years ago, anecdotally, scientists did almost no paperwork and were given funds when they asked for them. A time of postwar reverence for science, scarcity of researchers, and increasing economic wealth in the West meant that the only measure of success was how one was regarded in the field by one's peers. Now you must be assessed by citation indices, journal impact factors, granting-body reviews, and so on.

This is an exercise in game theory. If you want to discourage fraud, you have to make it so that the payoffs can only be achieved honestly. A large part of this is to make the payoffs more directly offered by the professions concerned. But some of it also has to be making publishable research more difficult to achieve, and making careers depend not so much on the quantity of papers as on their quality. Therein lies the rub.
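Wilkins's game-theoretic framing can be made concrete with a toy expected-value calculation. Here's a minimal sketch in Python; every number in it is hypothetical, invented purely for illustration, but it shows why the knobs he identifies -- the odds of discovery, the penalty, and the effort a publishable result demands -- are exactly what determine whether cheating is the "rational" move:

```python
# Toy expected-value model of the cheat-or-do-the-work decision.
# All numbers are hypothetical, chosen to show the shape of the
# calculation, not to describe any real field.

def expected_payoff(reward, effort, p_caught=0.0, penalty=0.0):
    """Expected net payoff: the reward discounted by the chance of
    getting caught, minus effort spent, minus the expected penalty."""
    return (1 - p_caught) * reward - effort - p_caught * penalty

honest = expected_payoff(reward=100, effort=60)  # grind out real results
cheat = expected_payoff(reward=100, effort=5, p_caught=0.1, penalty=50)

print(honest, cheat)  # 40.0 80.0 -- cheating "wins" while detection is rare
# Raise the odds of detection (say, via a neighborhood watch) and it flips:
# at p_caught=0.5, cheat = 0.5*100 - 5 - 0.5*50 = 20.0, well below 40.0
```

Nobody (one hopes) runs this calculation explicitly, but it makes plain that a low probability of getting caught is exactly what the committed cheater is counting on -- which is why strategies that raise it deserve our attention.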

It's hard to get research to work. It's a bother to write it up. It's a pain to get it published, especially in a "high impact" journal. Add to that the pressure of spending all kinds of time scrambling for money to support the research (which means identifying a "sexy" problem, catching up on the literature, gathering preliminary data, and writing grant proposals that must also undergo peer review). What's the payoff? If you find something really good, maybe you get fame (your name in the textbooks!), fortune (patent rights!), and the opportunity to do a little superior dance in the general direction of all those other violently competitive scientists in your field.

Oh yeah, and the warm feeling you get from having made a contribution to scientific knowledge.

But really, beating those bastards in the lab across town, that's the reward that stays with you. Especially since you're pretty sure that one of them was behind sinking the review of that important grant proposal that you submitted last year, and then because you didn't get the grant, you didn't get tenure, and now here you are starting off again at another school, so IN THEIR FACE!!

Putting one over on people you hate because they've bloodied you in the fierce competition for resources and recognition ... maybe wouldn't leave you feeling as conflicted as putting one over on people you like and respect. There might be a clue here about ways to raise the cost, or lower the payoff, of being a cheater.

What of the parasites in the scientific community? In his comment, Rob writes:

I imagine (like you I'm operating without data) that the most common form of scientific fraud is not a person who fakes a major breakthrough for the sake of fame, money and groupies, but a person who is basically trying to be deadwood. They want to pad their tenure file with nondescript results in marginal journals, hoping that the tenure committee will not know enough about the field to recognize fluff. Really, they want to go from doing very little work to doing absolutely no work.

The "striving to be deadwood" crowd are also the kind of people who put no effort into teaching, but avoid criticism from students by giving them all As. I imagine a big tactic they use is to try to make their results so boring that no one will try to replicate them, look at their lab books, or send them email while they are on vacation.

This denizen of the scientific community isn't involved in gladiatorial combat with competitors -- he just wants to be left alone. (You know, there are many entertaining blogs to read if the phone stops ringing and the students go away. Plus a lot of nifty computer games. Playing computer games at your desk isn't grounds for being fired once you've gotten tenure, is it?) If this "scientist" is a committed slacker, then it will be very hard for him to invest even the minimal effort needed to do legitimate scientific research (even if that research is as lacking in innovation as vanilla pudding). He may find it easier to work out what solid research should look like -- say, by reverse-engineering some middle-of-the-road papers in middle-of-the-road journals -- than to actually bring himself to wrestle with an experimental system. As long as there is enough apathy among those evaluating this fellow's output, he may be able to fudge his way into a lifetime appointment (Endowed Chair of Tetris).

Instead of being motivated by hatred of his enemies, this creature is dead inside. It would seem he has no feeling at all for the other members of the community of science, beyond, "Don't bug me!" This kind of disconnection would seem to lower the cost of being a cheater quite a lot.

Which brings us to what Leah suggests in her comment: we shouldn't be surprised when professors, or postdocs, or graduate students cheat on their research if we've given them undergraduate training where cheating was treated as no big deal. Students of science are not just learning the guiding theories, explanatory frameworks, or crucial laboratory techniques of the field they are studying. They are also learning what's important to their scientific teachers and mentors and what is not. If your lab instructor thinks cooking the data is fine in your p-chem lab, why wouldn't you assume that cooking the data is also fine in preparing a grant proposal or a scientific manuscript? If your biology advisor spends most of her contact hours telling you how sweet it's going to be to scoop a competing lab with which she's been communicating, why wouldn't you assume that beating your competitors should be your main motivation (and that communicating with other scientists in your field is a sucker's move)? If your physics professor just wants to be left alone, why wouldn't you assume that you can pretty much do what you like so long as you don't draw too much attention to yourself?

It's not like the typical graduate advisor spends a lot of time explaining the rules of the community of science to the new graduate student. The PI has often lived in the world of science so long that its rituals seem utterly normal. The new grad student is frequently trying like crazy not to show him- or herself to be dumb or incompetent, and so is trying to ration the number of (potentially stupid) questions put to the advisor, postdocs, or senior grad students. If you look like you know what you're doing, chances are it's because you're trying to look that way.

There are moments when it feels like you're trying to enter a community that probably knows it's too good for you. Showing weakness -- even by asking a question about the right way to do something -- feels very dangerous.

What would happen if this dynamic were to change? Imagine a community of science whose new members were recognized, from the moment of their entry, as valuable rather than as likely-to-be-cut-soon. What if PIs, postdocs, and senior grad students all took it upon themselves to explain the rules of conduct of the scientific enterprise -- even in their most mundane manifestations in the laboratory and the classroom? What if, moreover, members of lab groups and departments regularly communicated about better and worse ways to do things in the community of science, and sought each other's insight on tough calls? What if, through successes and mishaps, they all kept reminding each other how cool it is to be part of the process that builds new knowledge of the world?

Could this keep later competition between scientists who have interacted with each other as part of such a community more friendly than brutal? Might this kind of intense contact with a community scare deadwood aspirants away toward some profession where a community (whose members might bug you) is optional?

I think maybe it's time to find out.


I am sympathetic to diagnoses of social problems that identify cultural causes, factors like a hypercompetitive atmosphere. The problem with these sorts of explanations is that they don't help much in actually solving the problems. We can try individually to be better people, and to talk up the idea of being more cooperative, but that is generally just hollow gesturing. Since at least Bacon, a "cause" has been something one can manipulate to achieve a desired end. Cultural causes aren't easy to manipulate. Causal explanations that point to the incentive structure of a community at least point to something we can change.

Rob, I agree that it's a lot harder to change a culture than (say) tenure rules or pay scales. However, it's scary just how important the microculture of the research group is in shaping people's scientific habits, and just how much power the head of each group has over that microculture. So, in fact, I suspect certain features of the scientific culture (aka "How we do things in my lab") are more easily manipulated than one might guess.

(I'm sure Aristotle would have good stuff to say here about the interplay between the character of the individual scientist and the character of the community training her or him. The key point, to my mind, is that the causal arrows go both directions, so changing either one could be a good way to change the other.)

How about getting rid of the whole concept of tenure?

The sub-standard do-nothing people who cheat in order to get tenure, so they can spend a working life doing nothing much, would at one stroke have their motivating force removed. I agree with whoever it was in the comments below (rob helpy-chalk?) who said that these people are more of a threat than high-profile cases like Hwang Woo-suk: the high-profile ones have an extremely high probability of being found out, given the number of people poring over their papers and the risk/reward calculations of anyone on their team who is aware of the cheating.

By potentilla (not verified) on 23 Jan 2006 #permalink