How scientists see research ethics: 'normal misbehavior' (part 2).

In the last post, we started looking at the results of a 2006 study by Raymond De Vries, Melissa S. Anderson, and Brian C. Martinson [1] in which they deployed focus groups to find out what issues in research ethics scientists themselves find most difficult and worrisome. That post focused on two categories the scientists being studied identified as fraught with difficulty: the meaning of data and the rules of science. In this post, we'll focus on the other two categories where scientists expressed concerns: life with colleagues and the pressures of production in science. We'll also look for the take-home message from this study.

The focus groups of scientists in this study saw much worrisome behavior in response to the challenge of getting along with other scientists. Central to this challenge is the problem of how to negotiate collaborations of various sorts in an environment that rewards individual achievement. One scientist in a focus group describes the situation:

"... along those lines I think [we must be] aware ... not to cut people out. It is like, go out of your way to include people that might have made any kind of contribution whatsoever ... in my field in particular [there are] innumerable instances where people are cooperating well until something really spectacular is found. And then all of a sudden people are just lopped-off at the knees ... literally on the day something was found, it just [starts] to crumble and ... people just don't speak to each other anymore, or [are] trying to block publications, just sort of a mess." (46)

The interpersonal tensions described here are both foreseeable and distressing to scientists. But if they're foreseeable, might they also be avoidable?

Given the constraints, there are different strategies for avoiding such tensions. Some of these might hinge on individual-level behavior, but others might turn on changes at the institutional or community level, including changes in the reward structures for scientific activity. (We've touched on this kind of issue many, many times before, and I don't imagine we're done with the subject.)

Indeed, the responses of the scientists in this study seem to identify a range of strategies for success as a scientist that they feel might technically comply with the rules (for example, stopping short of fabrication, falsification, and plagiarism) while doing actual harm to the members of the scientific community and to their shared enterprise.

The pressure to produce -- coupled with uncertainties about ownership of ideas, the proper way to assess scientific output (quantity or quality?), the management of competing interests, and the division of labor in research -- is associated with a number of behaviors that do not quite reach the threshold of FFP but nevertheless are regarded by scientists as misconduct. The problems mentioned by members of our focus group included: manipulation of the review system, (improper) control of research by funders, difficulties in assigning authorship, exploitation of junior colleagues, unreported conflicts of interest, the theft of ideas from conference papers and grant proposals, publishing the same thing twice (or more), withholding of data, and ignoring teaching responsibilities. (46)

Scientists view these behaviors as misconduct, but they still see them happening. This raises the question of what the scientific community should -- or could -- do about it.

Given the other responses gathered from the focus groups, the best way forward would probably not involve simply imposing additional rules against these behaviors.

The four big areas of concern De Vries et al. identified in the focus group responses are interconnected. For example, "life with colleagues" can bleed into "the pressures of production," especially when scientists are involved in building new knowledge and new scientists:

The fear of competition from one's students and post-docs highlights a structural dilemma in the training of scientists: to succeed in science it is important to attract the most talented graduate students and new PhDs, but these bright young researchers, once trained, become one's competition. (46-47)

More generally, the fact that doing science requires a position and funding (sometimes from different entities with different priorities) makes it hard not to compromise some of your scientific commitment to being tough-minded in your pursuit of objectivity. One focus group participant describes the pressures vividly:

"For example, a particular study that I'm involved in is about drugs to ... offset the effect of radiation ... [The] company that makes [the] drug ... does not want a certain control group in the study and will not fund the study if that control group is there ... there's nothing illegal about [this], and I know for a fact it happens all the time and that's the way it goes. It's because government can't pony up enough money to do all the clinical research that needs to get done. In this ... study ... the individual who's going to be principal investigator is an untenured assistant professor ... And you know, screwing around with this drug company, negotiating the study, has cost her a lot of time, and she, it's going to make it harder for her to get tenure. And the pressure is clearly on her to knuckle under. I mean, she could have started this study months ago if she'd just said, sure, I'll do whatever you want, give me the money. (47)

It's important to note that feeling such pressures is not the same as giving in to them. However, the greater these pressures, the more likely it is that good scientists will end up making bad choices.

We've seen the sweep of the perceptions and concerns voiced by the focus groups in this study. At this point, we might ask how representative these focus groups are. How widely shared are their perceptions and concerns within the larger scientific community? De Vries et al. constructed a survey to help them answer this question:

[U]sing what we learned in the focus groups, together with data from earlier studies, we developed a survey which we distributed to a sample of scientists funded by the NIH. We presented our respondents with a list of 33 misbehaviors ranging from the fairly innocuous (have you signed a form, letter, or report without reading it completely?) to the more serious (have you falsified or "cooked" research data?) and asked two questions:

  1. In your work, have you observed or had other direct evidence of any of the following behaviors among your professional colleagues, including postdoctoral associates, within the last three years?
  2. Please tell us if you yourself have engaged in any of these behaviors within the last three years?

Because reports of what others are doing is not a reliable measure of the incidence of behavior -- several respondents may report the same incident -- we use self-reports to describe the prevalence of misbehavior. In a few places we do use respondents' accounts of the behavior of their colleagues, but only to allow a glimpse of scientists' perception of a behavior's prevalence. (47)
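
To see the double-counting worry concretely, here's a toy sketch of my own (in Python; it is not anything from De Vries et al., and the lab members and counts are invented purely for illustration): one incident witnessed by two colleagues gets reported twice under the "observed" question, but at most once under the self-report question.

    # Toy illustration (mine, not the paper's): why counts of observed
    # misbehavior can overstate incidence, while self-reports count each
    # incident at most once. All names and numbers are made up.

    incidents = [
        {"id": "incident-1", "perpetrator": "scientist_a",
         "observers": ["scientist_b", "scientist_c"]},  # one real incident
    ]

    # Question 1 ("have you observed...?"): every observer answers yes,
    # so the same incident is reported twice.
    observer_reports = sum(len(i["observers"]) for i in incidents)

    # Question 2 ("have you yourself engaged...?"): only the perpetrator
    # can report it, so each incident contributes at most one report.
    self_reports = len(incidents)

    print("observer reports:", observer_reports)  # 2 -- inflated
    print("self reports:", self_reports)          # 1 -- one per incident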

There's another paper [2] that focuses on the results of this survey; I'll be blogging about that paper soon. In the meantime, on the question of whether the focus groups are a reasonable representation of the scientific community:

Our focus group data predicted well the responses from the national sample. (47)

The quick answer is, yes.

So, what should we do with these findings? First, De Vries et al. draw some lessons for policymakers:

Our conversations with scientists lead us to conclude that a certain amount of "normal misbehavior" is common in the dynamic field of science. This is not to suggest that these behaviors should be condoned, but, following Durkheim, we see these behaviors as playing "a useful and irreplaceable role." ...

[N]ormal misbehaviors show us the "pinch points" in the organization of science. It is particularly important to notice that when scientists talk about behaviors that compromise the integrity of their work, they do not focus on FFP; rather, they mention more mundane (and more common) transgressions, and they link these problems to the ambiguities and everyday demands of scientific research. When policymakers limit their concerns to the prevention of infrequently occurring cases of FFP, they overlook the many ways scientists compromise their work in an effort to accommodate to the way science is funded and scientists are trained. (47-48)

Fabrication, falsification, and plagiarism may be egregious enough that even policymakers see what's wrong with them. But these are likely the tail-end of a trajectory of normal misbehavior, not where a scientist starts going off the rails. Where things start going bad is more likely to be in the gray areas of ambiguity about methodology and results. While these gray areas are unavoidable, the pressures upon scientists to produce and to distinguish themselves may make it seem expedient to sacrifice objectivity and fairness toward fellow scientists.

Ratcheting down the pressures may make the gray areas less dangerous.

Our focus group data demonstrate that any effort to reduce misbehavior and misconduct must pay attention to the nature of scientific work and to the internal processes of science. (48)

Policymakers, in other words, need to know something about the complexity of scientific research -- what makes the tough calls tough -- in order to establish better conditions within which scientists can make good decisions.

Of course, scientists may need to step up and take responsibility for accomplishing what policies and regulations imposed from outside the scientific community cannot accomplish. Luckily, scientists have a vested interest in getting their community to a well-functioning state.

We are aware that mandated training in the "responsible conduct of research" (RCR) focuses on FFP and the normal misbehavior identified by our focus group participants, but the very ordinariness of the latter shields it from the attention of national policymakers and institutional officials. (48)

To me, it feels like there's a bit of a tension here. Given the resistance scientists display to burdensome rules imposed by policymakers, what will it accomplish to have those policymakers pay more attention to normal misbehavior? Arguably, isn't it the scientists mentoring other scientists and interacting with their scientific peers who need to pay more attention to normal misbehavior?

And paying attention to normal misbehavior is not enough. Having strategies for responding to normal misbehavior would be better.

When we look beyond FFP we discover that the way to better and more ethical research lies in understanding and addressing the causes of normal misbehavior. This is not a call for increased surveillance of the mundane work of researchers, a response that would create undue and problematic interference in the research process. Rather, the presence of normal misbehavior in science should direct attention to the social conditions that lead to both acceptable and unacceptable innovations on the frontiers of knowledge. (48-49)

Ahh, here's where policymakers and institutional officers could be a real help to scientists. Paying attention to normal misbehaviors -- and understanding the conditions that give rise to them -- could go hand in hand with adjusting the institutional contexts (including reward structures) and social conditions. In other words, scientists within the community, policymakers, and institutional officers share the responsibility for understanding the foreseeable outcomes of the system as it now stands -- and for creating a system that leads to better outcomes.

________
[1] Raymond De Vries, Melissa S. Anderson, and Brian C. Martinson (2006) "Normal Misbehavior: Scientists Talk About the Ethics of Research" Journal of Empirical Research on Human Research Ethics 1(1), 43-50.

[2] Brian C. Martinson, Melissa S. Anderson, A. Lauren Crain, and Raymond De Vries (2006) "Scientists' Perceptions of Organizational Justice and Self-Reported Misbehaviors" Journal of Empirical Research on Human Research Ethics 1(1), 51-66.

This might be a stupid question, but I'll ask it anyway:

Why don't we look at science-based issues of quality control in applied science (e.g. engineering bridges, drug manufacture, nuclear power) and see what they do that works? And then adapt or modify that to academic needs? The question of quality has been around for some time, and there is quite a lot of "how to have good quality" research in numerous fields. I don't see the need to start over from scratch, but somehow that does seem to be the intent of a lot of the proposals (the focus groups and so forth).

In the instance where the focus group participant felt pressure from the drug company to drop a particular control group, it is entirely possible that the company's own Quality and Legal depts. were not aware of this breach of ethics, and she could have contacted them to get someone within the company advocating for her design. She could have called the FDA directly for a consultation on the study design; they often make time for such things. Failing that, lots of companies are working on the same or similar targets -- could she not have cut her losses and said, "You know what, I am not comfortable with this, I think I'd better talk to someone else about this study"? But an academic researcher isn't going to have access to the full GLP training complete with contact numbers of the company lawyers. So there are some serious training issues there.

What I would be more concerned with, especially when it comes to what De Vries et al. are describing, is enforcement. In my particular career, Really Really Bad scientists who screw up their studies and get caught are blacklisted--put on the FDA's debar list. Their next career moves are pretty much limited to teaching high school science or finding some unregulated field (good luck with that...) and starting all over at entry level, if someone feels generous enough to give them a job. Do you see anyone, anyone who is willing to revoke Nobels and put Big Names out of work for ever and ever? Nope, me neither. Who wants to be the bad guy who tells Jim Watson that on account of filching Rosalind Franklin's work, he's gonna lose his shiny Swedish medal and die in ignominy? Show of hands? No? How about blacklisting Dr. Imanishi-Kari and revoking her doctorate? No? No one wants to step up? Oh well, so much for quality and ethics...

I do agree that there are numerous structural aspects of research that make it fraud-friendly. The apprentice-based system controlled entirely by the whims of the lab head, whose tenure cannot be revoked unless, well, damn, now that I think about it, you probably could actually kill some grad students and keep your job from Leavenworth. So yeah, the system needs a major overhaul, but I don't think that asking the beneficiaries of such a system what they think needs changing is such a brilliant idea. It's like asking me exactly how Lindt's chocolate factory could improve their raspberry chocolate bar.

We could eliminate a huge chunk of this problem (sub-FFP misconduct) by banning one phrase from the scientific literature: "data not shown". There's no shortage of storage room in cyberspace; if you've done the experiment, show the result. Likewise, don't show "a representative result from three separate experiments" and nothing more, show the representative result in the body of the paper and the other two in supplementary data.

It seems (partly from personal experience, partly from second-hand accounts) that as people proceed upwards in their scientific career, there are two distinct styles that develop. And by the time someone has become a star, someone with their own lab, big funding, lots of subordinates, those styles really crystallize. To be sure, anybody reaching those organisational heights will be very driven, focused and self-confident, and with an ego to match.

But people seem to become either really nasty or really nice. The nasty type is the researcher with the sharp elbows, always looking out for number one. People who, as described above, will cut out a collaborator the moment it pays to do so and will weigh every decision according to how it benefits them. Grad students and postdocs are people to be used to further their own careers, then discarded (gently if possible; they can become useful again). These people are of course usually very good researchers and working with them or under them may be perfectly fine as long as you know what you are getting into. If you likewise make sure you're getting what's coming to you, it may be a very fruitful, if stressful, collaboration. This kind of researcher succeeds by grabbing all they can get away with.

The nice person is the researcher who shares with everyone. They're happy to put the lowliest grad student on a paper if they need a publication and they have done the barest minimum needed to get a coauthorship. They're happy to be on board any collaboration, share data and code (as long as it's not immediately publishable; they're nice, not stupid), happy to take on second-stringers or partial failures on the chance they may bloom in a different lab setting. This kind of researcher succeeds by getting paid in kind. People being helped by them (that aren't the nasty sort) are happy to help out their grad students and postdocs in turn, add their lab to a project and so on. When someone who trained under the nice researcher has a star student of their own, they'll be happy to refer that student to their old mentor for their postdoc work.

The wishy-washy in-between attitude seems to become rarer higher up and is perhaps not a stable strategy, so people will tend to gravitate towards either of these two extremes. And of course the postdocs and grad students working under these people will tend to pick up the same mode of operation in turn.

I see this article as addressing the question of how, concretely, misbehaviour in research can be avoided. First, I think we should stop calling it misbehaviour and call it dishonesty. Even the "gray areas" are not so grey: everyone knows them; there is a list of "grey areas".
I believe that in a city where there is a university, most of the dishonesty is going on in that university. But when a person has stolen a shirt from a store, he is called a thief and is dealt with accordingly; there could also be an article in the local newspaper.

In a university, things are done differently:
1. Whatever the dishonest act, it is handled by the colleagues of the perpetrator, the members of that university.
2. It is handled in total secrecy to protect the person and the place. Why? When a thief or a forger is arrested, or an investigation is started, his name is made public as the name of the SUSPECT. But a university is not, it appears, a free society.
3. The accusers, the witnesses, and the victims are mostly the collaborators of the perpetrator -- often persons whose future depends on the perpetrator and/or on his colleagues.
4. It occurs to nobody to explain the stealing of a shirt from a store by any "pressures".

My two conclusions from these points (and more such points could be noted; some were noted in the article and the comments) are very simple:
1. Strictly forbid any secret procedures and abolish all rules of secrecy (under any pretext, particularly under the completely fake pretext of fear of lawsuits). Make universities a free society. By their nature, they must enjoy more freedom, not less -- especially freedom of speech -- than the society outside.
2. I believe that this first change can eventually do the job, but meanwhile a university cannot be given the right to handle any allegations of dishonesty: a university "court" is never impartial.

Why don't we look at science-based issues of quality control in applied science (e.g. engineering bridges, drug manufacture, nuclear power) and see what they do that works? And then adapt or modify that to academic needs?

I was thinking along the same lines, and then realized why the status quo is the way it is...

Let's imagine an alternative basic-science reality: every finding is expected to be replicated twice; grants are meted out to accomplish that; journals, hiring committees, and tenure boards reward replication comparably to initial findings; unreplicated claims bring down some sort of sanction on the initial authors; and all findings must be published and agglomerated to avoid selection bias and multiple testing.

* Would you want to be a basic researcher in that system?

* Could you read a journal without sliding into a coma?

* Would it be more or less cost-effective than the existing system?

I think the answer is that the existing system, where most papers are mostly right, mostly, and anything important gets sorted out over time, is a much better way to go, given that we're not building nuclear reactors.