Injustice, misbehavior, and the scientist's social identity (part 2).

Last week, we started digging into a paper by Brian C. Martinson, Melissa S. Anderson, A. Lauren Crain, and Raymond De Vries, "Scientists' Perceptions of Organizational Justice and Self-Reported Misbehaviors". [1] The study reported in the paper was aimed at exploring the connections between academic scientists' perceptions of injustice (both distributive and procedural) and those scientists engaging in scientific misbehavior. In particular, the researchers were interested in whether differences would emerge between scientists with fragile social identities within the tribe of academic science and those with more secure social identities. At the outset, the researchers expected that scientists at early career stages and female scientists in male-dominated fields would be the most likely to have fragile social identities. They hypothesized that perceptions of injustice would increase the likelihood of misbehaving, and that this link would be even greater among early-career scientists and female scientists.

We started with a post walking through the methodology of the study. In this post, we'll examine the results Martinson et al. reported. Part 3 will then consider what conclusions we might draw from these findings.

First, how much injustice did the study participants report?

Both early-career and mid-career scientists in the study reported relatively low levels of distributive injustice, while they reported significant procedural injustice (averaging 65 on a 0-100 scale where 0 would be perfectly just). You'll recall that distributive justice relates to the fair allotment of resources like grant money, lab space, graduate students, and the like, while procedural justice has to do with whether your scientific results and your job performance are evaluated fairly.

In addition, when they ran the statistics on their data, the researchers found that:

(1) those perceiving procedural injustice are somewhat more likely than others to also report perceiving distributive injustice, and (2) higher reports of intrinsic drive are positively correlated with perceptions of both procedural and distributive injustice. (59)

This first item could mean that those who are aware of procedural injustice are more likely to notice distributive injustice that might fly under the radar of scientists who feel like they're getting a fair shake from peer reviewers and their departments and universities. Or, it might mean that situations in which there is procedural injustice are more likely to have distributive injustice, too -- sort of an unholy package deal.

The second item has to do with intrinsic drive, the extent to which a scientist is responding to internal factors rather than merely external ones in tackling her tasks as an academic scientist. (You'll recall from the earlier post that I'm betting that the role of intrinsic drive in individual scientists and in the workings of scientific communities is complicated.) To the extent that procedural and distributive injustice are external obstacles to your goals as a scientist, if you are highly driven by internal factors, you are going to notice those external roadblocks.

Remember that the scientists in the study were asked to respond to a survey, identifying which of 33 different behaviors they had engaged in. (These behaviors were identified in another study, which I discussed here, in which focus groups made up of scientists talked about behaviors they had observed in their scientific communities and that they felt threatened the integrity of science.) Ten of those behaviors were ranked (by institutional compliance offices) as most likely to result in sanctions if a scientist were caught engaging in them:

  1. Falsifying or "cooking" research data
  2. Ignoring major aspects of human-subjects requirements
  3. Not properly disclosing involvement in firms whose products are based on one's own research
  4. Relationships with students, research subjects or clients that may be interpreted as questionable
  5. Using another's ideas without obtaining permission or giving due credit
  6. Unauthorized use of confidential information in connection with one's own research
  7. Failing to present data that contradict one's own previous research
  8. Circumventing certain minor aspects of human-subjects requirements (e.g. related to informed consent, confidentiality, etc.)
  9. Overlooking others' use of flawed data or questionable interpretation of data
  10. Changing the design, methodology or results of a study in response to pressure from a funding source

You'll be relieved to know that the scientists who were the subjects of this research did not report engaging in these behaviors in terribly high numbers. The top three on the list were each reported by 0.5% or less of the participants, and only when we get to #7 on the list do we hit numbers larger than 5%. For early-career scientists, the most popular misbehavior was #9 at 12.8%, while for mid-career scientists it was #10 at 20.6%.

Martinson et al. dug into the data to identify the statistical correlations:

Engagement in one of the top-ten misbehaviors is positively correlated with procedural injustice and intrinsic drive, but its correlation with distributive injustice is not statistically significant. The misbehavior measure is negatively correlated with being an early career scientist and being female. (59)

In other words, if you have a high intrinsic drive and perceive a high level of procedural injustice, you're more likely also to be engaging in one of these ten misbehaviors. But, interestingly, early-career scientists and female scientists were less likely to be engaging in these misbehaviors. If, as Martinson et al. had expected, these scientists were more likely to have fragile social identities as scientists, you'd expect that they'd be engaging in more misbehavior.

Actually, the hypothesis was a bit more complicated than that. The idea was that, when faced with injustice, scientists with fragile social identities would be more likely to engage in misbehavior. This means we need to look at how perceptions of injustice varied by career stage and gender. The data showed that early-career scientists and women reported higher levels of procedural justice than did mid-career scientists who were not female. They also showed that mid-career scientists reported higher levels of distributive justice than did early-career scientists and women.

The mid-career group also showed higher intrinsic drive.

Procedural injustice is significantly and positively associated with engagement in misbehavior, but the main effect of distributive injustice is not significant. Consistent with the bivariate correlations, the multivariate estimates show that early-career and female scientists are less likely to engage in misbehavior. The field-of-study indicators are significant as a group, with social sciences showing a strong, positive association with misbehavior. (59)

In other words, the data seem to show that the climate and culture of a particular scientific discipline may make a difference in terms of how the academic scientists within that discipline will behave. Undoubtedly, the climate and culture of a particular university, department, or even research laboratory could make a similar difference.

For the scientists in the study overall, distributive injustice didn't seem to make much of a difference to the likelihood of engaging in scientific misbehavior. This suggests that distributive injustice, while annoying, is perceived as less of a threat to one's scientific goals, not the sort of threat that calls for an extreme response.

On the other hand, the data show that procedural injustice increases misbehavior. Perhaps if others aren't playing by the rules -- indeed, if they're departing from them in a way that hurts you as an academic scientist -- it seems foolish to hold yourself to playing by those rules.

The mid-career scientists are clearly more likely to engage in misbehavior, regardless of their perceptions of distributive injustice, but the association between distributive injustice and misbehavior is more strongly positive among early career (.243) than among mid career (.036) scientists, a finding that lends support to our third hypothesis. We did not observe interactions to support our expectation of differential associations between injustice perceptions and misbehavior among women in traditionally male dominated fields. (59)

To my eye, these results might create a problem for the hunch that early-career scientists have more fragile social identities than do mid-career scientists. That positive correlation between perceiving distributive injustice and misbehaving in the early-career group is pretty small. You'll also notice that being female doesn't increase the chances of misbehavior.
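For readers who want to see what an interaction term like the ones quoted above actually does, here's a minimal sketch in Python. The numbers are made up to mirror the shape of the coefficients reported in the paper (.243 vs. .036), not the study's actual data, and the variable names are my own invention: the point is just that a regression with an interaction term lets the injustice slope differ by career stage.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated (entirely made-up) data: an injustice score and a career-stage flag.
injustice = rng.uniform(0, 1, n)
early = rng.integers(0, 2, n)  # 1 = early-career, 0 = mid-career

# True model: the injustice slope is 0.243 for early-career scientists and
# 0.036 for mid-career ones, echoing the shape of the quoted coefficients.
slope = np.where(early == 1, 0.243, 0.036)
misbehavior = 0.1 + slope * injustice + rng.normal(0, 0.05, n)

# Fit y = b0 + b1*injustice + b2*early + b3*(injustice*early) by least squares.
X = np.column_stack([np.ones(n), injustice, early, injustice * early])
b, *_ = np.linalg.lstsq(X, misbehavior, rcond=None)

# b[1] recovers the mid-career slope (near 0.036); b[1] + b[3] recovers the
# early-career slope (near 0.243).
print(b[1], b[1] + b[3])
```

The interaction coefficient b[3] is the difference between the two slopes; a near-zero b[3] would mean career stage doesn't modify the association between injustice and misbehavior.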

The data do show that misbehavior is most likely from mid-career scientists with high intrinsic drive who perceive a high level of procedural injustice.

An interaction between procedural and distributive injustice showed that the positive relationship between procedural injustice and misbehavior was strongest among scientists who perceived the least distributive injustice and weakest among those who perceived the most distributive injustice. Though statistically significant, this effect is not large. Second, while distributive injustice was not related to misbehavior among scientists in most fields, those in the social science and other/unknown groups who reported more distributive injustice were less likely to report misbehavior. Finally, a significant interaction between gender and "never partnered" shows that the negative association of female status with misbehavior is not simply a gender effect. In fact, women who have never been partnered are even more unlikely than other women to report misbehavior while men who have never been partnered are more likely than other men to do so. (60)

Perceptions of distributive injustice just didn't have the effect the researchers expected. Indeed, among the social scientists in the study, the greater the distributive injustice, the less likely the scientists were to report misbehaving. I'm curious as to what could explain that correlation. Maybe social scientists are better at gaming survey instruments (or are more on guard against potential violations of the confidentiality of their responses).

It's also interesting that being partnered or unpartnered seems to have different effects on male and female scientists with respect to misbehavior. The never-partnered women report less misbehavior than the partnered women (where women overall report less misbehavior than men). And, the never-partnered men report more misbehavior than the partnered men.

I'm sure someone somewhere is going to draw some conclusion about the civilizing influence that marriage exerts on men. It will not be me.

While distributive injustice was less strongly correlated with misbehavior than procedural injustice, in the early-career group scientists were more at risk for misbehavior if they perceived distributive injustice than if they did not. Martinson et al. interpret this as providing some support for their third hypothesis:

Hypothesis 3: Perceptions of injustice are more strongly associated with misbehavior among those for whom the injustice represents a more serious threat to social identity (e.g., early-career scientists, female scientists in traditionally male fields). (52)

This assumes that early-career scientists have more fragile social identities. But what, then, explains all those misbehaving mid-career scientists?

Maybe there are some kinds of injustice that encourage misbehavior regardless of the strength of your social identity.

The three-way interaction involving intrinsic drive, procedural injustice, and career stage provides a demonstration of strain theory's anticipation of misbehavior as a function of situational provocation combined with individual disposition. The critical factor is intrinsic drive. Scientists who are personally driven to achieve may be particularly sensitive to violations of procedural justice, especially if these violations are seen as hindering or thwarting their career success. Furthermore, it appears likely that selection processes in science favor those exhibiting a high degree of personal drive leading to an overall higher level of intrinsic drive among those who reach mid-career status. This selection may thus increase, within the mid-career ranks, sensitivity to perceptions of procedural injustice, and thereby the likelihood of misbehavior. (61)

That the selection process operating within science favors those with high intrinsic drive could be a good thing or a bad thing, depending on what the intrinsic drive is driving toward (adding to the body of reliable scientific knowledge or achieving individual success at any cost, for example).

Also, the link between intrinsic drive and awareness of procedural injustice raises (for me, anyway) some worthwhile questions about instances where early-career scientists notice, and call out, procedural injustice while mid- and late-career scientists seem blind to it. (Aetogate, anyone?) Maybe the question is how attuned scientists are to procedural injustice in general versus procedural injustices that directly impact them individually.

Martinson et al. then consider the ways their findings might be misleading:

The contribution of our research must be considered in light of some limitations of the study. First, the cross-sectional nature of our data may understate the strength of associations if there are time-lags in how the studied variables operate with respect to one another. Second, the collection of both predictor and outcome data using self-report requires caution. There is the potential for bias as a result of common variance attributable to the reporter on both sides of the model -- that is, our measures of organizational injustice rely on reports from the same people reporting on their behaviors in that environment. Furthermore, these data allow us to determine only that there are significant associations between the behaviors of interest and perceived working conditions, without illuminating causality. Thus, it would be possible to observe the patterns we have found if, for instance, those who have misbehaved were not truly the victims of injustice, but merely rationalize or excuse their misbehavior through reference to unfair treatment. Additionally, as in any study relying on self-report of misbehavior, there is likely to be an under-reporting bias in our data. This bias may be particularly pronounced for the more serious or legally sanctionable behaviors. We note, however, that any under-reporting would make our estimates of associations conservative. Related to this issue, interpretation of the differences in levels of reported behaviors between the early- and mid-career samples must be undertaken cautiously. Some of these differences may be explainable by reference to the fact that not all individuals had equal opportunity or exposure to engage in all of the behaviors about which we asked (e.g., only that subset of scientists engaged in conducting human-based research would have had opportunity to ignore major or minor details of human-subjects requirements). 
Finally, some of these differences may be attributable to differential under-reporting on the part of early-career scientists who, because of their more tenuous career standing, may be more hesitant than more well-established scientists to admit wrongdoing. (61-62)

Moving from correlation to causation always has the potential to get you in trouble, and getting the causal arrows in the right direction can be tricky, too. Does the perception of injustice precede misbehavior? Or do scientists misbehave and then start noticing injustices, almost as a way to rationalize their misbehavior?

Answering these kinds of questions may require further studies with experimental designs clever enough to distinguish among the live possibilities left open at the conclusion of this study.

The researchers note that scientists might well be under-reporting their own misbehavior, meaning that the actual associations could be even stronger than the ones seen in these data. They don't really consider the possibility of subjects over-reporting their misbehavior. Of course, I can't really think of a good reason a scientist might over-report misbehavior.

They also suggest that early-career scientists might be more likely to under-report their misbehavior than are mid-career scientists. But I wonder whether there might not also be under-reporting by mid-career scientists who have developed a habit of misbehavior but are able to kid themselves about it. Since the survey didn't explicitly identify these behaviors as misbehaviors, mid-career scientists might just view what they do as normal behavior for the working scientist.

By that reasoning, though, why would early-career scientists be more likely to under-report their misbehavior? (Are they just more worried about messing up in general?)

These are the results of the study. We'll look at what conclusions we can draw from them in part 3. Stay tuned.
[1] Brian C. Martinson, Melissa S. Anderson, A. Lauren Crain, and Raymond De Vries (2006) "Scientists' Perceptions of Organizational Justice and Self-Reported Misbehaviors," Journal of Empirical Research on Human Research Ethics 1(1): 51-66.


I find the differences in the types of misbehavior at the different career stages to make a lot of sense. That info matches well with what I've seen. A lot of post-docs are in a place where they feel pressured to not point out problems in their mentor's work (or the work of colleagues-favored-by-mentors)... a lot of assistant profs are under tremendous pressure to get funding and so would not necessarily stand up to experimental design requests from a funding source.

I seriously question the author's assumption that being in an early career stage or female puts you in a position where your social identity as a scientist is more fragile (I wonder about the interference from those two factors too... I have the vague impression it's harder to be female later in your career than earlier).
However, if there is sometimes such a correlation, then I wonder if never-partnered women feel more secure in their scientific identities, and previously-partnered men feel more secure in their scientific identities.

The second item has to do with intrinsic drive, the extent to which a scientist is responding to internal factors rather than merely external ones in tackling her tasks as an academic scientist. (You'll recall from the earlier post that I'm betting that the role of intrinsic drive in individual scientists and in the workings of scientific communities is complicated.) To the extent that procedural and distributive injustice are external obstacles to your goals as a scientist, if you are highly driven by internal factors, you are going to notice those external roadblocks.

I am very skeptical that this distinction between intrinsic and extrinsic drive actually has any real meaning.

Overlooking others' use of flawed data or questionable interpretation of data

Does this mean not pointing out a problem to the PI when you notice it? Or does it include deciding not to end your career over it when the PI ignores you?

Tongzhi PP-
Is that because
A) Intrinsic motivation is all that matters and people are naturally curious and productive
B) We're so socialized that the things that makes us proud, or give us warm-fuzzies, or whatever, are really just extensions of the things other people think we should do?
If you are truly a Comrade, there is only one correct answer.