Injustice, misbehavior, and the scientist's social identity (part 1).

Regular readers know that I frequently blog about cases of scientific misconduct or misbehavior. A lot of times, discussions about problematic scientific behavior are framed in terms of interactions between individual scientists -- and in particular, of what an individual scientist thinks she does or does not owe another individual scientist in terms of honesty and fairness.

In fact, the scientists in the situations we discuss might also conceive of themselves as responding not to other individuals so much as to "the system". Unlike a flesh and blood colleague, "the system" is faceless, impersonal. "The system" is what you have to work within -- or around.

Could scientists feel the same sort of loyalty or accountability to "the system" as they do to other individual scientists? How do scientists' views of the fairness or unfairness of "the system" impact how they will behave toward it?

It is this last question that is the focus of a piece of research reported by Brian C. Martinson, Melissa S. Anderson, A. Lauren Crain, and Raymond De Vries in the paper "Scientists' Perceptions of Organizational Justice and Self-Reported Misbehaviors". [1] Focusing specifically on the world of the academic scientist, they ask: if you feel like the system won't give you a fair break, is your behavior within it more likely to drift into misbehavior? Their findings suggest that the answer to this question is "yes":

Our findings indicate that when scientists believe they are being treated unfairly they are more likely to behave in ways that compromise the integrity of science. Perceived violations of distributive and procedural justice were positively associated with self-reports of misbehavior among scientists. (51)

In their study, Martinson et al. spell out the kinds of fairness and unfairness that may be important in shaping the behaviors of scientists, and consider how these may be linked to a "me against the system" mentality or, alternatively, to a feeling of being a part of a functioning professional or intellectual community:

When people regard the distribution of resources within an organization -- and the decision process underlying that distribution -- as fair, their confidence in the organization is likely to be bolstered. When they believe either the distribution or the procedures for distribution to be unfair, however, they may take actions to compensate for the perceived unfairness. Furthermore, current work reported in the justice literature suggests that social identity plays a crucial role in how people respond behaviorally to perceptions of justice or fairness. Perceptions of injustice may threaten one's feelings of identification or standing within a group, a threat that may prompt compensatory behavior to protect or enhance one's group membership or reputation. (51)

I'm going to break up my discussion of this paper into smaller bites. This first post will concentrate on the methodology of the study. It will be followed by a second post discussing the findings of the research and a third post considering what conclusions we might draw from these findings.

The study was designed to test three hypotheses:

Hypothesis 1: The greater the perceived distributive injustice in science, the greater the likelihood of a scientist engaging in misbehavior. (51)

Hypothesis 2: The greater the perceived procedural injustice in science, the greater the likelihood of a scientist engaging in misbehavior. (52)

Hypothesis 3: Perceptions of injustice are more strongly associated with misbehavior among those for whom the injustice represents a more serious threat to social identity (e.g., early-career scientists, female scientists in traditionally male fields). (52)

Let's pause a moment.

What is meant by distributive justice here? Consider the professional life of an academic scientist. It includes certain resources (research grants, start-up funds, lab space, graduate students, etc.) and responsibilities (like teaching, serving on committees, reviewing the manuscripts and grant proposals of other scientists in your field, etc.). Distributive justice has to do with how fair or unfair the allotment of these resources and responsibilities is.

How about procedural justice? For the academic scientist, this will encompass whether one's work and results are fairly evaluated. Getting a fair hearing (or not) happens on a number of different levels: when peer reviewers render official decisions on your submitted journal articles or grant proposals, when your department and your university evaluate your job performance and decide whether to give you tenure or promote you, when members of your institution and your intellectual community show you respect. It's worth noting that academic scientists are operating simultaneously in multiple communities and organizations, and their view of the level of procedural justice in each one may shape their overall perspective.

There is a further complication to consider. The academic scientists moving through their departments, their universities, and their discipline-centered professional communities frequently see science not just as what they do but as centrally connected to who they are. This raises the stakes for the scientist trying to navigate these communities. Here, Martinson et al. connect the first two hypotheses with the third:

Perceptions of justice provide a sense of security in one's membership or standing in a group. Violations of justice principles introduce vulnerability to one's social identity, which may in turn increase the likelihood of engaging in harmful or unsanctioned behavior. This process may be especially pronounced for those whose social identity is already more fragile or uncertain. We expect that scientists with less well-established reputations in their field will have more fragile identities than their more established counterparts. ...

As academic science has become increasingly dependent on external resources, whether from government or private industry, the culture of science has changed, and with it, what it means to be an academic scientist. ... Not only is science still widely regarded as a vocation, but academic scientists invest substantial resources (e.g., time, money, and labor) and incur substantial opportunity costs in preparing themselves for their careers. These factors create a strong identification as a scientist, make it difficult for scientists to create and inhabit alternative identities, and increase the distress associated with threats to that identity. (52)

Just as you'd expect, Martinson et al. draw on previous work reported in the literature. They note, however, how this previous work may not fully capture the situation the current study is trying to understand (which is part of why it made sense to do the current study). In particular, in considering the literature on what motivates individual deviance within a social network, they point out ways this prior work may not be an accurate description of the situation of the academic scientist:

Agnew emphasizes the motivation for deviance: he posits that strain, resulting from negative social relationships, produces negative affect in the individual (e.g., fear, anger, frustration, alienation) which, in turn, leads to pressure on the individual to "correct" the situation and reduce such affect, with one possible coping response being deviant behavior ... deviant behavior is not an inevitable outcome of such pressures, and ... the likelihood of deviance is a function of individual traits and dispositions as well as contextual factors. ...

Agnew further asserts that strong social support networks are expected to decrease the likelihood of deviant responses to strain, while having many deviant peers is expected to increase the likelihood of deviant responses. ... The unavailability of legitimate coping responses increases the potential for deviant behavior. Many of the coping strategies identified by Agnew hinge on the individuals' minimizing, de-emphasizing, or detaching from the situation causing them strain. We argue that the legitimate coping responses to strain enumerated by Agnew are less readily available to academic scientists, because of the centrality of their work roles to their individual identities; thus, for scientists, misbehavior is a more likely response to strain. (53)

Martinson et al. don't think Agnew's account gets to the heart of the matter here: How do you react if you perceive you are not getting your fair share of the resources necessary to do science, or if you feel the procedures are not being applied fairly to you? The injustice you perceive is not just making it harder for you to do your job as an academic scientist, but threatening who you are as a scientist. What are your options for remedying the unfairness? What affects your perceptions of the options available to you?

If scientists believe that unjust behavior results in an unfair distribution of resources, individuals may resort to illegitimate, or unsanctioned responses to compensate for these injustices. Moreover, we believe that behavioral responses to organizational injustice are moderated by the degree to which the injustice threatens one's identity as a scientist, with deviant responses being more likely among scientists whose identities are less established and secure. (54)

Now we move into some of the details of how the study collected the data necessary to assess the three hypotheses.

Martinson et al. set up a study in which they surveyed academic scientists just starting their scientific careers and academic scientists whose careers had been under way for a while. The former can be thought of as the newbies trying to find their place in their professional, intellectual, and institutional communities. The latter have had some experience establishing themselves within these communities. Here are the specifics of how the study participants were selected:

Using databases maintained by the NIH Office of Extramural Research we created a sample of mid-career researchers -- 3,600 scientists who had received their first research grant (R01) awards from NIH sometime during the period 1999-2001 -- and a sample of recently graduated students -- 4,160 NIH-supported, postdoctoral trainees who had received either institutional (T32) or individual (F32) postdoctoral training support during 2000 or 2001. In the fall of 2002 we mailed our survey to these two random samples. ...

From the sample of 3,600 mid-career scientists, 191 surveys were returned as undeliverable. For the purposes of calculating response rates, we removed these individuals from the denominator. We received 1,768 completed surveys from this sample (response rate = 52%). From the sample of 4,160 early-career scientists, 685 surveys were returned as undeliverable and 1,479 completed surveys were received (response rate = 43%). It is possible that many of the non-responses were due to incorrect or incomplete addresses in the NIH database, although the extent of this problem is unknown. For the vast majority of scientists, the addressing information available from NIH was not that of their home department, but that of their institution's grant office. To the extent that grant offices did not forward our surveys to the appropriate departments, surveys never reached their intended recipients. (54)
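Just to make those response rates concrete, here is the arithmetic as I read it (my back-of-the-envelope check, not a calculation reported in the paper):

```python
# Response rate = completed surveys / (surveys mailed - undeliverable surveys)
mid_career_rate = 1768 / (3600 - 191)    # ~0.519, reported as 52%
early_career_rate = 1479 / (4160 - 685)  # ~0.426, reported as 43%
print(f"mid-career: {mid_career_rate:.0%}, early-career: {early_career_rate:.0%}")
```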

We've established who was being studied. Next, what kind of information did the researchers try to get from study participants?

First, they tried to gauge the subjects' perceptions of distributive justice within their work and professional setting:

One of our primary predictive measures is the 23-item short form of the Effort-Reward Imbalance (ERI) questionnaire. This measure corresponds directly to the concept of distributive justice as originally proposed by Leventhal ... Subscales derived from the extrinsic effort items (e.g., "Over the past years, my job has been more and more demanding," "I am often pressured to work more hours than reasonable") and extrinsic reward items ("I receive the respect I deserve from my colleagues," "Considering all my efforts and achievements, I receive the respect and prestige I deserve at work") were used to compute a ratio of perceived effort (E) required at work to perceived extrinsic rewards (R) received, or E/R. Respondents rated sub-scale items on a 1-to-5 scale with higher scores indicating greater effort and reward, respectively. Distributive injustice was calculated as the ratio of extrinsic effort to reward (with a correction factor to compensate for the different number of items in the two scales), so that higher values represented greater perceived distributive injustice. (54-55)
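The paper doesn't reproduce the scoring formula itself, but in the usual scoring of the ERI short form the correction factor is the ratio of the number of effort items to the number of reward items (commonly six effort and eleven reward items, which with six overcommitment items gives the 23-item short form). Here's a minimal sketch under that assumption; the respondent's item values are made up for illustration:

```python
def eri_ratio(effort_items, reward_items):
    """Effort/Reward Imbalance ratio with an item-count correction.

    effort_items and reward_items are lists of 1-to-5 Likert responses,
    coded so that higher values mean greater perceived effort and greater
    perceived reward, respectively. Values above 1 indicate effort
    outweighing reward, i.e. greater perceived distributive injustice.
    """
    effort = sum(effort_items)
    reward = sum(reward_items)
    # Correction for the unequal number of items in the two subscales;
    # this is the usual ERI convention -- the paper does not spell out
    # its exact factor.
    c = len(effort_items) / len(reward_items)
    return effort / (reward * c)

# Hypothetical respondent: six effort items, eleven reward items
print(eri_ratio([4, 5, 4, 3, 5, 4], [3, 2, 3, 3, 2, 3, 2, 3, 3, 2, 3]))  # ~1.58
```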

They also attempted to gauge subjects' perceptions of procedural justice:

We chose the Ladd and Lipset six-item alienation scale over more standard measures of procedural justice. The Ladd and Lipset measure is appropriate in this context because it is directed specifically to academic scientists and the organizational structures within which they work. Moreover, it covers five of the six core components of procedural justice originally proposed by Leventhal, addressing bias suppression, correctability, information accuracy, representation, and ethicality, primarily within the context of the peer-review systems of science (e.g., "The top people in my field are successful because they are more effective at 'working the system' than others," "The 'peer review' system of evaluating proposals for research grants is, by and large, unfair; it greatly favors members of the 'old boy network.'"). The scale has been used elsewhere to examine associations among alienation, deviance, and scientists' perceptions of rewards and career success. Respondents rated their agreement with each item on a 1 to 5 scale. (55)

These survey instruments yield information relevant to the first two hypotheses.

Things could be complicated by other factors, however. Some of the participant responses to these surveys might be driven not just by the external influence of colleagues and working conditions, but also by internal features of the subject. In particular, the subject's own attitude toward various aspects of the job of being a scientist may make a significant difference. To take this into account, the researchers tried to measure subjects' intrinsic drive:

the effort that one exerts in work may be a function of intrinsic drive as well as extrinsic motivators. To the extent that one's work effort is driven by intrinsic factors as well as extrinsic factors, we anticipate that high intrinsic drive alone may not always be problematic, but when paired with perceptions of effort/reward imbalance or procedural injustice might well motivate misbehavior. Within the framework of general strain theory, high intrinsic drive can be viewed as a characteristic that could predispose an individual to be more easily provoked to misbehavior in the face of unfair treatment. Intrinsic drive is measured as a sum of respondents' agreement with six items (e.g., "Work rarely lets me go; it is still on my mind when I go to bed," "People close to me say I sacrifice too much for my job."), rated on a 1-to-4 scale and coded so that higher values indicate more intrinsic drive. (55-56)

It strikes me that teasing out the precise role intrinsic drive plays in the academic scientist may be brutally hard. Two scientists might score equally high on the intrinsic drive scale, one of them because science is what she loves, the other because she is the kind of person who would be highly driven from within in whatever job she held.

Somewhat nebulous (to me anyway) is the target of the intrinsic drive -- what exactly is it a drive toward? I think this could make a difference as far as the sorts of behaviors that a high intrinsic drive might prompt.

Intrinsic drive may make you very attuned to the external obstacles in your path, but that in itself doesn't determine how you will respond to those obstacles. If the drive is primarily directed at doing a job well (say, producing a rigorous scientific result), the response may well be different than if the drive is primarily directed at winning a competition between many individual scientists. In the latter situation, the academic scientific equivalent of throwing an elbow may seem like a reasonable strategy. In the former, it might not.

The next piece of the methodology to look at is how Martinson et al. identify the scientists for whom injustice "represents a more serious threat to social identity". Their hunch is that this group will include newbies more than mid-career academic scientists and women more than men (owing to women being traditionally under-represented in scientific fields):

Scientists may perceive themselves as relatively peripheral to their scientific community by virtue of their career stage, field of study, sex or a combination of these factors. ... The sampling frames used provide a marker of career stage (early-career versus mid-career). Data on field of study were collected through an open-ended question (i.e., "In what specific disciplinary field did you earn your highest degree?") and coded into the broad categories of biology, chemistry, medicine, social science, physics/math/engineering and miscellaneous or unknown. ...

Given a relative lack of theoretical guidance about what personal characteristics might operate as potential confounders or modifiers of the associations of interest, it seemed reasonable to adjust our predictive models for variables such as sex and marital status. (56)

A woman in academic science may experience her social identity as an academic scientist differently than her male colleagues do, at least in part because society still seems to expect women to define themselves in terms of their families rather than their careers. If the very act of becoming a scientist requires a sacrifice of the familial sphere, this may raise the stakes for preserving one's social identity as a scientist. To the extent that women in science seem more likely to have to put off (or opt out of) marriage and child rearing than their male colleagues, we might expect a corresponding sex difference in the strength and vulnerability of social identity.

These sex differences in family formations can be described as representing opportunity costs that are differentially higher for female than male scientists. Either interpretation would lead us to expect female scientists to be more acutely attuned than male scientists to violations of justice in their work roles. The lack of theoretical expectations with respect to age effects on misbehavior, combined with a correlation of 0.56 in our sample between age and career stage (a theoretically relevant factor), led us to omit age from our multivariate models. (56)

Methodologically, it would be nice if there were a cleaner, more direct way to measure the strength and vulnerability of social identity. However, some important variables worth following are hard to measure. You do what you can.

Finally, the researchers needed to measure the extent to which study participants engaged in scientific misbehaviors. To do this, they constructed a survey:

... we developed a list of 33 problematic behaviors that were included in the survey. These range from the relatively innocuous (e.g., signing a form, letter, or report without reading it completely), to questionable research practices, outright misbehaviors, and misconduct as formally defined by the U.S. federal government, including data falsification and plagiarism. Survey respondents were asked to report whether they themselves had engaged in any of the specified behaviors during the past three years. We did not attempt to assess frequency as we doubted that most people could report this accurately. Even though the survey was designed and administered to ensure respondent anonymity absolutely and transparently, we suspect the residual fear of discovery led to some under-reporting, particularly for the most serious misbehaviors. Psychological denial about misbehavior is another reason to suspect some under-reporting. (56-57)

The likelihood that study participants underreported their own misbehavior is something we'll discuss in the upcoming post looking at the findings of this study.

Not all of the 33 behaviors the subjects were asked about rise to the level of misconduct as formally defined (fabrication, falsification, and plagiarism). The seriousness of the various behaviors was "graded" by institutional compliance officers:

We asked the compliance officials to assess the likelihood that each behavior, if discovered, would get a scientist in trouble at the university or federal level (from 0, unlikely, to 2, very likely). The 10 items that received the top scores by this assessment each received scores of 2 from at least 4 of the 6 compliance officers and no scores below 1. Having identified the top-ten misbehaviors as judged by these independent observers, we constructed a binary dependent variable, coded 1 if an individual responded "yes" to one or more of these top-ten misbehavior items, 0 otherwise. If no response was received for any of the ten items (N=120) this indicator was coded as missing and the observation excluded from the analysis. (57)
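For concreteness, here is a sketch of how that coding could be implemented (the data layout and column names are hypothetical; the selection rule and the binary/missing coding follow the description quoted above, on my reading that an observation is coded missing when none of the ten items was answered):

```python
import numpy as np
import pandas as pd

def select_top_items(officer_scores):
    """officer_scores maps each behavior to the six compliance officers'
    0/1/2 ratings. Keep behaviors rated 2 by at least four officers and
    never rated below 1."""
    return [
        behavior for behavior, scores in officer_scores.items()
        if sum(score == 2 for score in scores) >= 4 and min(scores) >= 1
    ]

def code_misbehavior(responses, top_items):
    """responses is a DataFrame with one column per behavior: 1 = yes,
    0 = no, NaN = unanswered. Returns 1.0 if the respondent admitted to
    any of the top-ten items, 0.0 otherwise, and NaN (i.e., excluded)
    if none of the ten items was answered."""
    top = responses[top_items]
    answered_any = top.notna().any(axis=1)
    admitted_any = top.eq(1).any(axis=1).astype(float)
    return admitted_any.where(answered_any, np.nan)
```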

In other words, the dependent variable tracked whether subjects identified themselves as engaging in serious crimes against science, as opposed to mere misdemeanors.

Among the scientists who actually completed and returned the surveys, what kind of sample did Martinson et al. have to work with?

The two groups [early-career and mid-career] are roughly one decade apart in mean age. Women make up only about a third of the mid-career sample, but over half of the early-career group. Ten percent of the mid-career sample has never been married or cohabited with a partner, with twice the proportion of early-career scientists falling into that category. There is little overlap between the two samples in terms of academic rank: the mid-career sample is concentrated at the assistant- and associate-professor levels, while the early-career sample is concentrated at entry-level positions with more than half in postdoctoral positions. The respondents are spread across NIH-funded fields of study, though approximately 40% of each group are in medical fields. (57-58)

What kinds of perceptions of justice did these subjects report? Were the early-career academic scientists more likely to respond to perceived injustice by engaging in misbehavior as a survival strategy?

Tune in for the upcoming post on the findings Martinson et al. report to see.

______
[1] Brian C. Martinson, Melissa S. Anderson, A. Lauren Crain, and Raymond De Vries (2006) "Scientists' Perceptions of Organizational Justice and Self-Reported Misbehaviors." Journal of Empirical Research on Human Research Ethics 1(1), 51-66.
