Basic concepts: scientific anti-norms (part 2).

Coming on the heels of my basic concepts post about the norms of science identified by sociologist Robert K. Merton [1], and a follow-up post on values from the larger society that compete with these norms, this post will examine norms that run counter to the ones Merton identified that seem to arise from within the scientific community. Specifically, I will discuss the findings of Melissa S. Anderson [2] from her research examining how committed university faculty and Ph.D. students are to Merton's norms and to the anti-norms -- and how this commitment compares to reported behavior.

You'll recall that Merton identified four central values, or norms, defining the tribe of science:

  • Universalism
  • "Communism"
  • Disinterestedness
  • Organized Skepticism

Anderson's starting point is drawn from the research of Ian I. Mitroff with Apollo moon scientists. In that research, Mitroff identified a parallel set of four "counternorms", which Anderson describes as follows:

  • Particularism: Scientists assess new knowledge and its applications based on the reputation and past productivity of the individual or research group.
  • Individualism: Scientists protect their newest findings to ensure priority in publishing, patenting, or applications.
  • Self-interestedness: Scientists compete with others in the same field for funding and recognition of their achievements.
  • Organized dogmatism: Scientists invest their careers in promoting their own most important findings, theories, or innovations. [3]

According to Anderson, Mitroff's research suggested that these counternorms could serve a useful purpose within scientific activity, especially in instances where scientists are grappling with problems that are not yet well defined. There is something quite reasonable about this suggestion. One of the big knocks against a Popperian view of scientific activity is that you need to give ideas a chance to develop, and to be made more precise, before you can test them in a meaningful way. In other words, if organized skepticism is your first move, it's unlikely that any ideas will survive their first articulation. Moreover, an initial narrowing of focus (rather than taking into account what everyone else in your tribe has to say) could also help scientists come up with good -- and ultimately, testable -- ways to understand phenomena and to frame questions about them.

While I'm happy to grant that there may be certain points in the creation and development of a line of research at which the counternorms are useful to the project of building a reliable body of scientific knowledge, I'm less sure that the knowledge-building aims are well served if these counternorms are dominant (with respect to Merton's norms) all the time. I'll have more to say on this below.

Rather than going into all the minutiae of Anderson's experimental design, statistical analyses, and so forth, I'm just going to give you the broad outlines and the big results. 4000 faculty and doctoral students (in the fields of chemistry, civil engineering, microbiology, and sociology at U.S. universities) were surveyed. Among the questions in the survey, respondents were asked to rate each of the norms and counternorms (which weren't labeled as such but instead were presented as statements) on the extent to which they felt it should represent the behavior of scientists, and on the extent to which they felt it actually did represent the typical behavior of their department's faculty. In other words, the survey measured both subscription to the norms and enactment of the norms.

Here's what Anderson found as far as the values the survey respondents endorsed:

[B]oth students and faculty subscribe much more strongly to the norms than to the counternorms. On scales of 0 to 8, the overall mean subscription to the norms is 7.06, whereas mean subscription to the counternorms is 4.04. Clearly, despite some controversy about the validity of the Mertonian norms as statements representing academic researchers' views on appropriate behavior, faculty and students in large part agree more strongly with the norms than with the alternative counternorms. It is also clear, however, that the counternorms generate at least modest support and cannot be ruled out as representing, at least in some contexts, behavior that academic researchers view as appropriate. [4]

These results don't really allow us to figure out whether respondents who subscribed to the counternorms had in mind that they were appropriate values in the sorts of situations the Mitroff research flagged. Maybe those subscribing to the counternorms felt they were the best values in circumstances where scientific problems aren't well defined, but these respondents might also have viewed the counternorms as appropriate in any phase of scientific activity. I'd also be curious as to whether respondents who subscribed to both the norms and the counternorms saw them as two sets of contradictory values.

Given this information about how respondents believe scientists should behave, what sorts of actual behavior do they see?

Faculty and students are more likely to see evidence of the enactment of the counternorms, as opposed to the norms, by faculty in their academic departments. The overall means for enactment of the norms and the counternorms are 4.69 and 5.94, respectively, on 0-to-8 scales. The norm-counternorm differences are not as pronounced here as in the case of subscription, suggesting that faculty and students see a mix of faculty behaviors that correspond to the norms and the counternorms.

It is interesting to compare respondents' mean scores for subscription to and enactment of norms, with a parallel comparison for the counternorms. Though the survey items were not grouped in any way that would suggest to respondents that certain items represented one normative system and others represented another, respondents rather consistently gave the norms higher subscription scores than enactment scores, while rating the counternorm items higher on enactment than on subscription. [5]

To the extent that the "subscription" data tells us something about the values the respondents think ought to be guiding behavior, this data suggests that respondents see the faculty in their departments falling short of living up to the Mertonian norms. Instead, more respondents identified behavior at odds with those norms and in line with the counternorms. Possibly this means that it's simply hard to live up to the ideals of the tribe of science. It might also mean that we're more likely to notice how others are behaving when they're doing something we think falls short of the ideals (or goes in the opposite direction from those ideals). I suppose it's also possible that respondents see their own departments as hives of cutting-edge scientific activity where the refinement of scientific problems warrants the enactment of the counternorms.

Two features of the experimental design here are interesting. First, the respondents were asked about faculty enactment of various values within their departments -- which is to say, student behavior wasn't in question. Presumably this is because, as new initiates to the tribe, students are still learning the rules, and thus their behavior might be less representative of the community than the behavior of the "grown-ups" in the tribe. It might also be that students don't necessarily participate in the full range of activities that would let them enact (or fail to enact) one set of norms or another. The other thing I find striking about how the questions were posed is that respondents weren't asked to single out their own behavior and indicate whether it enacted particular norms. The faculty respondents had the cover of characterizing the behavior of their department's faculty (to which they belong). Possibly this would make them more forthcoming in their responses, but I would be curious to see how their characterization of their own behavior compared to their characterization of their colleagues' behavior.

One more piece of the results that I found interesting:

It appears that faculty believe more strongly than students in the norms as standards for behavior and are more likely to see them represented in their colleagues' actions. Though the students' norm-subscription scores are not as high as the faculty members', they are nonetheless uniformly high, suggesting that even though they are not as convinced as faculty of the validity of the normative claims on academic researchers, they do see considerable merit in the norms. It is possible, of course, that students actually see less evidence of behavior that accords with the norms. Alternatively, students might be somewhat more reluctant than faculty to report that the general behavior in a department, which they may view as the province of faculty, lives up to their behavioral ideals. By the same reasoning, faculty may be more willing to portray their departments as meeting those high standards. [6]

The state of the tribe of science -- or even of one's department -- can look very different from different ends of a power differential. How do we view the community we want to join, or are on the verge of joining? How do we view the community of which we are a member with significant power (and thus, responsibility)? What's at stake in noticing behaviors that depart from the norms you endorse if you feel yourself powerless to intervene? What's at stake in noticing that behavior if noticing it makes it your problem to help police?

These are not abstract questions.

It's been noted before that how much a part of a community we feel ourselves to be can affect the extent to which we feel ourselves bound by its rules -- or even able to survive in that community if we play by the rules. While the Anderson study doesn't provide enough data to settle the question one way or the other, I'd be interested to know whether student respondents who saw the faculty in their departments enacting the counternorms over the norms were more inclined to believe that the counternorms are the path to scientific success, and that one doesn't have the luxury of enacting the norms before one has achieved a certain level of prestige and power within the community.

What if that belief is right?

Possibly the structural features shaping the way scientific activity is conducted -- what "counts" toward career advancement in academic or industry settings, how much funding is available and how decisions are made about how to distribute that funding, how scientific publishing operates, etc. -- seem to reward enactment of the counternorms and to punish the enactment of the norms. And, if this is the case, enactment of the counternorms would seem to be rewarded all the time, not just at the key junctures with ill-defined scientific problems where Mitroff thought the counternorms would be most helpful to the community.

This is where things get interesting, because it's quite possible that the strategy that makes the most sense for an individual trying to survive the harsh funding climate or competition for tenure or race for priority runs counter to what best contributes to building a body of reliable scientific knowledge.

Ultimately, that the norms Merton identified can be seen by scientists as rules which those in the tribe of science ought to follow reflects the scientists' assumption that they are members of a community with a shared goal. The set of values that defines that community is not taken by the scientists to be arbitrary. Rather, the norms are seen as a shared set of values whose enactment is what makes the knowledge-building project possible.

If structural conditions make the enactment of these values a bad move for individual scientists, can the knowledge-building project succeed? Does real commitment to that knowledge-building project, then, impose obligations on members of the tribe of science with respect to the shape of funding, publishing, and career reward systems?

[1] Robert K. Merton, "The Normative Structure of Science," in The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press (1979), 267-278.

[2] Melissa S. Anderson, "Normative Orientation of University Faculty and Doctoral Students," Science and Engineering Ethics, 2000: Vol. 6, No. 4, pp. 443-461.

[3] Anderson, p. 448.

[4] Anderson, pp. 450, 452.

[5] Anderson, p. 452.

[6] Anderson, p. 453.


David Hull's "Science as a Process" is essential reading for anybody interested in the social dynamics of science.

By bob koepp (not verified) on 25 Feb 2008

As you briefly allude, these counternorms are not really bad or maladaptive as long as they are enacted in moderation.

We are much more likely to listen to a scientific idea proposed by a Ph.D. holder than by someone who has never entered a university, for instance. Sure, we _might_ miss a great idea that way. But history and the piles of crank letters at any larger department strongly suggest otherwise. If we really did evaluate _every_ idea on its face, without considering the sender, nobody would get any research done, being too busy reading through never-ending piles of pure crankery.

And we do need to protect our ideas until publication -- that is how the system is set up. First to publish wins, so you make sure you are first. And that is fine; no research field ever died on the vine because some result was held up for six months or a year in the publication queue. The important thing is that the results do get published and spread publicly, not that it happens the very instant they're created.

The last two "counternorms" are, when taken in moderation, an unavoidable fact no matter what the field. We _are_ in competition for positions, research grants, and other resources. Just like anybody in any industry. We aren't disinterested Nightingales working for free -- most of us can't afford it. We like to have somewhere to sleep, something to eat, clothes for us and our children. And yes, we want to promote our ideas, both because successful promotion will give us the aforementioned food and shelter, and because we normally really do believe they are good ideas -- if we didn't, we wouldn't have spent the energy to pursue them in the first place.

This has been a very interesting and thought-provoking series.

One facet of the Anderson work that is not reported is whether or not there were any differences in responses among the four fields surveyed. If there were differences, it would be interesting and perhaps informative to know what they were.

This is where things get interesting, because it's quite possible that the strategy that makes the most sense for an individual trying to survive the harsh funding climate or competition for tenure or race for priority runs counter to what best contributes to building a body of reliable scientific knowledge.

I couldn't agree more. There is a difference between the way the counternorms are used in the advancement of science and the way they can be used to advance careers.
Personally, I feel trapped in a prisoner's dilemma, where exploiting the extreme forms of the counternorms (in particular individualism and self-interestedness) might make all the difference needed to get one of those few tenure-track jobs, yet would be unscientific. Some people engage in gamesmanship without conscience, while others would rather die than violate a perceived-but-unwritten rule. And yes, feeling like you can't win by playing by the rules does diminish one's sense of community and trust.