Adventures in Ethics and Science

Fuller on Mooney on science.

By now you have seen the excellent Crooked Timber seminar on Chris Mooney’s book, The Republican War on Science. In addition to the CT regulars, sociologist of science (and Kitzmiller v. Dover expert witness) Steve Fuller contributed an essay to the seminar. While some in these parts have dismissed it rather quickly, I want to give it a slightly less hasty response.

At the outset, let me say that I’m not going to respond to all of Fuller’s claims in the essay. For one thing, it’s long; the printout (yes, I’m a Luddite), not counting comments, is seven pages of very small type. For another, since I haven’t yet read Chris’s book (it’s on my list), I’m not really in a position to address whether Fuller is making accurate representations of its journalistic methodology or rhetorical strategy.

What I would like to address are some of Fuller’s comments on the workings of science. As well, I’d like to look at his comments about Intelligent Design, but since this post is running long (and I have to drive across the bay to give a talk soon), I’ll take up the ID issues in my next post.

Early in his essay, Fuller criticizes Chris for being an uncritical booster of a set of scientific elites. He writes:

The question of intellectual integrity in both the journalistic and philosophical cases pertains to just how independent is your representation of science: Are you doing something other than writing glorified press releases for thinly veiled clients? It must be possible to be pro-science without simply capitulating to the consensus of significant scientific opinion.

He then takes issue with Mooney’s description of the journalistic method he tried to employ:

In my opinion, far too many journalists deem themselves qualified to make scientific pronouncements in controversial areas, and frequently in support of fringe positions. In contrast, I believe that journalists, when approaching scientific controversies, should use their judgment to evaluate the credibility of different sides and to discern where scientists think the weight of evidence lies, without presuming to critically evaluate the science on their own (p. vii).

Fuller’s worry is that Mooney thinks he can pronounce on scientific debates from an objective, independent standpoint — not by objectively evaluating the scientific facts in question, but by objectively evaluating the credibility of the scientists involved and deferring to the judgment of the most credible scientists. Fuller seems to be suggesting that there is no good way to determine which scientists in the debate are most credible — it all comes down to deciding who to trust.

I think this misses an important piece of how scientific disputes are actually adjudicated. In the end, what makes a side in a scientific debate credible is not a matter of institutional power or commanding personality. Rather, it comes down to methodology and evidence. The winning side is the side that can demonstrate support from the evidence, the side that can make useful predictions, the side that can build more productive research from the theoretical machinery under contention. Personalities may be useful in attracting scientists to develop a theory or to do experiments around it. Ample funding may help you support the work of those scientists. But in the long run, results are what matter, and the methodologies scientists use to obtain and evaluate results are just the kinds of things other scientists — and careful science journalists — look at to judge credibility.

A good bit of Fuller’s essay focuses on how policy decisions about which science projects to fund are decided. I think there is a reasonable debate to be had about this question (as some of my earlier posts indicate). But in this discussion, Fuller spends a fair amount of time contrasting the interests of a small set of scientific “elites” with the interests of the scientists in the trenches. For example, he considers how the Office of Technology Assessment (OTA) evaluated the proposal for a Superconducting Supercollider:

The OTA, staffed by social scientists, tended to frame analyses of the science policy environment in terms of a comprehensive statistical representation of the range of constituencies relevant to the policy issue: that is, including not only elite but also more ordinary scientists. On that basis, the OTA suggested that if the research interests of all practicing physicists are counted equally, then the Supercollider should not rank in the top tier of science funding priorities because relatively few physicists would actually benefit from it. I say ‘suggested’ because, whereas the NAS typically offers pointed advice as might be expected of a special interest group, the OTA typically laid out various courses of action with their anticipated consequences. My guess is that Mooney fails to mention this chapter in the OTA’s short but glorious history because it helped to trigger the ongoing Science Wars, which – at least in Steven Weinberg’s mind – was led by science’s ‘cultural adversaries’, some of whom staffed the OTA, whose findings contributed to the Congressional momentum to pull the plug on the overspending Supercollider. Although Mooney is right that both the NAS and OTA have often found themselves on the losing side in the war for influence in Washington science policy over the past quarter-century, their modus operandi are radically different. According to the NAS, science is run as an oligarchy of elite practitioners who dictate to the rest; according to the OTA, it is run as a democracy of everyone employed in scientific research.

First, it is certainly true that at any given moment in a particular scientific field, some research areas are “sexy” and others are not. Certain ways of distributing research funding may be skewed toward preferring the “sexy” projects — even if they are extremely expensive. Others may prioritize spreading the money around so the greatest number of scientists in the field can do productive research, even if some of the preferred (and pricey) projects of the “elite practitioners” don’t get funded. However, one should remember that there are hardly any scientific research projects that only benefit the “elites”. Wander into the lab (or out into the field) — it is not the elites who are doing the grubby work of assembling apparatus, collecting data, refining techniques, and the like. Rather, this scientific work is done by the rank-and-file scientists: graduate students, postdocs, technicians, and other non-members of the NAS.

In any line of research, it is the labors of these non-elite scientists that make it science. Having a good idea, or a keen piece of equipment, is not sufficient.

Now, maybe Fuller’s discussion of the interests of the “elite” scientists bears primarily on questions of funding priorities in science. I take it much of Fuller’s discussion here is arguing that an elected administration is well within its rights to fund the kind of science it wants to fund, and that if the public disagrees with an administration’s funding priorities they can vote that administration right out of office. In other words, Fuller sees democracy in decisions about what science should be funded as a good thing. Indeed, his view of the power of the (elite, oligarchic) NAS strengthens this impression. The forces of democracy, then, might be expected to get us moving in the best scientific directions. (He notes also that the “democratic” picture most people have of scientific peer review, or of graduate students being able to seriously disagree with senior scientists in professional discussions, is an idealization of how the scientific community really operates.)

But there are moments in this discussion where Fuller seems not to see the adjacent issue of what kind of hold the elected administration seems to think it should have over the scientists working in the areas it has elected to fund. Does “democracy” mean that the winners of the election get to decide what the truth is? Fuller writes:

… as it stands, it seems to me that the best course of action for those interested in improving the quality of science in policymaking is simply to try harder within the existing channels – in particular, to cultivate constituencies explicitly and not to rely on some mythical self-certifying sense of the moral or epistemic high ground. Sometimes I feel that the US scientific establishment and the Democratic Party are united in death’s embrace in their failure to grasp this elementary lesson in practical politics.

If he’s merely saying that scientists ought to be better advocates for the research projects they would like to get funded, that’s fine. But the suggestion that scientists abandon the “epistemic high ground” to achieve more political success sounds frighteningly close to what some government scientists report being asked to do. What makes it science is its epistemology. Knowledge claims in science are connected to empirical facts in particular ways, and those connections seem intimately tied to science’s ability to give us good predictions, explanations, and ways to manipulate the world we’re in.

If winning elections means you not only get to decide what science to fund, but also how the results will come out (while other outcomes, should they happen, will never be discussed), we’re not talking about science anymore.

A bit further on, Fuller writes:

Mooney does not take seriously that scientists whose research promotes the interests of the tobacco, chemical, pharmaceutical or biotech industries may be at least as technically competent and true to themselves as members of the NAS or left-leaning academic scientists in cognate fields. Where these two groups differ is over what they take to be the ends of science: What is knowledge for – and given those ends, how might they best be advanced?

The question of how best to use scientific knowledge is, of course, subject to debate. What is not so open to debate is what it means to be true to oneself as a scientist. Regardless of personal interests, or of the interests of one’s employer, the scientist is accountable to the empirical facts. Ignoring those facts, or misreporting them, is out of bounds.

Coming up: Fuller’s thoughts on democracy, science education, and whether Intelligent Design is getting a fair shake from the tribe of science.

Comments

  1. #1 Chris Mooney
    March 28, 2006

    Thanks for posting on this, and for the defense!

  2. #2 Bill Hooker
    March 29, 2006

    Fuller seems to be suggesting that there is no good way to determine which scientists in the debate are most credible — it all comes down to deciding who to trust.
    I think this misses an important piece of how scientific disputes are actually adjudicated. In the end, what makes a side in a scientific debate credible is not a matter of institutional power or commanding personality. Rather, it comes down to methodology and evidence.

    So, in other words, deciding who to trust means being able to evaluate the data for yourself, which — according to the pullquote above — Mooney suggests a journalist should not do. (Right here would be a good place to admit I haven’t read TWoS.)

    Don’t get me wrong, I’ve been reading Chris Mooney about as long as he’s had a blog, and I have a lot of respect for him. He’s a welcome exception to the rule that science writers don’t understand the science. I think, however, that in this case he’s wrong, both about what he should do and what he does do. It seems clear to me that he does understand the science, and does evaluate the facts for himself. I don’t, frankly, see how one can approach a scientific controversy by any other method than reference to the data. To me, “what makes it science is the epistemology” means RTFdata.