Many of the commenters on my earlier post about the so-called wisdom of crowds, “Science is not a democracy,” have expressed distaste for the phrase “scientific consensus.” I don’t really share that distaste, and here’s why.
To me, it’s like being disturbed by the phrase “electoral college.” You may detest the way our nation’s electoral system works; you may not trust the outcomes it produces; but there is an established system, and the electoral college is part of it. You can object to the existence of the electoral college and criticize its characteristics, and you can try to change the system. But to stop using the phrase “electoral college” isn’t really all that helpful.
Perhaps some people see “scientific consensus” more like the phrase “jury of one’s peers”: one could argue there is really no such thing. No real-world jury will ever be composed of people perfectly representative of one’s peer group – whatever that is. Indeed, we may prefer today not to have juries that are homogeneous, because our society is not homogeneous – we don’t really want people like Bernie Madoff tried by juries of rich white male investment bankers, do we?
But though one can quibble with the wording “jury of one’s peers”, one still knows what concept it historically represents. Ditto for “scientific consensus.” You might argue that there is no stable “consensus,” because science is never “done.” But we recognize that the “scientific consensus” is not unanimous, that it changes over time, etc. That’s how science is supposed to work. That in fifty years we may see inheritance or atoms quite differently does not mean the current scientific consensus is useless: it’s the framework we’ve got at the moment.
While I can see why people may prefer not to use “scientific consensus,” I actually like the phrase, because it captures something important about science: to come to a consensus, you need people.
One might instead say “preponderance of the evidence” or “net total of the data” or “bulk of the experiments,” but these phrases suggest that experiments, data and evidence are the only significant parts of science. That’s nonsense: people generate experimental frameworks, interpret data, and come up with creative ways to reconcile it with other data (or not). And yes, people are biased. If you’ve survived grad school, you must have known researchers who remain inexplicably wedded to their elegant but outdated pet hypotheses, or professors who conflate defeat of an alternative hypothesis with triumph over a particularly detested academic rival. The scientific consensus includes some element of human nature, and our cognitive biases play a role in how we read the evidence. That’s a given. That’s why it’s important that we have peer review, blah blah blah. Furthermore, it’s people who write and publish papers and textbooks, and people who explain the current state of science to nonscientist policymakers, journalists and juries. So the scientific consensus always has a powerful subjective human element. To claim otherwise is either naive or disingenuous.
People who attack science by saying “scientists are people!” or “scientists are biased!” or “scientists aren’t perfectly objective!” are like people who tell me I can’t trust what I see on Fox News. Duh. Please tell me something I don’t already know. The most annoying thing about such one-note criticisms is that they don’t offer anything constructive – they just bewail that science as a whole is unreliable and biased and imperfect. The system’s not perfect, but no system created by human beings is, is it? If we throw science out, what exactly are we supposed to replace it with? Religious revelation? (Our government is also imperfect and unreliable and biased. Should we switch to fascism or communism? You pick!)
Although scientists are understandably sensitive about it, it’s possible to talk about the negative aspects of science without being “anti-science.” Practicing scientists themselves do it all the time. Over the years, I’ve heard researchers question colleagues for not recognizing good data when it’s in front of them, for publishing experiments that might not be airtight, for putting multiple postdocs on a single project and forcing them to race to publication, for neglecting to disclose conflicts of interest, or for overstating preliminary data in a grant proposal. The very fact that we question these behaviors shows that they are not normative. They’re not consistent with what we’d like the profession to be. We treat the claims of such people with skepticism. And all of this self-directed criticism is part of the very process that generates scientific consensus. It seems very odd indeed that so many people think the accusation “scientists are biased!” is some sort of silver bullet that will kill the scientific werewolf. Duh. Please tell me something I don’t already know.
Since my previous post was titled “Science is not a democracy,” I thought it might be fitting to end with a few quotes from a cautionary essay that Harvard Professor Sheila Jasanoff, an authority on science and technology studies, wrote for SEED almost a year ago, entitled “The Essential Parallel Between Science and Democracy.” I recommend reading the whole thing – it’s only two pages – but here’s a taste:
[T]he restorative steps Obama has taken vis-à-vis science are praiseworthy not so much because they respect science as because they respect the grand institutions of democracy. This is no accident, because the very virtues that make democracy work are also those that make science work: a commitment to reason and transparency, an openness to critical scrutiny, a skepticism toward claims that too neatly support reigning values, a willingness to listen to countervailing opinions, a readiness to admit uncertainty and ignorance, and a respect for evidence gathered according to the sanctioned best practices of the moment.
A common mistake is to claim these virtues for science alone. Writing in the New York Times on January 26, 2009, six days after the inauguration, veteran science writer Dennis Overbye said about science: “That endeavor, which has transformed the world in the last few centuries, does indeed teach values. Those values, among others, are honesty, doubt, respect for evidence, openness, accountability and tolerance and indeed hunger for opposing points of view.” Elevating science, Overbye argued, elevates democracy. This gets cause and effect backward. The values Overbye rightly cherishes are not taught by science, as if the scientific enterprise has some unique claim on them. Rather, the sound conduct of science and the sound conduct of democracy both depend on the same shared values. . .
[However] modern science is a clutch of complex institutions and practices, carrying tendencies that do not always converge with the aims of democracy. Accordingly, simply throwing more money at science, or even listening to the best-qualified scientists for policy advice, may not ensure that research and development are conducted for the public good. Care must be taken to avoid the tunnel vision that frequently accompanies expertise. Studies of disasters – Challenger, 9/11, the financial meltdown – all confirm a sadly recurring story. Complacent or arrogant technical experts refused to heed early warning signs that could have prevented the worst consequences from materializing. It would be a pity if the present administration lost sight of the need for powerful countervailing voices to question conventional technocratic wisdom. . .
The Second Enlightenment must be the enlightenment of modesty. All through the 20th century, grand attempts to remake nations and societies failed. Today, as this nation heeds its president’s call to “begin again the work of remaking America,” it would do well to reflect on those modest virtues that underlie the long-term successes of both science and democracy. These are not the programmatic ambitions of revolution or of wholesale system redesign, but rather the skeptical, questioning virtues of an experimental turn of mind: the acceptance that truth is provisional, that questioning of experts should be encouraged, that steps forward may need corrective steps back, and that understanding history is the surest foundation for progress.
Whether or not you agree with Jasanoff’s critique, it’s true that in the real world, science really is “a clutch of complex institutions and practices.” Science is like the fabled elephant among the blind men: one part of it feels like democracy, another part like human ambition, another part like consensus, another part like truth. None of those perspectives is false, exactly, but they’re all incomplete, one-dimensional representations of what science is. And that’s why (speaking for myself) I find most anti-science rhetoric useless. It sounds like the hollering of someone who, fumbling in the dark, got ahold of the elephant’s tail (or its email archives?) and thinks he’s discovered a great big secret. But some of us have been around the elephant a few times, so to speak. Tell us something we don’t know.