Many of the commenters on my earlier post about the so-called wisdom of crowds, "Science is not a democracy," have expressed distaste for the phrase "scientific consensus." I don't really share that distaste, and here's why.
To me, it's like being disturbed by the phrase "electoral college." You may detest the way our nation's electoral system works; you may not trust the outcomes it produces; but there is an established system, and the electoral college is part of it. You can object to the existence of the electoral college and criticize its characteristics, and you can try to change the system. But to stop using the phrase "electoral college" isn't really all that helpful.
Perhaps some people see "scientific consensus" more like the phrase "jury of one's peers": one could argue there is really no such thing. No real-world jury will ever be composed of people perfectly representative of one's peer group - whatever that is. Indeed, we may prefer today not to have juries that are homogeneous, because our society is not homogeneous - we don't really want people like Bernie Madoff tried by juries of rich white male investment bankers, do we?
But though one can quibble with the wording "jury of one's peers", one still knows what concept it historically represents. Ditto for "scientific consensus." You might argue that there is no stable "consensus," because science is never "done." But we recognize that the "scientific consensus" is not unanimous, that it changes over time, etc. That's how science is supposed to work. Just because in fifty years we may see inheritance or atoms quite differently does not mean the current scientific consensus is useless: it's the framework we've got at the moment.
While I can see why people may prefer not to use "scientific consensus," I actually like the phrase, because it captures something important about science: to come to a consensus, you need people.
One might instead say "preponderance of the evidence" or "net total of the data" or "bulk of the experiments," but these phrases suggest that experiments, data and evidence are the only significant parts of science. That's nonsense: people generate experimental frameworks, interpret data, and come up with creative ways to reconcile it with other data (or not). And yes, people are biased. If you've survived grad school, you must have known researchers who remain inexplicably wedded to their elegant but outdated pet hypotheses, or professors who conflate defeat of an alternative hypothesis with triumph over a particularly detested academic rival. The scientific consensus includes some element of human nature, and our cognitive biases play a role in how we read the evidence. That's a given. That's why it's important that we have peer review, blah blah blah. Furthermore, it's people who write and publish papers and textbooks, and people who explain the current state of science to nonscientist policymakers, journalists and juries. So the scientific consensus always has a powerful subjective human element. To claim otherwise is either naive or disingenuous.
People who attack science by saying "scientists are people!" or "scientists are biased!" or "scientists aren't perfectly objective!" are like people who tell me I can't trust what I see on Fox News. Duh. Please tell me something I don't already know. The most annoying thing about such one-note criticisms is that they don't offer anything constructive - they just bewail that science as a whole is unreliable and biased and imperfect. The system's not perfect, but no system created by human beings is, is it? If we throw science out, what exactly are we supposed to replace it with? Religious revelation? (Our government is also imperfect and unreliable and biased. Should we switch to fascism or communism? You pick!)
Although scientists are understandably sensitive about it, it's possible to talk about the negative aspects of science without being "anti-science." Practicing scientists themselves do it all the time. Over the years, I've heard researchers question colleagues for not recognizing good data when it's in front of them, for publishing experiments that might not be airtight, for putting multiple postdocs on a single project and forcing them to race to publication, for neglecting to disclose conflicts of interest, or for overstating preliminary data in a grant proposal. The very fact that we question these behaviors shows that they are not normative. They're not consistent with what we'd like the profession to be. We treat the claims of such people with skepticism. And all of this self-directed criticism is part of the very process that generates scientific consensus. It seems very odd indeed that so many people think the accusation "scientists are biased!" is some sort of silver bullet that will kill the scientific werewolf.
Since my previous post was titled "Science is not a democracy," I thought it might be fitting to end with a few quotes from a cautionary essay that Harvard Professor Sheila Jasanoff, an authority on science and technology studies, wrote for SEED almost a year ago, entitled "The Essential Parallel Between Science and Democracy." I recommend going to read the whole thing - it's only two pages - but here's a taste:
[T]he restorative steps Obama has taken vis-à-vis science are praiseworthy not so much because they respect science as because they respect the grand institutions of democracy. This is no accident, because the very virtues that make democracy work are also those that make science work: a commitment to reason and transparency, an openness to critical scrutiny, a skepticism toward claims that too neatly support reigning values, a willingness to listen to countervailing opinions, a readiness to admit uncertainty and ignorance, and a respect for evidence gathered according to the sanctioned best practices of the moment.
A common mistake is to claim these virtues for science alone. Writing in the New York Times on January 26, 2009, six days after the inauguration, veteran science writer Dennis Overbye said about science: "That endeavor, which has transformed the world in the last few centuries, does indeed teach values. Those values, among others, are honesty, doubt, respect for evidence, openness, accountability and tolerance and indeed hunger for opposing points of view." Elevating science, Overbye argued, elevates democracy. This gets cause and effect backward. The values Overbye rightly cherishes are not taught by science, as if the scientific enterprise has some unique claim on them. Rather, the sound conduct of science and the sound conduct of democracy both depend on the same shared values. . .
[However] modern science is a clutch of complex institutions and practices, carrying tendencies that do not always converge with the aims of democracy. Accordingly, simply throwing more money at science, or even listening to the best-qualified scientists for policy advice, may not ensure that research and development are conducted for the public good. Care must be taken to avoid the tunnel vision that frequently accompanies expertise. Studies of disasters - Challenger, 9/11, the financial meltdown - all confirm a sadly recurring story. Complacent or arrogant technical experts refused to heed early warning signs that could have prevented the worst consequences from materializing. It would be a pity if the present administration lost sight of the need for powerful countervailing voices to question conventional technocratic wisdom. . .
The Second Enlightenment must be the enlightenment of modesty. All through the 20th century, grand attempts to remake nations and societies failed. Today, as this nation heeds its president's call to "begin again the work of remaking America," it would do well to reflect on those modest virtues that underlie the long-term successes of both science and democracy. These are not the programmatic ambitions of revolution or of wholesale system redesign, but rather the skeptical, questioning virtues of an experimental turn of mind: the acceptance that truth is provisional, that questioning of experts should be encouraged, that steps forward may need corrective steps back, and that understanding history is the surest foundation for progress.
Whether or not you agree with Jasanoff's critique, it's true that in the real world, science really is "a clutch of complex institutions and practices." Science is like the fabled elephant among the blind men: one part of it feels like democracy, another part like human ambition, another part like consensus, another part like truth. None of those perspectives is false, exactly, but they're all incomplete, one-dimensional representations of what science is. And that's why (speaking for myself) I find most anti-science rhetoric useless. It sounds like the hollering of someone who, fumbling in the dark, got ahold of the elephant's tail (or its email archives?) and thinks he's discovered a great big secret. But some of us have been around the elephant a few times, so to speak. Tell us something we don't know.
I like what you're saying and generally agree, but part of me would still prefer to see a distinction between a consensus of opinions (where all the things you say apply) and a consensus in the sense of what is well-known or "established" (i.e. the common ground that is accepted).
Ideally the common ground within a "consensus of opinions" will be what is "well accepted", and it's the latter that is perhaps more useful to those who haven't the time or training to explore where the differences in opinion are coming from?
While people's views on the common ground will differ too, the difference won't (shouldn't!) be as great because opinions include speculation, extrapolations and all the "political" stuff.
For example, if asked to give the consensus of what is well-known, the professor with the off-beat ideas would have to admit those ideas are off to one side. If asked for their opinion, then of course it'll include their particular slant (and you'll have to nut out why they hold the position that they do).
I can't help but think that part of the problem is which question is asked. I've written a short post on this earlier in the context of a TV interview, where it's commonly asked what an expert's opinion is, when what is usually really (or should be) meant by the question is what is well-known or well-established. The setting is different to your context of those who argue against science on the internet, but there still is a point to consider, perhaps?
I agree that arguing against science by saying "scientists are people" is daft. You could just as easily argue against anything that way, including the things that the people saying that hold dear... I'd like to think they won't object to being called people :-)
Can I just say that I've always enjoyed Bioephemera, even when it was just links to scientific art or beautiful science, but since you were named ScienceBlogs featured blog of the week in the RSS feed, you've been producing thought pieces that are absolutely extraordinary?
I hope that you keep it up even after Sb moves on to some other blog to feature. I'm really enjoying this glimpse into your mind beyond the delightful images. Brava.
Huh? I'm the featured blog of the week? No one tells me these things. They picked a ridiculous week to do it then didn't they??! ;)
Jessica Palmer: Whether or not you agree with Jasanoff's critique, it's true that in the real world, science really is "a clutch of complex institutions and practices."
Yes... and no.
The term "science" is used in at least three ways; referring to a collection of human understandings, to an anthropological practice, and to some abstracted philosophical discipline (or, set of overlapping disciplines, working from semi-related nonidentical primary assumptions).
For example, "table salt is composed of atoms of sodium and chlorine in a 1:1 ratio" is part of the body of understanding, while writing grant proposals is widely part of the anthropological practice that is unrelated to the core philosophical discipline (if arguably part of the related philosophical discipline of engineering).
The "clutch" applies to the anthropological sense; not so much the others. Using the same term for all three a terminological sloppiness that often leads to equivocation fallacies; EG, presuming that the anthropological practice has a "perfect" implementation of the philosophical discipline.
On a related note, I'd also argue that Jasanoff is being similarly sloppy-to-wrong in treating "a commitment to reason and transparency, an openness to critical scrutiny, a skepticism toward claims that too neatly support reigning values, a willingness to listen to countervailing opinions, a readiness to admit uncertainty and ignorance, and a respect for evidence gathered according to the sanctioned best practices of the moment" as an intrinsic part of "democracy". These are traits that have grown up alongside the western world's anthropological implementation (having an overall general benefit to that implementation), but they are not philosophically inherent requirements of "democracy". This may relate to why many of the western attempts to spread "democracy" often result in cargo-cult levels of political success: some similarity in form, but not substance.
abb3w: while I agree with you that "science" has multiple meanings, I fail to see the relevance to this post. I think it's abundantly clear from context which sense of the word "science" is in play, since the post is about the institutions and social practices of science "in the real world," and the characteristics of the people in the scientific community. It would be illogical and strained to read "science" any other way in such a context.
w00tlolz!
For me, "consensus" is usually the wrong word since it implies some agreement and belief not necessarily based on facts. For me, a consensus in science would be when there is a good story to tell but there is not yet enough evidence to prove or disprove it but it is generally accepted as the best explanation offered so far; there are such things in science (for example, the Higgs Boson). This is very different from the agreement between scientists on issues which have evidence for it, such as Newton's laws of gravitation. All sensible scientists accept Newton's laws as a well-tested fact, but to call that a "consensus" is to belittle its status as a fact.
"For me, a consensus in science would be when there is a good story to tell but there is not yet enough evidence to prove or disprove it but it is generally accepted as the best explanation offered so far; there are such things in science (for example, the Higgs Boson)."
I agree with that. But I also think it's pretty rare that policymakers bring scientists in to testify on topics as uncontroversial as gravity. Even if the relevant scientific theory is well-established, the application of that theory may be controversial - what variables must be taken into account, how to correct for errors, how well a model based on one species can be extrapolated to another, etc. In such a situation you usually turn to the people doing research in the field and see what the current consensus is among practitioners. So you might have consensus on a hypothesis, or a consensus on methodology - or even a consensus on ethics, as came out of the Asilomar conference. Those are all variations of scientific consensus.
So while I don't think it's inaccurate to say "the scientific consensus is that evolution happened", I think that's a relatively rare use of the term consensus. I'd expect to see consensus applied to more granular aspects of a theory, like "the consensus is that punctuated equilibrium is the most likely explanation for this aspect of the fossil record."
Jessica Palmer: while I agree with you that "science" has multiple meanings, I fail to see the relevance to this post.
Because keeping the distinction clear helps prevent criticisms of the anthropological practice from being extended politically into fundamental attacks on the philosophical discipline via out-of-context quotes (by strains like flat earthers, creationists, anthropogenic warming deniers, homeopathy advocates, and others). Additionally, it may explain some of the discomfort. Those who attack science are not merely attacking the anthropological practice, but the philosophical discipline; and usually can't keep the two straight. Similarly, many people who support science (especially interested laymen) don't keep the two straight. Thus, if you discuss problems in "science" without making it clear they are problems not of philosophical methodology but "merely" of current anthropological practice, listeners are more apt to respond with the "distaste" you describe, due to (perhaps subconsciously) interpreting the object of the discussion ambiguously.
Or maybe it's just a bee that's gotten stuck in my bonnet after so many discussions seem to have run aground on this same point.
Jessica Palmer: Even if the relevant scientific theory is well-established, the application of that theory may be controversial - what variables must be taken into account, how to correct for errors, how well a model based on one species can be extrapolated to another, etc.
Not to mention the fundamental problem going from description to application: disagreement as to the nature of the bridge across the is-ought divide... which in turn often results in semi-covert attempts to magnify some perceived uncertainties when considering application.
abb3w, we largely agree. The bulk of your comment is the point of my whole post.
You seem to be suggesting that after I write a post about the social practices of science (which I did) and about how criticisms of those social practices are not valid criticisms of the hypotheses and theories that result (which I also did), I should then additionally specify that the "science" I'm talking about is the social practices of science, and that criticisms thereof are not valid criticisms of the hypotheses and theories that result. Which would be redundant with the entire post.
I'm sorry you don't agree that I'm making my point clearly, but since the whole post is about the very concerns you state, I find it very odd that you're criticizing me for not discussing them.
abb3w: High-School Debate-Team WORLD CHAMPEEN!!!
Now, now, CPP. Few enough of my commenters agree with me - I want to be nice to abb3w!
Jessica Palmer: Which would be redundant with the entire post.
"Redundant" doesn't seem like the right word. It would be explicitly emphasizing the distinct natures of the categories, as opposed to discussing the aspects of one of them. It might be that such emphasis would be a distraction from your intended purpose in making the post.
If you are saying that the social practice of science, the philosophical framework of science, and the knowledge generated from the scientific process are "distinct categories," then I disagree with you. I think that's a sterile and unproductive way of looking at it, since they are very much overlapping, interdependent concepts. The social mores, politics, and economics of institutional science are based on science's philosophical framework, and the respect given the knowledge generated by science is affected by public perceptions of the way science is practiced. The post is emphasizing the distinction between the social practice of science by individual human beings, and the knowledge generated by the scientific process - as such, it is consistent with what you are saying. But the reason I didn't make a big deal about the concepts being completely distinct is simple: I don't think it is useful to talk about them that way.
Because of your topics, it seems that you and some of your readers might be interested in the brand new blog...
"Scientific Words of the Week" at:
http://ScientificWords.wordpress.com