On the Pew Science Survey, Beware the Fall from Grace Narrative

Somewhat predictably, several pundits and commentators have framed Thursday's Pew survey as supporting an all too common yet misleading "fall from grace" narrative about the place of science in society.

These interpretations proclaim a "growing disconnect," a "dangerous divide," a "widening gulf," and other metaphors representative of what sociologists might label a moral panic. This traditional fall from grace narrative about science argues for the need to return to a (fictional) point in the past when science was better understood and appreciated by the public. In the U.S., that point is usually located in the years just after the launch of Sputnik and leading up to the Apollo moon landing.

Yet you would be hard-pressed to find this type of rhetoric in the peer-reviewed literature examining public opinion about science, the role of scientific expertise in policymaking, or the relationship between science and other social institutions. (For a recent synthesis of public opinion research on science and how it has been interpreted--or misinterpreted--in popular discussion, see this article.)

Indeed, if there is a "dangerous divide" in society, it is between the conclusions of experts working in these areas and the extraordinary claims often made by some journalists and pundits.

To get a sense of how experts view the Pew survey, consider this interpretation at USA Today from Michigan State political scientist Jon Miller, widely recognized as the academic dean of public opinion research on science: "The major value of this survey is that it rebuts the frequent allegations that Americans are 'turning against' science."

Or consider how the Pew researchers themselves described the central findings of the survey:

Americans like science. Overwhelming majorities say that science has had a positive effect on society and that science has made life easier for most people. Most also say that government investments in science, as well as engineering and technology, pay off in the long run. And scientists are very highly rated compared with members of other professions: Only members of the military and teachers are more likely to be viewed as contributing a lot to society's well-being.

I shared a similar observation in a post yesterday, detailing the Pew results that indicate an almost unrivaled degree of cultural respect, admiration, authority, and deference accorded to science and scientists.

So why do so many popular claims alleging a crisis in public support for science persist? And why are they potentially distracting, if not harmful, to public engagement efforts? In a forthcoming peer-reviewed article in the American Journal of Botany, I team up with my University of Wisconsin colleague Dietram Scheufele to describe the origins of this all too common fall from grace narrative about science in American society.

Below I've pasted the relevant excerpt from the forthcoming article:


Myths about Public Communication

Historically, a prevailing assumption has been that ignorance is at the root of social conflict over science. As a solution, after formal education ends, science media should be used to educate the public about the technical details of the matter in dispute. Once citizens are brought up to speed on the science, they will be more likely to judge scientific issues as scientists do and controversy will go away. In this decades-old "deficit" model, communication is defined as a process of transmission. The facts are assumed to speak for themselves and to be interpreted by all citizens in similar ways. If the public does not accept or recognize these facts, then the failure in transmission is blamed on journalists, "irrational" public beliefs, or both (Bauer, 2008; Bauer, Allum, & Miller, 2007; Nisbet & Goidel, 2007; Scheufele, 2007).

The heavily referenced symbols in this traditional paradigm are popular science outlets such as Scientific American or PBS' Nova, along with famous popularizers such as Richard Feynman and Carl Sagan. When the relationship between science and society breaks down, science illiteracy is typically blamed, the absence of quality science coverage is bemoaned, and a call goes out for "the next Carl Sagan."

Deficit model thinking also includes a fall from grace narrative, with various mythmakers hyperbolizing that in contrast to today's culture of "anti-science," there was a point in the past when the public understood--and as a direct consequence--deeply respected science. In the United States, this so-called golden era is often described as the dozen or so years of the "Space Race," the period that stretched from the 1957 Russian launch of the Sputnik satellite to the U.S. lunar landing in 1969.

As we explain in this essay, continued adherence to the deficit model likely only fans the flames of science conflicts. Condescending claims of "public ignorance" too often serve to further alienate key audiences, especially in the case of evolutionary science, when these charges are mixed with atheist critiques of religion (Nisbet, 2009a). Myths such as Sputnik oversimplify the past, making it easier to falsely define contemporary debates in terms of "anti-science," "illiteracy," or "denial" (Goldston, 2009). Moreover, by emphasizing what is wrong with the public--or by pinning their hopes on a major focusing event such as Sputnik--many scientists ignore the possibility that their communication efforts might be part of the problem (Irwin & Wynne, 1996).

Perhaps worse, the assumptions of the deficit model cut against the conclusions of several decades of research in the area. For example, a recent meta-analysis shows that science literacy only accounts for a small fraction of the variance in how lay publics form opinions about controversial areas of science (Allum et al., 2008). Far stronger influences on opinion derive from value dispositions such as ideology, partisanship, and religious identity (Nisbet, 2005; Nisbet & Goidel, 2007; Ho, Brossard, & Scheufele, 2008; Scheufele et al., 2009). In addition, no matter how accurately the science is communicated and understood, policy decisions cannot be separated from values, political context, and necessary trade-offs between costs, benefits, and risks (Jasanoff, 2005; Guston et al., 2009; Pielke, 2007).

Given these realities, to focus on science literacy as both the cause and the solution to failures in public communication remains a major distraction for science organizations. If scientists had a better understanding of the complex factors that shape public preferences and policy decisions, they would be less likely to define every debate in terms of "crisis" or "politicization," interpretations that often only further fuel polarization, alienation, and/or political gridlock (Goldston, 2008; Nisbet, 2009b). Just as importantly, arguing that a policy debate is simply a matter of "sound science" reduces scientific knowledge to just another resource that interest groups can draw upon in political battles, threatening the perceived integrity of science. As we will discuss relative to climate change, under these conditions, an inevitable part of the framing of an issue will involve a contest over uncertainty, with each side potentially hyping or distorting the objective state of expert agreement. Each time an exaggerated scientific claim is proven false or inaccurate, it risks further alienating publics already distrustful of the science and scientists (see Pielke, 2007 for more on this perceptual trap).

Finally, there is little reason to expect that traditional popular science approaches, if applied to informing a wider public about science, will ever be effective. These initiatives instead tend to reach a small audience of already informed science enthusiasts. The reason is that individuals are naturally "cognitive misers." Science communication efforts grapple with a wider public that is for the most part unable or uninterested in developing an in-depth understanding of scientific breakthroughs and that instead relies on cognitive shortcuts and heuristic decision making to reach opinions about policy-related matters (Popkin, 1991; Scheufele, 2006). The nature of the media system further exacerbates this human tendency. First, the increase in content choices available to a general audience, paired with decreasing public affairs news consumption across all age cohorts, makes widespread messaging difficult. Second, even leading national media outlets are investing less and less money in staffing their newsrooms with science writers, meaning less coverage devoted to important scientific topics. At the local level, the historic distress of the news industry has meant that major cities and regions of the country no longer have a reliable source of news about science and the environment that is tailored to the specific needs of their communities (Brumfiel, 2009).

Never well understood, but always deeply respected. There is perhaps no better example of the persistence of the deficit model than the widespread belief that the period between the launch of Sputnik in 1957 and the U.S. moon landing in 1969 was a golden age of science literacy, with an informed public pushing for large scale government investment in science.

In contrast to this often repeated myth, public opinion surveys taken just after Sputnik indicate a public barely familiar with the most basic science concepts. In one measure, just 12% of the public understood the scientific method. On basic questions tapping knowledge of polio, fluoridation, radioactivity, and space satellites, only 1 in 6 could answer all four questions correctly (Withey, 1959). In other survey results, only 38% knew that the Moon was smaller than the Earth, and only 4% could correctly indicate the distance in miles between the Moon and the Earth (Michael, 1960). Apart from knowledge, attention to science news occurred predominantly among the 10% of American adults who held a four-year college degree (Swinehart & McLeod, 1960).

Just after the launch of Sputnik, many Americans reported paying closer attention to the desegregation conflict in Arkansas and to the World Series than to the satellite launch and the call to arms for a Space Race (Michael, 1960). A majority of the public, in fact, did not view Sputnik as a scientific event, but rather as fitting within a larger frame of reference relative to the Cold War, describing the launch in terms of national security, international competitiveness, and falling behind the Soviet Union (Michael, 1960; Swinehart & McLeod, 1960).

By deficit model standards, these survey results reveal that the mythologized Sputnik-era America was in reality a scientifically illiterate America. The paradox, then, is that despite low levels of science literacy, the post-Sputnik public held science in almost universally high regard. For example, roughly 90% agreed that science was making life healthier, easier, and more comfortable, and an equal number agreed that science was contributing to societal progress (Withey, 1959). The reason for this divergence between knowledge and admiration is that science literacy, as we have reviewed, has very little to do with public perceptions. Instead, driving public opinion during the Space Race and Cold War were strong frames of social progress and international competitiveness, historically consistent messages about science that we will return to later.

Today, despite a doubling in the proportion of Americans with a college education and more science-related information available by way of the Web than at any time in media history, scores on survey questions measuring factual science knowledge have remained relatively stable for more than a decade, with Americans averaging six correct answers out of twelve true-or-false quiz-like items (NSF 2008). Yet even with these relatively low levels of knowledge, the best available survey data suggest that science commands as much respect as it did during the decade of the Space Race.

In 2004, the National Science Foundation brought together a team of social scientists to re-examine the organization's biennial surveys on public attitudes about science and technology. The NSF asked the team to redesign the survey to include a new emphasis on what it termed the "cultural authority of science," particularly how the public views the role of scientific expertise in policymaking and societal decisions.

The commissioned survey findings, gathered in 2006, argue against the claims of the deficit model that scientific illiteracy threatens the cultural status of science. Consider that more than 85% agree that "even if it brings no immediate benefits, scientific research that advances the frontiers of knowledge is necessary and should be supported by the federal government." On the specific issues of climate change, stem cell research, and food biotechnology, Americans believe scientists hold greater expertise, are less self-interested, and should have greater say in decisions than industry leaders, elected officials, and/or religious leaders. Moreover, during the past twenty years, as public trust in Congress, the presidency, industry, religious institutions, and the media has plummeted, public faith in science has remained virtually unchanged. In fact, among American institutions, only the military enjoys more trust (NSF 2008).

As we will discuss in subsequent sections, a "miserly" public relies heavily on its trust in science and scientists as a dominant heuristic in reaching judgments about policy matters. Only on a few issues, where societal leaders effectively re-define an area of science as in conflict with something else the public deeply cares about, do perceptual gaps based on values and identity appear among the general public (Brossard & Nisbet, 2007; Ho, Brossard & Scheufele, 2008). Yet even under these conditions, as is the case with climate change, scientists still appear to hold the upper hand in terms of trust (Scheufele et al., 2007).

The implication is that relative to authority, deference, and respect, scientists have earned a rich bounty of perceptual capital. When controversies occur, the challenge is to understand how to use this capital to sponsor dialogue, invite differing perspectives, facilitate public participation, reach consensus when appropriate, learn from disagreement, and avoid common communication mistakes that undermine these goals.

This is a real, not a perceived, problem. I think the net was cast too wide in the surveys: when highly aggregated, "on the whole," people will say that they are "pro-science." But when it comes to specific issues, they blush and prevaricate. E.g.:

"Only on a few issues, where societal leaders effectively re-define an area of science as in conflict with something else the public deeply cares about, do perceptual gaps based on values and identity appear among the general public"

-- but unfortunately these are issues of humongous import. And: why can we hardly find American graduate students in the sciences? We find hundreds of foreign students applying but hardly any Americans, even in popular subjects.

p.s. I'm with Anna: it's Dunning-Kruger in overdrive.

What I see in my small-town community is Dunning-Kruger in action - people akin to the public in the Pew survey, who think they respect science, but who possess an unrealistic faith in their own judgment and who are poor at assessing the credibility of others. What they most need to be taught, IMO, is epistemological literacy - they need to know just how easy it is to be bamboozled, when you don't know much and aren't ready to take the time to learn, when there are deep-pocketed folks with a vested interest in bamboozling you; and they need to learn that there are times to practice science, and there are times to absorb findings by scientists, and that most of us, most of the time, would benefit - due to our limited time/interest and also the "one way hash" problem - if we took the latter approach. There's nothing the matter with resorting to heuristics, if they're the right ones.

Someone recently suggested that educators' nurturing of their students' self-esteem has led to this problem - if you're always "ok", then you'll feel that your views are also "ok", regardless of how misaligned with the evidence they are.
(alas, I can't remember where I saw this)

Which means that another thing that needs to be communicated is that science isn't English class - rather, it's an evidence-based meritocracy of ideas, and some ideas - and heuristics - _are_ more equal than others.

Apologies to Ed Yong, but this isn't rocket science.