How do we perceive risk?: Paul Slovic’s landmark analysis

By Sara Gorman

In the 1960s, the rapid rise of nuclear technologies aroused unexpected panic in the public. Despite repeated assurances from the scientific community that these technologies were safe, the public feared both long-term dangers to the environment and immediate radioactive disasters. The disjunction between the scientific evidence about these risks and the public’s perception of them prompted scientists and social scientists to begin research on a crucial question: how do people formulate and respond to notions of risk?

Early research on risk perception assumed that people assess risk rationally, weighing information before making a decision, and therefore that providing people with more information will alter their perceptions of risk. Subsequent research has demonstrated, however, that more information alone will not assuage people’s irrational fears and sometimes outlandish ideas about what is truly risky. The psychological approach to risk perception, championed by psychologist Paul Slovic, examines the particular heuristics and biases people use to judge the amount of risk in their environment.

In a classic review article published in Science in 1987, Slovic summarized various social and cultural factors that lead to inconsistent evaluations of risk in the general public. Slovic emphasizes the essential way in which experts’ and laypeople’s views of risk differ. Experts judge risk in terms of quantitative assessments of morbidity and mortality. Yet most people’s perception of risk is far more complex, involving numerous psychological and cognitive processes. Slovic’s review demonstrates the complexity of the general public’s assessment of risk through its cogent appraisal of decades of research on risk perception theory.

Slovic’s article focuses on one particular type of risk perception research, the “psychometric paradigm.” This paradigm, formulated largely in response to the early work of Chauncey Starr, attempts to quantify perceived risk using psychophysical scaling and multivariate analysis. The psychometric approach thus creates a kind of taxonomy of hazards that can be used to predict people’s responses to new risks.
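To make the method concrete, here is a minimal sketch of the kind of multivariate analysis the psychometric paradigm relies on. The hazards, characteristics, and ratings below are hypothetical illustrations rather than Slovic’s data, and PCA stands in for the factor-analytic techniques used in the original studies; the point is simply that ratings of many hazards on many characteristics can be reduced to a small number of factors (such as “dread” and “unknown” risk) that locate each hazard in a factor space.

```python
# Sketch of a psychometric-paradigm-style analysis (hypothetical data).
import numpy as np
from sklearn.decomposition import PCA

hazards = ["nuclear power", "pesticides", "motor vehicles", "x-rays", "smoking"]
characteristics = [
    "uncontrollable", "catastrophic", "fatal",
    "unknown to those exposed", "new", "delayed harm",
]

# Hypothetical mean ratings on a 1-7 scale (rows = hazards, columns = characteristics).
ratings = np.array([
    [6.5, 6.8, 6.7, 5.5, 5.0, 6.0],  # nuclear power
    [5.0, 4.5, 5.0, 6.0, 5.5, 6.2],  # pesticides
    [3.0, 2.5, 5.5, 2.0, 1.5, 1.8],  # motor vehicles
    [3.5, 2.0, 3.0, 5.0, 3.0, 5.5],  # x-rays
    [4.0, 2.2, 6.0, 2.5, 1.2, 6.5],  # smoking
])

# Reduce the six characteristic ratings to two underlying factors, roughly
# analogous to the "dread risk" and "unknown risk" dimensions in the literature.
pca = PCA(n_components=2)
scores = pca.fit_transform(ratings)

# Where each hazard falls in the two-factor space.
for hazard, (f1, f2) in zip(hazards, scores):
    print(f"{hazard:>15}: factor 1 = {f1:+.2f}, factor 2 = {f2:+.2f}")

# How each characteristic loads on the two factors.
for name, loading in zip(characteristics, pca.components_.T):
    print(f"{name:>25}: {loading[0]:+.2f}, {loading[1]:+.2f}")
```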

Perhaps more important than quantifying people’s responses to various risks is identifying the qualitative characteristics that lead to specific valuations of risk. Slovic masterfully summarizes the key qualitative characteristics that result in judgments that a certain activity is risky or not. People tend to be intolerant of risks that they perceive as being uncontrollable, having catastrophic potential, having fatal consequences, or bearing an inequitable distribution of risks and benefits. Slovic notes that nuclear weapons and nuclear power score high on all of these characteristics. Also poorly tolerated by the public are risks that are unknown, new, and delayed in their manifestation of harm; in public opinion, these factors tend to characterize chemical technologies. The higher a hazard scores on these factors, the higher its perceived risk and the more people want to see the risk reduced, leading to calls for stricter regulation. Slovic ends his review with a nod toward sociological and anthropological studies of risk, noting that anxiety about risk may in some cases be a proxy for other social concerns. Many perceptions of risk are, of course, also socially and culturally informed.

Slovic’s analysis goes a long way toward explaining why people persist in extreme fears of nuclear energy while remaining relatively unafraid of driving automobiles, even though the latter has caused many more deaths than the former. The sheer frequency of automobile accidents enables the public to feel capable of assessing the risk; in other words, the risk seems familiar and knowable. Media coverage of automobile accidents is also limited, and it rarely dwells on future or unknown consequences of a crash. Nuclear energy, by contrast, represents an unknown risk, one that the public cannot readily analyze because of a relative lack of information. Nuclear accidents evoke widespread media coverage and warnings about possible future catastrophes. The result is that a lower-risk technology (nuclear energy) induces much more fear than a higher-risk activity (driving an automobile).

Importantly, Slovic correctly predicted 25 years ago that DNA experiments would someday become controversial and frighten the public. Today, although biologists insist that genetically modified crops pose no risk to human health, many members of the public fear that such crops will cause cancer and birth defects. While the effects of genetically modified crops on ecosystems may be a legitimate cause for concern, fears about their supposed ill effects on human health are scientifically baseless. These crops can grow under adverse conditions and resist infection and destruction by insects in areas of the world tormented by hunger, and therefore have the potential to dramatically improve nutritional status in countries plagued by starvation and malnutrition. Yet the unfamiliarity of the technology and the delay before its benefits appear make it a prime candidate for public fear and skepticism.

There is a subtle yet passionate plea beneath the surface of Slovic’s review. The article calls for assessments of risk to be more accepting of the role that emotion and cognition play in public conceptions of danger. Rather than simply disseminating more and more information about, for example, the safety of nuclear power, experts should be attentive and responsive to the public’s broader conception of risk. The goal of this research is a vital one: to aid policy-makers by improving their interactions with the public, better directing educational efforts, and predicting public responses to new technologies. In the end, Slovic argues that risk management is a two-way street: just as the public should take experts’ assessments of risk into account, so should experts respect the various factors, from the cultural to the emotional, that shape the public’s perception of risk.

Sara Gorman is a PhD candidate at Harvard University. She has written extensively about HIV, TB, and women’s and children’s health for a variety of public health organizations, including Save a Mother and Boston Center for Refugee Health and Human Rights. She most recently worked in the policy division at the HIV Law Project.

I’ve used this classic paper by Paul Slovic in my own teaching in public health. Its value lies indeed in accepting that the public’s perception of risk is influenced by social, cultural, economic, and political factors, and justifiably so. If a community learns that an economically and politically powerful corporation knowingly dumped chemical waste or other toxic materials in their neighborhood, residents are going to be angry and demand corrective action. Whether or not the community can prove that their health has been adversely affected, other aspects of their wellness (their sense of control, the value of their property, their trust in social institutions) have been harmed. Their expertise in living with the situation is just as relevant as, and in my opinion more relevant than, the expertise of a toxicologist who can offer a narrow interpretation of the current state of knowledge about the potential effects on living organisms.

Regrettably, some of us have witnessed how defenders of dangerous products and processes have used Slovic’s article and its progeny to dismiss the risk perceptions of communities and groups of workers. They’ll hire “experts” to disparage the affected individuals’ “over-reaction” and “unscientific” assessment of potential harm, and will throw around p-values and Bradford Hill criteria. The “experts” will try to convince decision-makers and opinion leaders that the lay people’s perception of risk is wrong.

We need to remember that these “experts” may be wearing their own set of blinders, intentionally or not. As a colleague wrote to me yesterday:

“Radiation hazards overblown? That is what the nuclear industry has been spouting for years, with a pretty impressive "blinded by science" approach. Tell it to the folks near Chernobyl or in Japan after the recent earthquake/tsunami/meltdown. How accurate were those "scientific risk assessments"? Much about risk is unknown, and sometimes manipulated, so to the extent lay people don't trust "scientific" discussions of risk, maybe that is because they can see through white lab coats.”

A reader of The Pump Handle shared this with me:

"Often times risk proclamations are not so well intentioned. As William Rusckelshaus---head of the EPA during the Nixon Administration----noted:

"When the action so forced has dire economic or social consequences, the person who must make the decision may be sorely tempted to ask for a “reinterpretation” of the data. We should remember that risk assessment data can be like the tortured spy: if you torture it long enough, it will tell you whatever you want to know. So it is good public policy to so structure an agency that such temptation is avoided."

William D. Ruckelshaus, "Risk in a Free Society," Risk Analysis, Vol. 4, No. 3, pp. 157-158 (1984).

Thanks for your comments, Celeste! I completely agree with you about the set of "blinders" among experts as well. This is an important part of the equation, and something I hadn't thought about much before reading Slovic's piece.

Slovic's analysis of risk perception is very informative, especially for measuring risks associated with new technologies. As a student of risk communication, I find it highly applicable to my research thesis.