In science, uncertainty is normal

As I put it at a blogging panel last fall, "in science, it is normative to be not sure." It wasn't my most eloquent moment, but at least AAAS' president-elect Alice Huang agrees with me that one of the biggest challenges to public science literacy is understanding the contingent nature of scientific "truth".

But probably the most difficult concept to get across to nonscientists is that we look at data and then use probabilities to judge those data. The public wants an absolute black-and-white answer. We may look at something that is 80 percent likely as being good enough to base decisions on.

We'd like absolute answers, but we realize that sometimes decisions must be made with partial data or some uncertainties. And ... as we collect more data, what we thought of as truth might change.

If we can be patient and explain this to nonscientists -- how we are seeking truth with the best tools available -- they are less likely to be negative or skeptical of our conclusions. (source)

Of course we need better science education to address this. But in the meantime, if I can make a request of my fellow scientists, especially those influencing policy, don't oversell your work. A public led to think there is certainty on a scientific issue when there isn't will feel betrayed when the scientific consensus evolves. And a public who doesn't grasp how dissent and uncertainty are part of the normal scientific dialogue is more likely to give credence to pseudoscientific views promulgated by outliers with no real credibility.

During the recent Origins Symposium at ASU, which aired on NPR's Science Friday two weeks ago, one of the panelists (I think it was Lawrence Krauss) said:

We tend to sometimes hype things too much, and we have to beware. We've got to be careful about saying what is likely to happen. We want to promote things - after all the Large Hadron Collider costs a lot of money, and we try to convince people to spend money to do something. And we often like to say it's going to recreate the early universe. . . and sometimes that comes back to bite us. . . I think it's very important that scientists try of course to get people interested in what we're doing, but not overhype the situation, because it's always bad. And in fact, it's exactly that. If we say we're guaranteed to discover all these new particles in the LHC, and we see nothing, how are we supposed to come back later and say, 'you know, that was what we really wanted.'

Exactly! Although another panelist (I think it was Brian Greene) then noted that overhyping things - for example, asking whether the LHC will create a black hole - sometimes opens up an opportunity to convey accurate information. But that's a risky strategy. Listen to the entire interesting discussion here.

A failure to understand the role of uncertainty and evidence in science may be why a lot of people espouse the views criticized in this video, popularized lately on Pharyngula:

I'd be showing that video in my philosophy of science course, if I were still teaching.

In my experience, the problem is usually the opposite --- a reflex refusal to draw a conclusion even from very good data. Instead, what should be conclusions are buried in a steaming heap of qualifiers, reservations, lists of what could be wrong, tabulations of uncertainties, and on and on.

What scientists need to do is to quantify uncertainty and draw conclusions anyway. This is critically important in policy-relevant areas where the over-emphasis on uncertainty has very bad real-world consequences.
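To make the commenter's suggestion concrete, here is a minimal sketch (my own illustration with made-up numbers; `wilson_interval` is a hypothetical helper, not anything from the discussion): quantify the uncertainty with a confidence interval, then draw the conclusion anyway by checking whether even the pessimistic end of the interval clears a decision threshold.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical data: 41 of 50 trials supported the hypothesis.
lo, hi = wilson_interval(41, 50)
print(f"Estimated rate: {41/50:.2f}, 95% CI: ({lo:.2f}, {hi:.2f})")

# Quantify the uncertainty, then conclude anyway: act if even the
# lower bound of the interval clears the decision threshold.
threshold = 0.5
if lo > threshold:
    print("Conclusion: the effect is well supported; act on it.")
```

The point of the sketch is that the qualifiers get compressed into one number (the interval) rather than a heap of reservations, and a clear yes/no conclusion still comes out the other end.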

By ecologist (not verified) on 14 Apr 2009 #permalink

All of the usual cautions aside, the public wants to know if something is a good bet. As in, "can I trust my child's health to this vaccine?" Absolute certainty comes across as not having respect for the parent's concern; qualifiers come across as a lack of confidence.

"The simplest thing I can tell you is that I trust this information enough that I gave the vaccine to my children" is more along the right direction.

By D. C. Sessions (not verified) on 14 Apr 2009 #permalink

With all due respect, both of you are making my point for me. :)

The public is frustrated when scientists hedge their conclusions with "ifs" and "maybes" precisely because the public doesn't understand that science is about finding a preponderance of evidence, not about finding absolute truth. That's a science education/science literacy problem.

It may be tempting for scientists to respond to confusion and frustration by overstating the evidence and saying, "Fine: THIS is the truth." But ask yourself what the downside of that strategy is. Communication research shows that when a strong, forceful message is proven false, it isn't just ignored; it actually increases resistance to future messages. Basically, if scientists exaggerate their evidence to support a message, then when the public realizes several studies later that the message was premature or inaccurate, they will be more skeptical of science in general and trust scientific experts less.

That's not good for us, is it? Especially when scientists advise policymakers, who make policy based on their recommendations. That's why it's important to convey that research has caveats and that science doesn't give you absolute certainty. It's not easy, but it's quite possible to do that while making policy recommendations. I have colleagues who do it all the time.

Being able to explain research clearly to the general public, as in a newspaper interview, is a different issue. Of course the public doesn't need to hear a long list of caveats about why an experiment may or may not have proven something or other. That's common sense! Researchers who do that don't have a gift for communicating to the public. Good science communicators can convey the scientific consensus on an issue while also noting the limitations of a study that are relevant to the audience - for example, if a clinical trial was only done on men and not women, or had a very small sample size.

With all due respect, both of you are making my point for me. :)

Thank'ee kindly, Ma'am. We aim to please.

By D. C. Sessions (not verified) on 14 Apr 2009 #permalink

Honestly I thought that video was thoroughly contrived, but I wouldn't mind showing it to the execs at SciFi channel who keep renewing Ghosthunters!

By Joe Leasure (not verified) on 14 Apr 2009 #permalink

The problem is not scientists. The problem is dumbfuck "science journalists" who copy paste press releases and then add a fuckton of intensifiers and dramatifiers.

ROTFLMAO! Science journalists who say those kinds of things are indeed a blight on the face of the Earth. We can all drink a Jameson to that!

This is why the best scientists make the worst expert witnesses. Lawyers and juries like witnesses who are sure of things rather than witnesses who will only say that "the evidence is consistent with" whatever theory they are there to support.

Damn the lower half of the bell curve . . . they're the only ones that can't find excuses to get out of jury duty!

Great point, Chris! I think being a neuroscientist, and thus knowing all about the faults in human memory, will probably get me off any juries for the foreseeable future.