Phony Experts

Nicholas Kristof has a great column today on Philip Tetlock and political experts, who turn out to be astonishingly bad at making accurate predictions:

The expert on experts is Philip Tetlock, a professor at the University of California, Berkeley. His 2005 book, "Expert Political Judgment," is based on two decades of tracking some 82,000 predictions by 284 experts. The experts' forecasts were tracked both on the subjects of their specialties and on subjects that they knew little about.

The result? The predictions of experts were, on average, only a tiny bit better than random guesses -- the equivalent of a chimpanzee throwing darts at a board.

"It made virtually no difference whether participants had doctorates, whether they were economists, political scientists, journalists or historians, whether they had policy experience or access to classified information, or whether they had logged many or few years of experience," Mr. Tetlock wrote.

Why are political pundits so often wrong? In my book, I devote a fair amount of ink to Tetlock's epic study. The central error diagnosed by Tetlock was the sin of certainty: pundits were so convinced they were right that they ended up neglecting all those facts suggesting they were wrong. "The dominant danger [for pundits] remains hubris, the vice of closed-mindedness, of dismissing dissonant possibilities too quickly," Tetlock writes. (This is also why the most eminent pundits were the most unreliable. Being famous led to a false sense of confidence.)

Is there a way to fix this mess? Or is cable news bound to be populated by talking heads who perform scarcely better than random chance? As Kristof notes, Tetlock advocates the creation of a "trans-ideological Consumer Reports for punditry," which doesn't strike me as particularly feasible. Instead, I'd argue that the easiest fix is for anchors on news shows to simply spend more time asking pundits about all those predictions that turned out to be wrong. The point isn't to generate a public shaming - although that would certainly be more entertaining than most cable news shows - but to force pundits to reflect on why they made the mistake in the first place. Were they too ideological? Were they thinking like a hedgehog, filtering everything through one big idea? What relevant facts did they ignore? And why did they ignore them?

By asking the right questions, we can make our pundits better. They'll never be perfect, or even worth their inflated paychecks, but perhaps we can help them perform better than chimpanzees throwing darts. Tetlock ends his book with some take-home advice on what we can all learn from the failures of political experts: "We need to cultivate the art of self-overhearing," he says, "to learn how to eavesdrop on the mental conversations we have with ourselves." A big part of that mental conversation is studying the biases and habits that led us to err.

And here, via Steve, is a new study by Gregory Berns of Emory on how even bad "expert" advice can influence decision-making in the brain. Here's the Wired summary:

In the study, Berns' team hooked 24 college students up to brain scanners as they contemplated swapping a guaranteed payment for a chance at a higher lottery payout. Sometimes the students made the decision on their own. At other times they received written advice from Charles Noussair, an Emory University economist who advises the U.S. Federal Reserve.

Though the recommendations were delivered under his imprimatur, Noussair himself wouldn't necessarily have followed them. The advice was extremely conservative, often urging students to accept tiny guaranteed payouts rather than playing a lottery with great odds and a high payout. But students tended to follow his advice regardless of the situation, especially when it was bad.

When thinking for themselves, students showed activity in their anterior cingulate cortex and dorsolateral prefrontal cortex -- brain regions associated with making decisions and calculating probabilities. When given advice from Noussair, activity in those regions flatlined.
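To see why that conservative advice counts as "bad," it helps to do the expected-value arithmetic the students apparently offloaded. Here's a tiny worked example in Python; the dollar amounts are hypothetical, not the actual payouts Berns' team used.

    # Hypothetical numbers, not the actual figures from the Berns study:
    # a small guaranteed payment versus a lottery with good odds and a bigger prize.
    guaranteed = 2.00           # take $2 for sure
    prize, p_win = 10.00, 0.60  # or a 60% chance at $10

    expected_lottery = p_win * prize  # 0.60 * 10.00 = $6.00 on average
    print(f"sure thing:   ${guaranteed:.2f}")
    print(f"lottery (EV): ${expected_lottery:.2f}")
    # On average the lottery is worth three times the sure thing, so advice to
    # take the guaranteed payout here is conservative to the point of being costly.

A risk-averse person can still rationally prefer the sure thing in some cases; the striking result is less the choice itself than the fact that, with advice in hand, the brain regions for weighing it went quiet.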

I'm most interested in the reduced activity seen in the ACC, since that brain area is often associated with cognitive conflict. (It's activated, for instance, when people are presented with contradictory facts or dissonant information.) I think one way to understand the influence of experts is that, because they're "experts," we aren't as motivated to think of all the reasons they might be spouting nonsense. If so, that would be an unfortunate neural response, since all the evidence suggests they really are spouting nonsense.


I suggest that the underlying cause of the pundits' poor track record could be the audience's lust for certainty, more than the pundits' own. Pundits who communicate their uncertainty don't appeal to audiences looking for definitive predictions, and are therefore culled from the airwaves and editorial pages in short order. What's left? The ones who believe most strongly and blindly in their own predictions.

Assuming that all of the pundits Tetlock studied fit a certain profile (i.e., they have an audience to hold onto), he likely studied the group of pundits most likely to be wrong. It's conceivable that there are thousands of highly accurate pundits out there, but that nobody wants to hear predictions laced with percentage certainties and caveats. This is why any attempt to hold pundits publicly accountable for their predictions will be difficult. If any one of them starts being less certain of their predictions, they'll promptly be replaced by someone who sounds more certain, by popular demand.

The conclusion of Tetlock's study isn't necessarily that the vast majority of pundits are prone to the sin of certainty and bad predictions. It could also be that we as the audience elevate the most certain-sounding (and therefore most inaccurate) people to the status of "pundit". The solution, therefore, has to target the audience, and not the pundits, IMHO.