“How We Decide” author Jonah Lehrer, fresh from a book tour of the UK, offers what he calls a “spluttering answer” (it’s really quite lucid) to a question he says he’s getting a lot these days: What decision-making errors were involved in our current financial meltdown?
The short version of his answer — well worth reading in its entirety — is that we (and big investment outfits particularly) succumbed to an abhorrence of uncertainty.
We hate not knowing, and this often leads us to neglect relevant information that might undermine the certainty of our conclusions. I think some of the most compelling research on this topic has been done by Colin Camerer, who has played a simple game called the Ellsberg paradox with subjects in an fMRI machine. To make a long story short, Camerer showed that players in the uncertainty condition – they were given less information about the premise of the game – exhibited increased activity in the amygdala, a center of fear, anxiety and other aversive emotions. In other words, we filled in the gaps of our knowledge with fright. This leads us to find ways to minimize our uncertainty – we can’t stand such negative emotions – and so we start cherry-picking facts and forgetting to question our assumptions.
In other words, we look for false certainty. And in the case of the financial meltdown, much of that false certainty was found in fancy financial “instruments,” like mortgage-based derivatives, that promised to encapsulate and contain risk – but which have turned out to be so risky they’re bringing down the whole system. The dynamics these instruments claim to represent and control are almost impossibly arcane and complex — but they got boiled down to formulas that, while flummoxing to normal people, had just the right combination of complexity and simplicity — complexity apparently solved — to convince mathematical investor types that they solved essential problems and put risk in a bottle.
Felix Salmon’s recent Wired article describes one such instrument masterfully. Jonah cites another, which I haven’t read, by Dennis Overbye. In both cases, overconfidence in these models, which were supposed to virtually eliminate risk, encouraged catastrophic risk-taking. As Jonah puts it,
Because everybody at LTCM believed in the state-of-the-art model, few were thinking about how the model might be catastrophically incorrect. The hedge-fund executives didn’t spend enough time worrying about the fat tail events that might disprove their theories. Instead, they pretended that the puzzle of the marketplace had been solved – the uncertainty of risk had been elegantly quantified.
Jonah is dead right about this, and I highly recommend Salmon’s article (as Jonah does Overbye’s) for a look at how this sort of elegant techy solution can breed a false confidence.
But I wanted to note a strong parallel in today’s medicine: our tremendous faith in high technology, and particularly in imaging technology. These images are so detailed and granular we tend to think they see everything, but they don’t. They often get things wrong, producing both false positives and false negatives; but the allure is so great, and the process so satisfyingly neat and unmessy, that both the public and most doctors have far too much faith in them. When I did a story on the death of the autopsy a few years ago, I asked every doctor I knew whether they ever asked for autopsies. One of my own doctors told me, “No. When someone dies we generally know why.” And many doctors cited imaging technology as the reason they knew the cause of death. For this reason, we hardly ever do autopsies anymore — the U.S. used to autopsy 50% of deaths; now we do under 5%. Yet every time someone compares autopsy reports to the pre-autopsy declared cause of death, the autopsy reveals contributing problems that were missed in about 15% of the deaths.
I’m no Luddite; I love my technology, and I’m as thrilled with the judicious use of imaging as anyone. But as we try to rework our health-care system to make it more efficient, we need to realize that we seem almost congenitally overconfident in high-tech answers. This isn’t a reason to toss out all the scanners. But it’s a great reason to subject high-tech procedures and tools to rigorous comparative effectiveness studies.
PS For a bracing look at certainty’s problematic allure, check out Robert Burton’s On Being Certain.