Risk, Fear, Certainty

Apologies, once again, for the blogging silence. I was busy in London, on tour for the UK version of the book, which is called "The Decisive Moment". (We got some great press, including being featured as "Book of the Week" by BBC Radio 4.)

Although book tours can, on occasion, be frustrating and grueling - I'm so sick of airport food that I don't even like Egg McMuffins anymore, and I'm getting to the point where I detest the sound of my own voice - one of the genuine highlights is getting to answer questions from your readers. As an author, there is nothing more exciting than learning which parts of the book people find interesting and want to know more about, which points they disagree with, and which stories resonate with their own experience. It's the thrill of seeing your words enter the world, of seeing a mass of pixels in a word processor become a collection of ink blots on pages made from dead trees. And then, because these ink blots are arranged just so, they can enter the mind of someone else, so that your sentences get remixed and reanalyzed. A single idea, typed more than a year ago on a laptop keyboard, has multiplied itself into a swarm of ideas.

One of the most interesting questions I've gotten while on tour goes something like this: "Given the massive decision-making flaws exposed by the current economic mess, what variables should scientists be investigating in the future so that we can better understand how we got here? In other words, what will be the hot topic in decision-making science two years from now?"

Here's my sputtering answer: I think the financial crisis has helped expose a powerful bias in human decision-making, which is our abhorrence of uncertainty. We hate not knowing, and this often leads us to neglect relevant information that might undermine the certainty of our conclusions. Some of the most compelling research on this topic has been done by Colin Camerer, who has had subjects play a simple gambling game based on the Ellsberg paradox while inside an fMRI machine. To make a long story short, Camerer showed that players in the uncertainty condition - they were given less information about the premise of the game - exhibited increased activity in the amygdala, a center of fear, anxiety and other aversive emotions. In other words, we fill in the gaps of our knowledge with fright. This leads us to find ways to minimize our uncertainty - we can't stand such negative emotions - and so we start cherry-picking facts and forgetting to question our assumptions.
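For readers who haven't seen the Ellsberg paradox spelled out, here's a minimal sketch of the classic two-urn version (my toy numbers, not Camerer's actual task). One urn holds a known 50/50 mix of red and black balls; the other holds an unknown mix. Most people prefer betting on the known urn whether the bet is on red or on black - a preference that no single subjective probability can rationalize:

```python
# A minimal sketch of the two-urn Ellsberg setup (illustrative only).
# Urn A ("risky"): 50 red, 50 black -- the odds are known.
# Urn B ("ambiguous"): 100 balls, red/black mix unknown.

def expected_payoff(p_win: float, prize: float = 100.0) -> float:
    """Expected value of a bet paying `prize` with probability p_win."""
    return p_win * prize

p_red_A = 0.5
for p_red_B in (0.3, 0.5, 0.7):  # candidate beliefs about Urn B
    prefers_A_on_red = expected_payoff(p_red_A) > expected_payoff(p_red_B)
    prefers_A_on_black = expected_payoff(1 - p_red_A) > expected_payoff(1 - p_red_B)
    print(f"p(red in B)={p_red_B}: prefer A on red={prefers_A_on_red}, "
          f"prefer A on black={prefers_A_on_black}")

# No belief about Urn B makes both preferences rational at once:
# preferring A on red implies p(red in B) < 0.5, while preferring A
# on black implies p(red in B) > 0.5. Yet subjects reliably prefer A
# on both bets. That extra aversion is the ambiguity the amygdala
# appears to be registering.
```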

A similar phenomenon is also at work when we're confronted with too many equivalent options, which is what happens to me every time I have to pick a toothpaste. There's some suggestive evidence from Akshay Rao that this "trade-off aversion" - do I want the Colgate Total or the Crest Pro-Health? - leads to increased activation in areas that are often associated with cognitive conflict and the detection of errors, such as the anterior cingulate cortex. This helps explain why I start getting anxious whenever I near the toothpaste aisle of the supermarket.

While Camerer's experiment is fascinating, I'd love to see data from a more realistic set of experiments. Why not bring in actual investment bankers and watch how they respond to varying levels of information, or how giving them a quantitative model that's supposed to assess risk (and thus remove the uncertainty) alters their decision-making process?

Which brings me to Dennis Overbye's fascinating analysis of "The Wall Street Physicists". He looks at how the rise of quants bearing impossibly complicated mathematical formulas gave financial firms a new kind of confidence to engage in risky trades and investment innovations:

The Black-Scholes equation resembles the kinds of differential equations physicists use to represent heat diffusion and other random processes in nature. Except, instead of molecules or atoms bouncing around randomly, it is the price of the underlying stock.

The price of a stock option, Dr. Derman explained, can be interpreted as a prediction by the market about how much bounce, or volatility, stock prices will have in the future.

But it gets more complicated than that. For example, markets are not perfectly efficient -- prices do not always adjust to the right level and people are not perfectly rational. Indeed, Dr. Derman said, the idea of a "right level" is "a bit of a fiction." As a result, prices do not fluctuate according to Brownian motion. Rather, he said: "Markets tend to drift upward or cascade down. You get slow rises and dramatic falls."

One consequence of this is...that when you need financial models the most -- on days like Black Monday in 1987 when the Dow dropped 20 percent -- they might break down. The risks of relying on simple models are heightened by investors' desire to increase their leverage by playing with borrowed money. In that case one bad bet can doom a hedge fund. Dr. Merton and Dr. Scholes won the Nobel in economic science in 1997 for the stock options model. Only a year later Long Term Capital Management, a highly leveraged hedge fund whose directors included the two Nobelists, collapsed and had to be bailed out to the tune of $3.65 billion by a group of banks.
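As an aside, the formula at the center of all this is short enough to fit in a few lines of code. Here's a minimal sketch of the closed-form Black-Scholes price for a European call - the standard textbook version, nothing proprietary. The volatility input is exactly the "bounce" Derman describes; run the formula in reverse from a market price and you get the market's implied volatility:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S: float, K: float, T: float,
                       r: float, sigma: float) -> float:
    """Black-Scholes price of a European call option.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility (the "bounce").
    Assumes log-prices follow Brownian motion -- the very assumption
    Derman calls a bit of a fiction.
    """
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# An at-the-money one-year call with 20% volatility and 5% rates:
print(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.20))  # ~10.45
```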

The collapse of LTCM is a microcosm of so many of the cognitive flaws that led us to the current mess. Because everybody at LTCM believed in the state-of-the-art model, few were thinking about how the model might be catastrophically incorrect. The hedge-fund executives didn't spend enough time worrying about the fat tail events that might disprove their theories. Instead, they pretended that the puzzle of the marketplace had been solved - the uncertainty of risk had been elegantly quantified.
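To put a rough number on what a "fat tail event" means - using invented figures, not LTCM's actual model - compare the odds of a Black Monday-sized one-day drop under a thin-tailed Gaussian and under a fat-tailed Student-t distribution with the same variance:

```python
# How likely is a one-day 20% drop if daily returns have a 1%
# standard deviation? That's a 20-sigma move. (Toy numbers.)
from scipy.stats import norm, t

z = -0.20 / 0.01  # -20 standard deviations

p_thin = norm.cdf(z)              # Gaussian (thin-tailed) model
df = 3                            # Student-t with fat tails,
p_fat = t.cdf(z * t.std(df), df)  # rescaled to the same variance

print(f"P(<= -20% in a day), Gaussian: {p_thin:.1e}")  # ~3e-89
print(f"P(<= -20% in a day), t(3):     {p_fat:.1e}")   # ~3e-5
```

Under the Gaussian model, Black Monday is impossible for all practical purposes; under the fat-tailed model, it's merely rare. A model that quantifies away the first kind of uncertainty feels certain right up until the second kind arrives.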

Once this happens, we start making serious mistakes. The errors inherent in the model are compounded by our desire to prove the model right. Instead of using our reasoning powers to improve our predictions, we use reason to reassure ourselves, to rationalize away the warning signs of failure. Our sense of certainty - the model must be right - is dishonestly preserved. And so LTCM ignored the brewing troubles in the Asian markets. The executives discounted the rumors that Russia might default. They ruled out the possibility of a market meltdown, which led them to take massive risks that didn't appear risky. Because LTCM was operating under the spell of certainty, it ended up making a series of dangerous decisions.

Replace LTCM with, well, just about every major financial firm, and replace Russian and Asian markets with "subprime debt," and it's the same old story. Models can be a crucial decision-making tool, but they can also lead us to disaster. This isn't the fault of the models, or even the quants - it's the fault of all those executives who used these models they didn't really understand to silence their amygdala, so that their fear of risk disappeared. They were certain there was little to worry about, which is generally a sign that we should start getting scared.


NPR has had an interesting series (which you have written about as well) about how we want to buy stocks when everyone else is buying (and prices are high), and how now, when prices are low, we are reluctant to buy because of "our abhorrence of uncertainty." Prices certainly may go lower, but we were so happy to buy at such inflated prices because prices would go higher.

I like the way you have made connections between economic theory, modern financial technology (developed by those quants whom Buffett rejects), and their usually missed neural substrata.

And that's the great challenge: how to connect microeconomics (decisions made by individuals, which undoubtedly depend on the brain) with macroeconomics.

Your book has broken the iPlayer - it works, but claims that Episode 3 has "10541 days left to listen" :)

Wired recently explored the failures of the Gaussian copula function, which in essence modeled complex risk, thereby increasing certainty surrounding the trading of untested securities. In the end, the article makes the same point as Mr. Lehrer: "This isn't the fault of the models, or even the quants - it's the fault of all those executives who used these models they didn't really understand to silence their amygdala, so that their fear of risk disappeared."

It is a fascinating read:

http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all
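For anyone curious about what Li's formula actually does, here is a minimal sketch of the one-factor Gaussian copula idea, with made-up parameters rather than any bank's real calibration. Each loan defaults when a mix of a shared market factor and loan-specific noise falls below a threshold; turning up the correlation leaves each loan's default probability unchanged but fattens the tail of portfolio losses:

```python
# One-factor Gaussian copula sketch (illustrative parameters only).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_loans, n_sims = 100, 50_000
p_default = 0.05                        # 5% chance each loan defaults
threshold = norm.ppf(p_default)

for rho in (0.0, 0.3):                  # loan-to-loan correlation
    M = rng.standard_normal((n_sims, 1))        # shared market factor
    e = rng.standard_normal((n_sims, n_loans))  # idiosyncratic noise
    Z = np.sqrt(rho) * M + np.sqrt(1 - rho) * e
    defaults = (Z < threshold).sum(axis=1)
    # Chance that a tranche built to absorb 10 of 100 defaults is hit:
    print(f"rho={rho}: P(>10 defaults) = {(defaults > 10).mean():.4f}")

# rho=0.0 gives roughly 0.01; rho=0.3 gives roughly 0.12 -- the same
# "safe" tranche is an order of magnitude riskier once defaults cluster.
```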

This is beautifully illustrated in The Black Swan, everyone's favorite book on uncertainty and prediction.

The quants KNOW how many assumptions go into their models, and how dangerously inaccurate and misleading they can be. But when it's suggested that they not rely on them, they're terrified, because "the models are all we have!"

Our brains were designed for the African savannah, where it paid to be afraid of the unknown. Leaving the safety of your village and going into unknown territory generally meant getting eaten by a lion. Our world today is infinitely more complex, but our brains haven't caught up to it. They still expect to be eaten by a lion if they're not careful.

By hegemonicon (not verified) on 10 Mar 2009 #permalink

There is an interesting blog post at "socializing finance" http://is.gd/mMe7 about a sociological study of a derivatives trading room at a large bank on Wall Street that built doubt into their processes and apparently avoided some of the disasters of the current financial crisis.

By the way, your book, The Decisive Moment, is my book of the year - really good stuff, beautifully communicated.

What a great book! I live in India, and I managed to procure the book from a library (my respect for their pre-emptive procurement has gone up!). It was a purely impulse pick-up, but I guess my dopamine was on overdrive!!!

There's something called a 'boundary condition' that enables engineers and scientists to specify the extra information required to solve differential equations. The general equations may be 'solved' in the abstract, but they still require a seed crystal, so to speak, to generate a solution in the real world.

You'd be surprised to know that engineers have liberally sprinkled their own interpretations of boundary conditions into their models; but perhaps you won't be surprised to know that a decent number of executives don't even KNOW this happens. It's a finished product for them, and it's messy to go into the details of WHY a certain price level flashes. If anyone had stopped and asked WHY, instead of HOW, all the time, there would have been some cognizance of this.
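To make that concrete, here is a toy explicit finite-difference solver for the Black-Scholes equation (all parameters invented), run twice with different choices of upper boundary condition. Someone has to decide what happens at the edge of the grid, and that decision is baked into every price the screen flashes:

```python
# Toy finite-difference pricer for a European call, illustrating how
# the choice of boundary condition at the top of the grid changes the
# answer. Parameters are invented for illustration.
import numpy as np

S_max, n_S, T, n_t = 200.0, 101, 1.0, 1000
K, r, sigma = 100.0, 0.05, 0.20
S = np.linspace(0.0, S_max, n_S)
dS, dt = S[1] - S[0], T / n_t

def price_call(upper_boundary: str) -> np.ndarray:
    V = np.maximum(S - K, 0.0)          # payoff at expiry
    for step in range(n_t):             # march backward from expiry
        tau = (step + 1) * dt           # time remaining to expiry
        V_new = V.copy()
        V_new[1:-1] = V[1:-1] + dt * (
            0.5 * sigma**2 * S[1:-1]**2 * (V[2:] - 2*V[1:-1] + V[:-2]) / dS**2
            + r * S[1:-1] * (V[2:] - V[:-2]) / (2 * dS)
            - r * V[1:-1]
        )
        V_new[0] = 0.0                  # a call is worthless at S = 0
        if upper_boundary == "standard":
            # deep in the money: the option behaves like the stock
            # minus the discounted strike
            V_new[-1] = S_max - K * np.exp(-r * tau)
        else:
            V_new[-1] = V_new[-2]       # naive "flat" extrapolation
        V = V_new
    return V

for bc in ("standard", "flat"):
    V = price_call(bc)
    print(f"{bc:8s} boundary: V(S=180) = {V[np.searchsorted(S, 180)]:.4f}")

# The two runs disagree near the top of the grid -- same equation,
# same data, different engineering judgment at the edge.
```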

I don't think it's realistic to expect executives to get smarter about black box models. The problem is not that they are using a model; the problem is that all of them are using almost the same model. The only way to solve this is to treat it as a public-good problem, like national defense, and find public policy solutions.

Seems to me that rather than just impotently waiting for evidence to come in over the transom on the latest Ponzi scheme, a body like the SEC ought to get engaged in the equivalent of war games. Put together a team of their own quants to see what will break these models. If it's a plausible scenario, like (for example!) housing prices falling, they need to restrict the use of the model for risk management. Sounds too simple, huh?

Another thing I read suggested that the models treated mortgage failures as independent events, when in reality they are not. So the calculated risk on bundled mortgages was much lower than the actual risk.
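A back-of-the-envelope version of that point, with invented numbers: keep the average default rate fixed at 5% and only change whether defaults cluster:

```python
# 100 mortgages, each defaulting with 5% probability on average.
from scipy.stats import binom

# Independent model: defaults are Binomial(100, 0.05).
p_indep = binom.sf(10, 100, 0.05)    # P(more than 10 defaults)

# Clustered model: a 10% chance of a housing bust with a 32% failure
# rate, else a 2% failure rate (0.1*0.32 + 0.9*0.02 = 0.05, same average).
p_clustered = 0.1 * binom.sf(10, 100, 0.32) + 0.9 * binom.sf(10, 100, 0.02)

print(f"P(>10 defaults), independent: {p_indep:.4f}")      # ~0.01
print(f"P(>10 defaults), clustered:   {p_clustered:.4f}")  # ~0.10
```

Same expected number of defaults, roughly ten times the chance of a wipeout - which is the gap between the calculated risk and the actual risk on those bundles.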

As has been discovered, there are amygdalae to be frightened of, especially when reason does not exist, as seen in my art.

By lee pirozzi (not verified) on 12 Mar 2009 #permalink

Capital markets are unstable. In the past there was no way to make them stable, but today we have the computing power to do it. By using that power we can support a much higher turnover of capital in the capital market. This higher turnover would make the market harder to game or control, and the market would no longer have the unstable run-ups or declines. Who can change or control the market when, say, 20% of the capital is trading each day?

So now that we have the computing power to handle all these transactions, how do we get people to turn over capital at a rate of 20% a day? Easy: put a capital gains tax of 0% (zero) on all gains held 7 days or less, and a capital gains tax of 90% on all gains held more than 7 days. The likes of Yahoo, Microsoft and/or Sun Microsystems will give us automated software agents to support turning over one's investments every 7 days (based on the specs you give the agent). A system like this would make the financial markets work as smoothly as the local fruit market.

By Martyn Strong (not verified) on 12 Mar 2009 #permalink

There is another line of research that needs to be explored: how and why groups of intelligent professionals can persist in accepting assumptions without critical analysis. It was not a single banker who made these financial mistakes. It was whole departments that persisted in error.

A good non-financial example is the German General Staff in 1940. The German army simply did not have the resources to support a long war at the end of an ever-lengthening supply line. So the staff assumed that the Red Army could be defeated both quickly and before it could retreat far from the borders. There were never any facts to support these assumptions.

Nor were there facts to support the assumption that housing prices would always rise or that the other assumptions of very scientific formulas would always be correct.

By Chuck Vekert (not verified) on 14 Mar 2009 #permalink

Mmmmm... for myself, I'm coming closer to a grand unified theory of the financial crisis, among other things: Humans Are Permanently Stupid in Incurable Ways. Glib, perhaps, and yet we seem -- more evidence? :) -- unable to grasp this. We seem to believe fundamentally in progress, that we are capable of fully comprehending complex subjects, and that we continually grow closer to full comprehension. In a way, as you allude to above, the entire crisis was born of bankers thinking they understood risk better than they did -- and because they understood it, could hedge against it, and because they could hedge against it, were able to take much, much greater risks than before...

Let me suggest another factor: the money managers' interests have become detached from those of the investors. Managers stand to gain billions from a few good quarters of performance. If the fund blows up, well, you lose your job, but it wasn't your money anyway. This "heads I win, tails you lose" philosophy almost perfectly captures the logic of the big bank managers in the latest debacle. The results were quite predictable, if your model takes that feature into account.
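A stylized sketch of that asymmetry, with invented numbers: suppose a manager keeps 20% of gains and bears none of the losses. Even if every bet has zero expected return for the investors, the manager's expected fee grows with the size of the gamble:

```python
# "Heads I win, tails you lose": simulate zero-mean bets at various
# volatilities and compute the manager's expected fee. (Toy numbers.)
import numpy as np

rng = np.random.default_rng(1)
fund = 1_000_000_000                       # $1B under management
for vol in (0.05, 0.20, 0.80):             # leverage cranks up vol
    returns = rng.normal(0.0, vol, 100_000)          # zero-mean bets
    fees = 0.20 * np.maximum(returns, 0.0) * fund    # 20% of gains only
    print(f"vol={vol:.2f}: investors' expected gain = $0, "
          f"manager's expected fee = ${fees.mean():,.0f}")

# The expected fee scales linearly with volatility, so the rational
# move for the manager is to take the biggest gamble available.
```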

"We hate not knowing, and this often leads us to neglect relevant information that might undermine the certainty of our conclusions."

The best short explanation I have ever seen for the belief in god(s) and the existence of religion.

"The hedge-fund executives didn't spend enough time worrying about the fat tail events that might disprove their theories."

How is it we know (just from the fact that the unexpected events happened) that the tail was fat? Can't events in a skinny tail happen? I've never understood the inference of a fat tail from the fact that in a single trial (the one we live) the tail event occurred. Is there a natural tendency to see not only error, but stupid error made through use of a misspecified stupid model?
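One way to make that question precise is a likelihood comparison - a sketch with invented numbers, reusing the thin- and fat-tailed toy models from the post. A single crash can't logically rule out a skinny tail, but it can weigh overwhelmingly in favor of the fat one:

```python
# How much more probable is a -20% day under a fat-tailed model than
# under a thin-tailed one, given ~1% typical daily volatility?
from scipy.stats import norm, t

x, sigma = -0.20, 0.01
c = sigma / t.std(3)                     # rescale t(3) to std = sigma

like_thin = norm.pdf(x, 0, sigma)        # density under the Gaussian
like_fat = t.pdf(x, 3, scale=c)          # density under Student-t(3)

print(f"density under normal: {like_thin:.2e}")                 # ~6e-86
print(f"density under t(3):   {like_fat:.2e}")                  # ~4e-4
print(f"likelihood ratio, fat/thin: {like_fat/like_thin:.2e}")  # ~1e82
```

So yes, events in a skinny tail can happen - but if your model says the event you just watched had probability 10^-86, the sensible inference is not "we got unlucky," it's "the model is misspecified."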

From Wired magazine:

Recipe for Disaster: The Formula That Killed Wall Street

For five years, Li's formula, known as a Gaussian copula function, looked like an unambiguously positive breakthrough, a piece of financial technology that allowed hugely complex risks to be modeled with more ease and accuracy than ever before. With his brilliant spark of mathematical legerdemain, Li made it possible for traders to sell vast quantities of new securities, expanding financial markets to unimaginable levels.

Where was Li from?
China
Where is Li now?
China
Sabotage?

By truthynesslover (not verified) on 15 Mar 2009 #permalink