In a recent New Yorker, John Cassidy spends time with a number of influential economists at the University of Chicago, home to the Chicago School and its emphasis on the productive efficiency of free markets. Obviously, the financial maelstrom of the last few years has led many to question this premise, at least in its strongest form. How have these economists reacted? If you read my recent article in Wired on the psychology of failure, you probably aren’t too surprised to learn that Cassidy finds several eminent Chicago economists who insist that the market failure wasn’t actually a failure, or that, if there was a failure, it didn’t involve the markets. In other words, their assumption remains intact – it’s the evidence that’s so flawed.
Here, for instance, is Cassidy interviewing Eugene Fama:
I asked him how this theory [the efficient-markets hypothesis, which “underpinned the deregulation of financial markets”] had fared in the recent crisis, which many, myself included, have described as an example of gross inefficiency. Fama was unruffled. “I think it did quite well in this episode,” he said. “Stock prices typically decline prior to a recession and in a state of recession. This was a particularly severe recession. Prices started to decline in advance of when people recognized that it was a recession and then continued to decline. That was exactly what you would expect if markets were efficient.”
The emphasis that Fama placed on the stock market surprised me. Surely, I said, we had experienced a giant credit bubble, which eventually had burst. “I don’t know what a credit bubble means,” Fama replied, his eyes twinkling. “I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning…People have jumped on the bandwagon of blaming financial markets. I can tell a story very easily in which the financial markets were a casualty of the recession, not a cause of it.”
The interview continues in a similar vein. The point is that nothing in the last few years, at least in Cassidy’s telling, has led Fama to reconsider his theoretical assumptions. The financial markets are efficient; government regulation is to blame. In my Wired article, I discuss some of the neuroscience behind such intellectual stubbornness, and the way the brain cleverly dismisses dissonant information.
But Cassidy’s excellent article also made me think about the role of colleagues in triggering new ideas, and the potential dangers of working in a department filled with people who share the same ideology. Here I describe the research of Kevin Dunbar, who spent several years watching scientists work:
While the scientific process is typically seen as a lonely pursuit — researchers solve problems by themselves — Dunbar found that most new scientific ideas emerged from lab meetings, those weekly sessions in which people publicly present their data. Interestingly, the most important element of the lab meeting wasn’t the presentation — it was the debate that followed. Dunbar observed that the skeptical (and sometimes heated) questions asked during a group session frequently triggered breakthroughs, as the scientists were forced to reconsider data they’d previously ignored. The new theory was a product of spontaneous conversation, not solitude; a single bracing query was enough to turn scientists into temporary outsiders, able to look anew at their own work.
But not every lab meeting was equally effective. Dunbar tells the story of two labs that both ran into the same experimental problem: The proteins they were trying to measure were sticking to a filter, making it impossible to analyze the data. “One of the labs was full of people from different backgrounds,” Dunbar says. “They had biochemists and molecular biologists and geneticists and students in medical school.” The other lab, in contrast, was made up of E. coli experts. “They knew more about E. coli than anyone else, but that was what they knew,” he says. Dunbar watched how each of these labs dealt with their protein problem. The E. coli group took a brute-force approach, spending several weeks methodically testing various fixes. “It was extremely inefficient,” Dunbar says. “They eventually solved it, but they wasted a lot of valuable time.”
The diverse lab, in contrast, mulled the problem at a group meeting. None of the scientists were protein experts, so they began a wide-ranging discussion of possible solutions. At first, the conversation seemed rather useless. But then, as the chemists traded ideas with the biologists and the biologists bounced ideas off the med students, potential answers began to emerge. “After another 10 minutes of talking, the protein problem was solved,” Dunbar says. “They made it look easy.”
When Dunbar reviewed the transcripts of the meeting, he found that the intellectual mix generated a distinct type of interaction in which the scientists were forced to rely on metaphors and analogies to express themselves. (That’s because, unlike the E. coli group, the second lab lacked a specialized language that everyone could understand.) These abstractions proved essential for problem-solving, as they encouraged the scientists to reconsider their assumptions. Having to explain the problem to someone else forced them to think, if only for a moment, like an intellectual on the margins, filled with self-skepticism.
The lesson is that the process of discovery benefits from our differences, from the disagreements and contradictions that arise when people with different assumptions discuss the same data. When everyone agrees, or has the same academic background, then the stubbornness is reinforced. The theory doesn’t change. The School of X – and it doesn’t matter what X is – remains tethered to its dusty preconceptions. The failure never leads to a better answer.