The year was 2007. The event was the 118th annual meeting of the American Economic Association. The person was George A. Akerlof, recipient of the 2001 Nobel Prize in Economics and newly elected president of the AEA, who stepped up to the podium to deliver his presidential address titled “The Missing Motivation in Macroeconomics”. The missing motivation was norms.
Norms? Economists have been the primary advisors on public policy and they’re only newly considering a little thing called norms? This is the kind of disconnect between economic theory and reality that makes outsiders rub their eyes in disbelief.
Yet, it is important to resist the facile conclusion that economists are delusional as individuals. Most economists are very smart, very well informed, and well-intentioned. We therefore have a mystery to solve that is far more interesting and consequential than mocking economists: What are the social and intellectual dynamics that cause smart, well-informed, and well-intentioned people to ignore something as manifestly important for our species as norms?
Akerlof offers a perceptive diagnosis in his address, which is published in the March 2007 issue of the American Economic Review. Evidently, the school of economic thought based on John Maynard Keynes was based on a commonsense view of human nature, or “their own introspection regarding how the various actors in the economy would behave,” as Akerlof puts it. Then something happened that I will let him describe in his own words:
But a new school of thought, based on classical economics, objected to the casual ways of these folks. New Classical critics of Keynesian economics insisted instead that these relations be derived from fundamentals. They said that macroeconomic relationships should be derived from profit-maximizing by firms and from utility-maximizing by consumers with economic arguments in their utility functions.
The new methodology had a profound effect on macroeconomics. Five separate neutrality results overturned aspects of macroeconomics that Keynesians had previously considered incontestable.
In other words, a formal body of mathematical theory became the new gold standard, trumping mere introspection. One can well imagine the glamour and authority of the new methodology, but in retrospect it can be seen as a profound mistake. It is important to go beyond introspection but a theorem-like body of mathematics isn’t right either. Is there a third alternative?
Every field of inquiry hits a complexity wall beyond which a theorem-like body of mathematics cannot go. For physics, it was the complexity revolution so ably recounted by James Gleick in his 1987 book Chaos: The Making of a New Science. Even the relatively simple problem of computing the mutual gravitational interactions of 3 masses in 3-dimensional space hits the complexity wall. In my field of evolution, population genetics theory hits the complexity wall with 3-locus models.
How can any field of inquiry progress beyond the complexity wall? Theory plays a role that is still essential but more humble, exploring small regions of the parameter space like a flashlight rather than trying to illuminate the entire world like the sun. Computer simulation models become an essential supplement to analytical models and need not be regarded as poor cousins. Theory becomes a refinement of intuition that needs to be constantly checked against empirical data to remain anchored to reality.
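To make the flashlight metaphor concrete, here is a minimal sketch of the kind of simulation the complexity wall forces on us: numerically integrating the gravitational three-body problem mentioned above, which has no general closed-form solution. The masses, initial positions, step size, and units (chosen so that G = 1) are all invented for illustration.

```python
# Minimal sketch of the gravitational three-body problem, integrated
# numerically. All values are illustrative; units chosen so that G = 1.
import numpy as np

G = 1.0
masses = np.array([1.0, 1.0, 1.0])

# Arbitrary initial positions and velocities for 3 bodies in 3-D space.
pos = np.array([[ 1.0, 0.0, 0.0],
                [-1.0, 0.0, 0.0],
                [ 0.0, 1.0, 0.0]])
vel = np.array([[0.0,  0.3, 0.0],
                [0.0, -0.3, 0.0],
                [0.0,  0.0, 0.1]])

def accelerations(pos):
    """Pairwise Newtonian gravitational accelerations on each body."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def step(pos, vel, dt=0.001):
    """One leapfrog (kick-drift-kick) step; conserves momentum exactly."""
    vel = vel + 0.5 * dt * accelerations(pos)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos)
    return pos, vel

for _ in range(1000):
    pos, vel = step(pos, vel)
```

No theorem tells us where these three bodies end up; the simulation explores one point of the parameter space at a time, exactly the humbler role for theory described above.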
Economic theory is slowly going in this direction, as Eric Beinhocker (one of the conference participants) recounts in his 2006 book The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics. A lot of the pushing comes from within the field, as we have seen in the case of Maurice Allais (see E&E II) and George Akerlof. The folks who award the Nobel Prize in economics seem especially eager to recognize the pushers, most recently Elinor Ostrom, whose work will be featured in a future installment. All this pushing from within the field signifies the existence of a nearly immovable object. Some deep thinking about the nature of paradigms and the inadequacy of a theorem-like body of mathematics for any field of inquiry might help to make the object more movable.
There is more to the mystery of why core economic theory resists movement in the direction of common sense: Milton Friedman's claim in 1953 that a theory can remain predictive even when its assumptions flagrantly depart from reality. Here is how Akerlof diagnoses the problem in his 2007 presidential address:
The omission of norms from macroeconomics, as well as from economics more generally, can be explained by economists’ adherence to positive economics. Friedman’s (1953) essay on positive economics describes the methodological implications of such belief. In particular, he says that economic theorists should strive for parsimonious modeling. According to Friedman, they should even forsake realistic assumptions in pursuit of such parsimony.
It is important to review Friedman’s reasoning in detail because it relies upon evolution. He presented three examples to illustrate how a theory can be predictive while flagrantly departing from reality. In the first example, the distribution of leaves on a tree can be predicted as if it is trying to maximize exposure to light, when mechanistically it is doing nothing of the sort. In the second example, the behavior of an expert billiards player can be predicted as if he is solving complex equations, when mechanistically he is doing nothing of the sort. In the third example, the behavior of a successful business firm can be predicted on the basis of profit maximization, even if the members of the firm don’t think of it that way. The firm might have become successful by a learning process similar to the second example or a process of selection among firms, similar to genetic selection among trees.
All three examples involve adaptations that evolve by variation-and-selection processes, whether genetic evolution, cultural evolution, or individual learning. The distinction between ultimate and proximate causation in evolutionary theory can help us see both the truth and falsehood of Friedman’s claim.
As every evolutionist knows, all traits that evolve by a variation-and-selection process require two complementary explanations. First, why does a given trait exist, compared to many traits that could exist? Returning to Friedman’s first example, why do trees maximize the exposure of their leaves to light, when there is an infinitude of other ways that they could be configured? The answer to this question is based on what survives at the end of a variation-and-selection process (ultimate causation). Second, how does the trait exist in a mechanistic sense? The answer to this question is based on physical mechanisms, such as those that cause real leaves to be distributed on real trees (proximate causation).
Friedman is right that ultimate causation provides a powerful way to reason about the properties of organisms without any knowledge of proximate mechanisms. He is even right that bothering with proximate mechanisms can get in the way. Suppose that I am a desert biologist interested in animal coloration. I can confidently predict that many desert organisms will be the color of sand, whether they are insects, reptiles, or mammals. I don’t need to know anything about the physical makeup of the organisms, not even their genes. To the extent that proximate mechanisms result in heritable variation, that is the extent to which I can ignore them while concentrating on the shaping influence of selection. Bothering with the physical makeup of sandy colored insects tells me nothing about the physical makeup of sandy colored snakes. Evolutionists use this mode of reasoning all the time. It is part of what is sometimes called the adaptationist program.
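This mode of reasoning can be sketched as a toy simulation. Everything below is a hypothetical illustration: coloration is encoded as a number between 0 and 1, the sand color is arbitrarily set to 0.8, and fitness is simply closeness to that target. The point is that the prediction "the population ends up sand-colored" follows from selection alone, with no knowledge of the proximate mechanism generating the colors.

```python
# Toy variation-and-selection sketch: selection favors coloration close
# to the sand color (arbitrarily set to 0.8 on a 0-to-1 scale).
# The outcome is predictable without knowing how color is produced.
import random

SAND = 0.8          # hypothetical target color
POP_SIZE = 200
GENERATIONS = 100

random.seed(1)
population = [random.random() for _ in range(POP_SIZE)]  # random starting colors

def fitness(color):
    """Closer to the sand color means better camouflage, higher fitness."""
    return 1.0 - abs(color - SAND)

for _ in range(GENERATIONS):
    # Selection: the better-camouflaged half of the population survives.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # Reproduction with small mutations (the variation step).
    population = [min(1.0, max(0.0, c + random.gauss(0, 0.02)))
                  for c in survivors for _ in range(2)]

mean_color = sum(population) / len(population)  # converges near SAND
```

Whether the "organisms" here are insects, reptiles, or mammals changes nothing in the code; that indifference to physical makeup is exactly the power of reasoning from ultimate causation.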
Friedman is wrong that ultimate causation provides a justification for ignoring proximate causation in all respects. That claim would be a flagrant example of what Stephen Jay Gould and Richard Lewontin criticized about adaptationism in their famous article titled “The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Program”, which was published in 1979. This is one of the most widely discussed papers in modern evolutionary thought. Much of the debate centers on whether Gould and Lewontin created a straw man in their portrayal of evolutionists using adaptation as their only explanatory tool. Regardless of the answer to that question, Friedman’s position is every bit as extreme as Gould and Lewontin’s straw man portrayal.
Mature evolutionary research programs pay equal attention to proximate and ultimate causation, which mutually inform each other. No evolutionist in their right mind would claim that ultimate causation eliminates the need to understand proximate causation, other than in certain narrowly defined contexts. Yet that was Friedman’s claim for the field of economics, and it still had a firm grip as late as 2007, judging by Akerlof’s presidential address.
Future installments will demonstrate how evolutionists rely upon both proximate and ultimate causation, providing a model for economics and other theoretical frameworks that guide public policy. For the moment, here’s a bit of advice: If an object resists pushing after decades of effort, try going around it.
To be continued