The ultimatum game is a simple experiment with profound implications. The game goes like this: one person (the proposer) is given ten dollars and told to share it with another person (the responder). The proposer can divide the money however they like, but if the responder rejects the offer then both players end up with nothing.
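(If it helps to see those rules as code, here is a minimal sketch of the payoff structure in Python; the function name and arguments are just illustrative, and the $10 stake is the one described above.)

```python
def ultimatum_payoffs(offer, responder_accepts, stake=10):
    """Payoffs (proposer, responder) in a one-shot ultimatum game.

    The proposer keeps stake - offer and the responder gets the offer,
    but only if the responder accepts; a rejection leaves both with $0.
    """
    if responder_accepts:
        return stake - offer, offer
    return 0, 0

print(ultimatum_payoffs(offer=1, responder_accepts=True))   # -> (9, 1)
print(ultimatum_payoffs(offer=1, responder_accepts=False))  # -> (0, 0)
```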
When economists first started playing this game in the early 1980s, they assumed that this elementary exchange would always generate the same outcome. The proposer would offer the responder approximately $1, a minimal amount, and the responder would accept it. After all, $1 is better than nothing, and a rejection leaves both players worse off. Such an outcome would be a clear demonstration of our innate selfishness and rationality.
However, the researchers soon realized that their predictions were all wrong. Instead of swallowing their pride and pocketing a small profit, responders typically rejected any offer they perceived as unfair. Furthermore, proposers anticipated this angry rejection and typically tendered an offer around $5. It was such a stunning result that nobody really believed it.
But when other scientists repeated the experiment the same thing happened. People play this game the same way all over the world, and studies have observed similar patterns of irrationality in Japan, Russia, Germany, France and Indonesia. No matter where the game was played*, people almost always made fair offers. As the economist Robert Frank notes, "Seen through the lens of modern self-interest theory, such behavior is the human equivalent of planets traveling in square orbits."
There are, of course, many possible explanations for the ultimatum game. Perhaps we are programmed to protect our reputation, and don't want other people to think of us as too greedy. Perhaps we anticipate the anger of the other person - they'll be pissed off if they get treated unfairly - and so we make a fair offer out of practical necessity. Or maybe we're all instinctive socialists, hard-wired to prefer equitable outcomes.
Interestingly, that last explanation has just gotten some experimental support. (This doesn't mean that the other explanations aren't valid, though. Human behavior rarely has simple causes.) In a paper published last week in Nature, a team of Caltech and Trinity College psychologists and neuroeconomists looked at how the brain's response to various monetary rewards is altered by the context of inequality.
The experiment had a straightforward design. Subjects were slipped into brain scanners and given various cash rewards, such as a gain of $20 or $5. They were also told about rewards given to a stranger. Sometimes, the stranger got more money and sometimes the subject got more. Such is life.
But there was a crucial twist. Before the scanning began, each subject was randomly assigned to one of two conditions: some participants were given "a large monetary endowment" ($50), while the others had to start from scratch.
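(For concreteness, here is a rough sketch of the design as just described; only the $50 endowment and the $20 and $5 reward amounts come from the description above, while the labels and loop below are purely illustrative.)

```python
# Illustrative enumeration of the scanning conditions described above.
# Only the $50 endowment and the $20/$5 amounts come from the study's
# description; the labels and structure are placeholders for exposition.
endowments = {"rich": 50, "poor": 0}   # starting position, in dollars
recipients = ["self", "stranger"]      # who the scanned cash transfer goes to
amounts = [20, 5]                      # example reward sizes

for condition, start in endowments.items():
    for recipient in recipients:
        for amount in amounts:
            print(f"{condition} (starts with ${start}): ${amount} to {recipient}")
```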
Here's where the results get interesting: the reaction of the reward circuitry in the brain (especially the ventral striatum and VMPFC) depended on the starting financial position of the subjects. For instance, people who started out with empty pockets (the "poor" condition) got much more excited when given, say, $20 in cash than people who started out with $50. They also showed less interest when money was given to a stranger.
So far, so obvious: if we have nothing, then every little something becomes valuable; the meaning of money is relative. What was much more surprising, however, is that this same contextual effect also held true for people who began in the position of wealth. Subjects who were given $50 to start showed more reward circuit activity when their "poor" partner got cash than when they were given an equivalent amount of cash. As Colin Camerer, one of the study's authors, noted in the Caltech press release: "Their brains liked it when others got money more than they liked it when they themselves got money."
This is pretty weird, if only because it shows that even our neural response to cold, hard cash - the most uncomplicated of gains - is influenced by social context. We think we're so selfish and self-interested, but our ventral striatum has clearly internalized a little Marx.
That said, these results are still open to interpretation. One possibility discussed by the scientists is that the response of the reward areas in "rich" subjects represents a reduction in "discomfort" over having more, for seemingly arbitrary reasons. (Such are the burdens of being blessed.) This suggests that inequality is inherently unpleasant, at least when we know that the inequality is due to random chance.
But what if we believe that the inequality is deserved? Does the aversion still exist when subjects believe they deserve to be "rich"? In the ultimatum game, for instance, people suddenly start behaving selfishly - they keep the vast majority of the money - when they are given a test before the game begins. (They assume the test has determined who is a proposer and who is a responder.) The lesson, then, is that while we are inequality averse, it's a fragile kind of aversion. Even a hint of meritocracy can erase our guilt.
*A reader alerts me to a correction. The Machiguenga of the Peruvian Amazon play quite differently, as demonstrated here.
You're leaving out a host of evidence here (as do Tricomi et al) about instances in which players don't adhere to the norms that you're claiming they do. Consider the differences when players 'earn' cash vs. the 'windfalls' they receive in the experiment above - how do we know they're not responding in some way to the fact that they got the money 'for free'? See List (2007) and Bardsley (2008).
Also, we know that players are constrained to observing positive allocations and therefore their reactions could be to knowing that the experimenters are using zero as a lower bound, rather than taking money away. Would the subject respond the same way considering the possibility that they'd walk away with nothing? (I'm simplifying here). See Ruffle (1998), Konow (2000), Fahr and Irlenbusch (2000), Cherry et al (2002) and several others.
I blogged about the paper here.
Also, the Machiguenga aren't the only odd ones: The Isanga, Yasawa, Shuar and Samburu also behave quite oddly with the Ultimatum game in terms of their lowest accepted offers. See Barr et al (2009) - 'Homo Aequalis' at the CSAE, Oxford. Also see the Tsimane, Hadza, Susurunga and Yasawa in Marlowe et al's (2008) paper 'More 'Altruistic' Punishment' for some comments on punishment in this respect.
"...our ventral striatum has clearly internalized a little Marx."
Would it be more accurate to say that Marx externalized a little of our ventral striatum?
It would be interesting, and expensive, to test this on larger scales. Instead of ten $1 bills, try it with ten $10 bills, ten $100s, ten Krugerrands, and possibly ten 10-ounce gold bars.
I'd expect at some point most people will accept the greed of the other and agree to a mere tenth of the windfall.
You're right, NoAstronomer. I was being cheeky.
Another possibility is that we're poor-averse. We don't like to see other people be poor, where the meaning of "poor" is relative to our income.
This distinction matters because if dictators are given a choice between a more equal split and a split where everyone is better off but there's more inequality, they will take the latter. See Charness and Rabin 2002.
The bit that I really want to know about is a survey of the proposers that finds out what they thought of the responder if the responder rejected a lowball offer.
i.e., if the proposer offered <$5 and the responder rejected the offer, did the proposer then blame the responder for the proposer not getting the money?
It would also be interesting to have a survey of the people (if any) who rejected proposals of >$5.
(Side blog-comment point: when you preview a comment, the comment system parses any HTML entities and places the parsed entities back in the edit text box, so you have to go through and replace them all again each time you preview, or else the comment might be missing chunks.)
I would imagine there was a spectrum of results that averaged out to the reported tendency. I would also imagine that, all other things being equal, the minority who tend to have less of an innate aversion to inequality have a clearer path to accumulating wealth, unhindered by moral qualms (based partly on my own experience as a waitress and noticing how stingy rich people were with tips). Wealth equals power equals the ability to shape a society to favor inequity, even though that might go against the natural inclinations of the majority of the population.
I recently read somewhere that subjects with lesions in their vmPFC had no trouble making rational choices in the ultimatum game - i.e., they would accept a low offer (better than nothing) and thought others (who had no lesions) were "crazy" for not accepting their low-ball offers. Did I read that here or someplace else?
I seem to recall a fairly recent study widely (and mostly quite inaccurately) written about in the press about how feeling entitled affects our behaviour. Generally it found, iirc, that the more entitled we feel to whatever/everything, the less goodwill we have towards others.
Maybe the reason the 'rich' enjoy the 'poor' receiving money in this experiment is that they share in the experience via mirror neurons.
I'd also be interested how things would play out if the stakes were raised considerably. I'm suspicious that the amounts involved are so low, that it's worth giving the other person a couple bucks just to avoid their antipathy. I suspect that as the money got higher the proposer would start offering the responder a worse deal.
@Ray in Seattle: I *think* in Jonah's book the example given is that those with autism don't have the insight to understand why people won't accept the deal.
Frans de Waal has some interesting thoughts on this dual nature of fairness (equality vs. reward from effort) in his book "The Age of Empathy." Both matter, and I think we should strive for a society with some of each. We need some "liberté" to accomplish great feats, "égalité" to distribute the spoils, all in the name of peaceful "fraternité."
I'm skeptical that the results of the ultimatum game scale up to interesting amounts. The fact is that $5 -- or even $50 -- doesn't make a lot of difference to the future expectations of most Americans. So it's relatively easy to sacrifice that amount for the purpose of punishing someone who is perceived to play unfairly.
Up the game to $100,000, and see if responders so readily walk away from a $5,000 freebie, just because it is so small relative to what the proposer would keep. My suspicion is that the old model of personal gain once again rears its head.
I speculate that those who are arbitrarily endowed with more value the welfare of the arbitrarily less fortunate because of the potential value of future cooperative action. For example, there is evidence for cooperative hunting in the distant past.
I think the difference in the Ultimatum game condition in which a wealthier participant, after taking a test, chooses to share less with the less fortunate is that the latter's perceived lower competence on the test is associated with less fitness as a possible future cooperator.
These are guesses anyway. Perhaps there are other significant factors.
What happens when the proposer and responder have different races, social backgrounds, or genders? I'm assuming the initial experiments were performed using students or people from the same "group". I'm guessing the results would be different when the two subjects are from different backgrounds.
Jonah, interesting study, thanks for bringing that to our attention! I would be uneasy citing this experiment because of the setup, and the seeming lack of any hard evidence of anything other than that the brain's reward circuitry reacts to both "advantageous and disadvantageous inequality". I think the results may be more interesting if they can somehow find a way to control for social conditioning, and the whole "being in a brain scanner" thing - the experiment skews reality so far from the norm that I have a hard time accepting that the results would reflect a realistic median response from the "average human being".

The participant is of course aware that he is participating in a study, and probably has a sympathetic bias toward the 'team' of human study participants undergoing the claustrophobic experience of being inside a scanner for any length of time. Like #13 pointed out, the researchers do not appear to be accounting for socio-cultural/economic factors. These results may have more bearing on the reward response elicited when someone's roommate receives more or less money than him/herself. I would wager that if the participant was completely anonymous and had no personal contact with either a researcher or another subject (perhaps performing the test online), the results would be quite different.
I think the fact that the money is given to the subjects is what makes them more charitable with it. Say the subjects had to perform different duties first and the money they received was compensation, and the rest of the experiment played out in the same fashion, with differing amounts given to or taken from the subjects: I think the effect would be strikingly dissimilar to the current results. In other words, people are more philanthropic in real life if they have an excess of money - either from lottery winnings or because they've earned way more than they can ever spend (think Bill & Melinda Gates Foundation) - versus those who are living paycheck to paycheck. The point is that the money wasn't earned in any way, so the subjects felt more compelled to part with it.
It also means that the researchers didn't identify an unbiased reward center of the brain, unless they assume that given a choice, people would give money to poor people rather than themselves, which is probably rarely the case.
Well, I was right and wrong. Yes, this gives some good evidence that vmPFC and the Striatum are not where decisions actually happen, but O'Doherty's group has already shown that ( http://www.ncbi.nlm.nih.gov/pubmed/19805082 ) and implicated dmFC as where the computations that create choices actually get done.
Did you actually read Marx? Marx had nothing to do with income redistribution or 'fairness' but rather everything to do with owning the means of production.
If anything, it's closer to Adam Smith's Theory of Moral Sentiments.
Fwiw, the Machiguenga link is bad.
I am surprised when people are surprised by such outcomes. Curse you, Randian freaks, for infecting modern popular belief so broadly. We're social animals. I'm not sure what's so astounding about the fact that we have both individual and societal values. People that think that we all exist in our own selfish little universes are quite amazingly blind and self-centered. Some might even say... sociopathic. Heh.
A fascinating post. Over the last few days I have been watching a great series of three lectures by Samuel Bowles from the Santa Fe Institute that addresses this topic in great depth. The lectures' main focus is humans as a cooperative species. I highly recommend downloading the first lecture and hearing the intro to see if you want to watch more. I got the lectures from iTunes U.
Money may have been the wrong instrument for testing the Ultimatum game on the Machiguenga. It does not have the same currency in a rural/remote subsistence community that food, or another currency resource, has. A proper object needs to have the leverage money achieves in a community where money is the top resource, but not be actual money itself. Even in western culture, money proper loses some of its gloss in subsistence-weighted communities. My personal experience with this comes from having lived off the grid in Alaska for several years: very quickly money loses its bargaining power. I would certainly deprive both myself and the payer of a heat or food resource (say 5 cords of split, dry wood or 3 butchered moose) if they did not divide the resource equitably - I know how much risk is taken and energy expended in getting those resources and could confidently require equal payment or impose zero reward for both of us. $20, however, for an annual "town trip" buys about a day of food sustenance or several thrift store wool blankets or maybe half a pair of old army 'bunny boots', but is not a resource to care much about.
I object to the idea that greed (offering $1.00) is "rational" while being fair (offering $5.00) is "irrational"... seems to me it is RATIONAL to be FAIR!!! Even dogs have a sense of fairness...
The proposer-responder scenario is terribly flawed: it ignores the effect which need has on the responder. For example, if a destitute responder was truly hungry - completely without food, and with no prospect of eating - then I suggest the "inequality aversion" which the researchers think they observed would be totally absent. I think the responder would know that a minimal amount, say $1, would buy him bread, and that would be all he cared about. There is no way he would reject the offer as unfair, because he would be desperate to get something to eat. And should this be known to the proposer, then the offer would be lower for the same reason: because he knows the desperate need of the responder.
There is no "inequality aversion", unless and until the situations of the two parties are - and are seen to be - essentially the same. That is a significant exception to the supposed findings, and makes the experiment worthless.
Once again, a completely unrealistic experiment which was not thoroughly thought through. It does not come close to giving the results the researchers say it does. Indeed, it is so obviously flawed that it almost amounts to intellectual dishonesty to claim it is a valid test.
Offering $5 is not a sign that the proposer has an innate sense of fairness; it is actually the equilibrium price of rational greed plus fear. A greedy person would offer $1 without taking into consideration the prospect of being declined, whereas offering $5 almost guarantees acceptance and therefore is a low-risk, moderately high-reward punt.
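To put rough numbers on that greed-plus-fear trade-off (the acceptance probabilities below are entirely made up, just to illustrate the expected-value logic, not taken from any study):

```python
# Hypothetical acceptance probabilities for each offer size; invented
# purely to illustrate the trade-off, not drawn from any data.
acceptance_prob = {1: 0.3, 3: 0.6, 5: 0.95}
stake = 10

for offer, p in acceptance_prob.items():
    expected = p * (stake - offer)  # proposer's expected take-home
    print(f"offer ${offer}: expected payoff ${expected:.2f}")

# With these assumed odds, the near-certain $5 offer (expected $4.75)
# beats the risky $1 offer (expected $2.70): fear does the work.
```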
Several people have (rightly) wondered whether these results would hold up when the stakes are raised. I was surprised to learn that when the UBG task is run in developing countries, with an amount equivalent to a week's income at stake, the results are nearly identical. See Guth et al, 1982.
I don't quite see why offering $5 is "fair" in any sense. Being given $10 doesn't cause me to owe anything to my fellow subject. Because I have read the studies I would know to offer $5 or close to that even in an un-iterated game, but I don't think I would have thought of that myself. Has anyone considered the effect of the fact that many college students playing the Dictator Game have probably heard of similar studies in class and therefore come in knowing the right answer, introducing to the experiment what amounts to a confirmation bias?