Why punishment is worth it in the end

Is punishment a destructive force that breaks societies or part of the very glue that holds them together? Last year, I blogged about two studies that tried to answer this question using similar psychological games. In both, volunteers played with tokens that were eventually exchanged for money. They had the option to either cooperate with each other so that the group as a whole reaped the greatest benefits, or cheat and freeload off the efforts of their peers.

In both studies, giving the players the option to punish each other soon largely put an end to cheating. Faced with the threat of retaliation, most players behaved themselves and levels of cooperation stayed stable. But this collaboration came at a heavy cost - in both cases, players ended up poorer for it. Indeed, one of the papers was titled "Winners don't punish", and its authors concluded, "Winners do not use costly punishment, whereas losers punish and perish."

But in both these cases, the experiments lasted no more than 10 'rounds' in total, and to Simon Gaechter, that was too short. He reasoned that more protracted games would more accurately reveal the legacy of punishment, and more closely reflect the pressures that social species might experience over evolutionary time spans. With a longer version of the games used in previous studies, he ably demonstrated that in the long run, if punishment is an option, both groups and individuals end up better off. 

Together with colleagues from the University of Nottingham, Gaechter recruited 207 people and watched as they played a "public goods game" in groups of three. All of them were told that their group would remain the same for the entire game, which could last for either ten rounds or fifty.

To begin, each player received 20 tokens, which they could keep or invest as they liked. Every token they kept retained its full worth, while every token that was invested halved in value but paid that halved value to every player in the group. So, as is normal in these games, players earn more for themselves by cheating but more for the group by cooperating. In some games, they also had the option to punish each other, sacrificing one of their own tokens to rob someone else of three of theirs.
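For readers who like to see the arithmetic, here is a minimal sketch of the payoff rules as described above (the function and constant names are mine, not from the paper, and the real experiment's software is not public; the 1-token-cost, 3-token-fine punishment ratio is from the study's description):

```python
GROUP_SIZE = 3
ENDOWMENT = 20
RETURN_RATE = 0.5   # an invested token halves in value, but pays that value to every player
PUNISH_COST = 1     # the punisher sacrifices one token...
PUNISH_FINE = 3     # ...to remove three tokens from the target

def payoffs(contributions, punishments=None):
    """contributions[i]: tokens player i invests (0..ENDOWMENT).
    punishments[i][j]: how many times player i punishes player j."""
    pot = sum(contributions)
    base = [ENDOWMENT - c + RETURN_RATE * pot for c in contributions]
    if punishments:
        for i, row in enumerate(punishments):
            for j, times in enumerate(row):
                base[i] -= PUNISH_COST * times   # punishing is costly to the punisher
                base[j] -= PUNISH_FINE * times   # and costlier to the punished
    return base

# Full cooperation: everyone invests 20, each earns 0 + 0.5 * 60 = 30
print(payoffs([20, 20, 20]))   # [30.0, 30.0, 30.0]
# A freeloader who invests nothing comes out ahead of the cooperators...
print(payoffs([20, 20, 0]))    # [20.0, 20.0, 40.0]
# ...unless the other two each punish the freeloader three times
print(payoffs([20, 20, 0], [[0, 0, 3], [0, 0, 3], [0, 0, 0]]))   # [17.0, 17.0, 22.0]
```

The numbers make the dilemma plain: each invested token costs the investor half a token but generates one and a half tokens for the group, so cheating always pays individually in a single round - until punishment reverses the freeloader's advantage.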

As in other studies, Gaechter found that people were more likely to cooperate with one another in games when they had the option to punish cheats. In the ten-round game, players who could punish contributed about 3.6 more tokens than those who couldn't; in the fifty-round game, the effect was even greater and punishment increased the average contributions by 9.6 tokens.

After ten rounds, players who played a punishing game earned an average of 4.7 fewer tokens per round than their peers. Again, that matches the results of previous studies. But Gaechter found that things flipped around after fifty rounds - by that point, the punishers were earning about 3 more tokens per round.


Even at their earliest stages, the fifty-round games were different. Armed with the knowledge that they were in it for the long haul, the players changed tactics. In the first ten rounds, they gave more to the central pot and doled out smaller punishments than those who knew that their games would go no further. It paid off too - the long-haul players were raking in higher dividends within just the initial rounds.

Things weren't always so utopian though - the final, fiftieth round saw a massive drop in average earnings. By this point, people were just trying their luck or punishing with impunity. Gaechter argues that the experiment's end doesn't reflect real interactions very well, but I think it's interesting to see what can happen to seemingly stable alliances when a deadline is placed on them.

Nonetheless, Gaechter's study suggests that the ability to punish freeloaders binds a group of people together, and once this happens, the increasing gains from teamwork start to outweigh the diminishing costs of punishment. In the long run, both groups and individuals end up better off. So perhaps winners can punish after all.

Reference: S. Gächter, E. Renner, M. Sefton (2008). The long-run benefits of punishment. Science, 322(5907), 1510. DOI: 10.1126/science.1164744


Hi Ed,

As one of the authors of "Winners Don't Punish", I just wanted to point out that in our study, the games were much longer than 10 rounds. On average subjects played about 80 Prisoners' Dilemma rounds, and still we saw no significant improvement in payoffs in the presence of punishment. Also, even if the trend reversed in very long games, punishers (who pay the cost to punish) will always do worse than cooperators who do not punish. Hence, an evolutionary argument would have to be based on group payoffs, and therefore group-level selection.

Thanks for your great science blogging!!

David Rand
Program for Evolutionary Dynamics, Harvard University
http://fas.harvard.edu/~drand

Since meting out punishment costs the "punisher" while the improved cooperation that results benefits everyone, it would seem that being a punisher is a form of altruism, as odd as that sounds.

The weird case, though, is anti-social punishment (where participants are punished for being "good"). That sort of punishment costs the punisher, and doesn't benefit the group. So what is the motivation for that?

Off-topic, the comments editor spell-checker doesn't think that "punisher" is a word. Obviously, it's not a Marvel comics fan.

I agree with what Daryl said about the punishers showing altruism.
Unfortunately, in this particular game there is no benefit from a selfish individual's point of view to punishing, unless there is pressure on every player to invest an equal stake in punishment. Perhaps those who never punish should become targets for punishment as cheaters too, since they benefit from the punishers' altruism but don't sacrifice for the group themselves?

Were there any games of Iterated Prisoner's Dilemma mentioned where the participants didn't know how long the game would last?
I'd think that'd make a good comparison to society, since our 'game' is continuous and people can't try their luck just because it's the last turn.

But here's a question: When I punish someone, I'm primarily deterring them from hurting *me* again (think: Mafia). So I'm not so sure how altruistic that is.

So the next question is: When did the punishers choose to punish? When the group was cheated, or the individual?

I've always thought the evolution of 'moral'/'ethical' behavior is probably more complex than the game-theory models show. Those mostly only work for things directly related to evolution (i.e. punishing murderers and cooperating to find food) to make the selection pressure big enough to matter. I have to wonder, since humanity pretty much creates its own social environment, if evolution is even a factor in it anymore. It might be mostly nurture, not nature, now.

By William Miller (not verified) on 07 Dec 2008 #permalink

Game-theoretical work on the evolution of altruism is ingenious and interesting, but I doubt its relevance to human evolution. A standard assumption of these experiments is that the participants are not genetically related, but the first thing one learns from anthropology is that in 'primitive' communities nearly everyone is related to each other by blood, by marriage, or both. Studying human evolution and ignoring kinship is like studying insect societies (ants, bees, wasps, termites, etc) and ignoring the fact that most of their members are at least half-siblings.

Also, a standard assumption in most experiments is that the participants cannot directly communicate their intentions, e.g. to cooperate. But one of the most important and distinctive features of the human species is that we have verbal communication. So to study human cooperation without communication is kinda like studying soccer on the assumption that the players cannot use their feet.

Well, another thing that always bothered me is that game theory is a *mathematical* thing. If the rewards of altruistic behavior minus the costs are quantitatively greater than the rewards minus costs of selfish behavior, then a game theory model will show altruism. But how can you quantify the rewards in real life? It seems like this is a crucial step to test the hypothesis as it applies to real organisms.

Also, has anyone ever come up with *any* halfway decent explanation for the evolution of modesty? That always baffled me completely.

By William Miller (not verified) on 08 Dec 2008 #permalink