Winners don't punish: "Punishing slackers Part 2"

Two weeks ago, I wrote about a Science paper that looked at the effects of punishment in different societies across the world. Through a series of fascinating psychological experiments, the paper showed that the ability to punish freeloaders stabilises cooperative behaviour, bringing out the selfless side in people by making things more difficult for cheaters. It also showed that 'antisocial punishment', where the punished seek revenge on the punishers, derails the high levels of cooperation that other, fairer forms of punishment help to entrench.

Now a new study published in that other minor journal Nature adds another twist to the story. In it, Anna Dreber, Martin Nowak and colleagues from Harvard University confirm that groups of people are indeed more likely to cooperate if they can dole out punishment, but show that those who actually punish reap smaller rewards. In Dreber's experiments, the players who left with the highest payoffs were those that shunned punishment completely. It's a conclusion best summed up by the stark and simple title of their paper: "Winners don't punish." 

Dreber revealed the dark side of punishment by modifying one of the classic experiments of game theory - the Prisoner's Dilemma. Inspired by the plight of separately interrogated prisoners, the game pits two players against each other, each choosing whether to cooperate or defect. For each 'prisoner', the best choice, no matter what his partner does, is to defect; but if both defect, their outcomes are far poorer than if both had cooperated - hence the dilemma.
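
For readers who like to see the numbers, here is a minimal sketch in Python - my own illustration, not anything from the paper - of why defection dominates, using the token values from Dreber's first game (described below):

```python
# Payoffs to (me, my partner) for each pair of choices, using the token
# values from Dreber's first game (T1, described below): cooperating costs
# me 1 token and gives my partner 2; defecting takes 1 token from my partner.
PAYOFFS = {
    ("C", "C"): (+1, +1),   # each pays 1, receives 2
    ("C", "D"): (-2, +3),   # I pay 1 and lose 1; my partner gains 2 + 1
    ("D", "C"): (+3, -2),
    ("D", "D"): ( 0,  0),   # each takes 1 and loses 1
}

# Whatever my partner does, defecting earns me more than cooperating...
for partner in ("C", "D"):
    assert PAYOFFS[("D", partner)][0] > PAYOFFS[("C", partner)][0]

# ...yet mutual defection pays each of us less than mutual cooperation.
assert PAYOFFS[("D", "D")][0] < PAYOFFS[("C", "C")][0]
```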

The artificial scenario represents many real-world choices where cooperation is good for a group but cheating is best for the individual. When defection presents such stark benefits, evolutionary theory predicts that it should be commonplace unless some force can maintain cooperation. Recently, costly punishment - where an individual suffers slightly in order to punish a cheater - has been mooted as such a force, because people avoid cheating for fear of reprisals. 

A further dilemma

To test this idea, Dreber extended the Prisoner's Dilemma so that on every turn, players could punish as well as cooperate or defect. She recruited 104 local college students to play anonymously against each other. They were given a set of virtual tokens, informed about their options in neutral language and asked to make their moves simultaneously. Once both had played, the results were tallied and the choices revealed. The games lasted for varying numbers of rounds, but players knew that every round had a one-in-four chance of being the last. At the end of each game, each player was paid a dime for every token they had left.
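
To make the stopping rule concrete, here's a toy simulation of game length - my own sketch, not the authors' code. With a one-in-four chance of stopping after each round, games last four rounds on average:

```python
import random

def game_length(p_stop=0.25, rng=random.Random(1)):
    """Simulate one game: after each round, a 1-in-4 chance it was the last."""
    rounds = 1
    while rng.random() > p_stop:
        rounds += 1
    return rounds

# Expected length is 1 / 0.25 = 4 rounds.
print(sum(game_length() for _ in range(100_000)) / 100_000)  # ~4.0
```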

In one game (T1), cooperation meant paying 1 token for the other player to receive 2, defection meant taking 1 token from the other player and adding it to your own pot, and punishment meant paying 1 token to fine the other player 4. The second game (T2) was exactly the same, except that cooperation was more valuable, with beneficiaries receiving 3 tokens instead of 2. The table on the right shows what happens for different combinations of choices. Alongside T1 and T2, Dreber also ran two control games (C1 and C2) where players could only cooperate or defect; punishment wasn't an option.
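
Translating those rules into a per-round payoff gives something like the following - a sketch of my own based on the description above (the function name and move labels are mine, not the paper's):

```python
def payoff(my_move, their_move, benefit=2):
    """My net tokens for one round; benefit=2 in game T1, 3 in T2.
    Moves: 'C' = cooperate, 'D' = defect, 'P' = punish."""
    me = 0
    # What my own move costs or earns me:
    if my_move == "C":
        me -= 1          # cooperating costs 1 token
    elif my_move == "D":
        me += 1          # defecting takes 1 token from my partner
    elif my_move == "P":
        me -= 1          # punishing costs 1 token
    # What my partner's move does to me:
    if their_move == "C":
        me += benefit    # I receive 2 tokens (3 in T2)
    elif their_move == "D":
        me -= 1          # my partner takes 1 token from me
    elif their_move == "P":
        me -= 4          # I'm fined 4 tokens
    return me

# Mutual cooperation nets each player +1 in T1 (+2 in T2), while
# punishing a defector costs the punisher 2: payoff("P", "D") == -2.
```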

As the experiments unfolded, a number of different strategies became apparent. Some games were all-out cooperation (a). In others, one player defected but cooperation was restored when the other player either turned the other cheek (d) or defected in retaliation (c). In the face of forgiveness or the threat of mutual loss, the original defector decided to play fair again.

When punishment was played, it usually didn't restore cooperation. In some cases, the rebuked player simply carried on defecting only to be punished even further (b). When a punished player retaliated in kind (the 'antisocial punishment' studied in the earlier post), the game ended in mutually assured destruction (e). Finally, the ability to punish allowed irrational individuals to inflict harm on the undeserving with unprovoked pre-emptive strikes that had disastrous results for cooperation (g).

Payoffs

Even though actually taking punitive measures proved to be anathema to teamwork, the option to punish did increase the overall levels of cooperation. In the two games that allowed punishment, T1 and T2, players chose to cooperate in 52% and 60% of their moves respectively, but they did so in only 21% and 43% of moves in the punishment-free control games, C1 and C2.

That seems like a good case for punishment, but not so. Dreber found no difference between the average takings in the two setups that included punishment (T1 and T2) and the two that didn't (C1 and C2). As far as the groups were concerned, the ability to punish brought no benefits. At an individual level, things were even worse, for the players who came away with the least money were also those who meted out punishment most frequently. In the T1 game, for example, the five players who ended up richest were those who never punished their opponents (see graphs below; g = T1; h = T2).

[Graphs: players' final payoffs ranked against their use of punishment, for T1 (g) and T2 (h)]

You might suspect that these winners were just lucky and only faced opponents who always cooperated and never deserved punishment. But that wasn't the case - their opponents occasionally defected too, and the one strategic choice that set the winners apart from the losers was how they dealt with a defection. Simply put, losers chose to punish while winners opted for a 'tit-for-tat' strategy and defected themselves. Winners, it seems, really don't punish.
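
To get a feel for why, here's a toy simulation of the two responses to defection - again a sketch of my own, not the paper's protocol. The strategy names are mine, and the opponent is a simple random process that defects 20% of the time and doesn't react to being punished, which is a deliberate simplification:

```python
import random

# T1 payoffs, restated compactly: MINE is what my own move costs or earns
# me, THEIRS is what my partner's move does to me.
MINE   = {"C": -1, "D": +1, "P": -1}
THEIRS = {"C": +2, "D": -1, "P": -4}

def respond(strategy, their_last):
    """Cooperate unless the partner just defected; then retaliate."""
    if their_last == "D":
        return "D" if strategy == "tit_for_tat" else "P"
    return "C"

def play(strategy, rounds=1000, p_defect=0.2, seed=42):
    rng = random.Random(seed)
    total, their_last = 0, "C"
    for _ in range(rounds):
        my_move = respond(strategy, their_last)
        their_move = "D" if rng.random() < p_defect else "C"
        total += MINE[my_move] + THEIRS[their_move]
        their_last = their_move
    return total

# Facing identical streams of occasional defection, the punisher pays 1
# token to retaliate where tit-for-tat recoups 1, so the punisher earns less:
print(play("tit_for_tat"), play("punisher"))
```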

The origins of punishment

The Science paper by Herrmann et al. actually found similar patterns. In it, players from 16 cities showed more cooperative behaviour when they were allowed to punish their fellow players than when they weren't. But the supplemental data reveal that 13 of these groups actually ended up with lower average earnings in the games that involved punishment than in those that did not. Only 3 groups netted higher payoffs, and Dreber suspects that the differences were not statistically significant. She said, "I believe that our results agree with those of Herrmann et al.: punishment leads to more cooperation, but not higher payoffs."

The results are a blow to the notion that costly punishment was critical for the evolution of human cooperation, for people who resort to punishment suffer for it. Instead, the authors suggest that costly punishment may have evolved for other reasons, such as establishing pecking orders or allowing stronger members of a group to dominate weaker ones through coercion.

This is the second time I've written about a paper from Martin Nowak's research group, and both papers have one thing in common: remarkably clear and witty language. Though the new study is published in a journal that can often be incomprehensible even to hardcore scientists, it is replete with economical turns of phrase and lay language - anyone could pick it up and get the point.

Such a feat is both rare and laudable. To be fair, the subject matter lends itself somewhat to pithy writing, but I have seen other papers in similar fields that still managed to be far less intelligible than this one. Dreber, Nowak and their co-authors should be commended for their efforts. Their own conclusion to the paper sums up the research better than I ever could:

"People engage in conflicts and know that conflicts can carry costs. Costly punishment serves to escalate conflicts, not to moderate them. [It] might force people to submit, but not to cooperate... Winners do not use costly punishment, whereas losers punish and perish."

Reference: Dreber, A., Rand, D.G., Fudenberg, D., & Nowak, M.A. (2008). Winners don't punish. Nature, 452(7185), 348-351. doi:10.1038/nature06723

Comments

I've seen, a couple of times, the TV program about the guy who trains grizzly bears. He does it using only positive rewards. I made the inductive leap that if one can do that with grizzly bears, one can do it with humans. What administration I have done, I did with that in mind. It worked out well enough, even with folks known as difficult individuals.

By Jim Thomerson on 19 Mar 2008

Analisa - with four kids, you could run your own public goods game!

Disclaimer - Not Exactly Rocket Science does not advocate using your sprogs for game theory experiments and does not accept responsibility for any psychological damage that may ensue. :-p

I think the experiment was flawed as a test of punishment, since punishment itself requires group cooperation. An individual who has been punished by the group may attempt to retaliate in some way, but has far less power to do so than the group has to impose further punishment.

If the punishment were instead related to the number of players simultaneously inflicting it on the 'transgressor' - in a prisoner's dilemma scenario played by groups of individuals - I think this would demonstrate its contribution to maintaining group cooperation.

For example, in a group of four players, you might pay 1 token to punish another player to the tune of 2, but if three players each decide to punish an individual on one turn, that individual will lose 6 tokens.

John - that's a good point, but it's worth noting that the Herrmann et al. study which I wrote about previously found the same pattern (punishment stabilises cooperation but reduces payoffs), and that study used a public goods game with four players.

Hi all,

I am one of the authors on this paper, and I wanted to thank Ed for such an excellent blog! This writeup is by far the most on point and interesting of the ones I've seen. Just wanted to say that we appreciate it!

Also, a response to John's comment above: We examined cooperation in the context of symmetric cooperative interactions between equals. When you have a group of people punishing an individual (for defecting, or any other reason), it's no longer a cooperation game in the strict sense. That moves into the world of dominance - the group forcing the individual to cooperate.

Cheers,

Dave

Dave and others: are there real-world examples of non-punishment in action, aside from grizzly bears? I would think/hope there are some cultures, however small, that recognize the benefit from eliminating or greatly curbing the use of punishment. It would be interesting to see how said cultures deal with their most difficult members.

Thanks for this study!

By wenchacha on 21 Mar 2008

Interesting set of experiments, but sadly they do not address the evolution of third-party punishment, because all the experiments are dyadic games. In a dyadic situation, all the benefits and costs of punishment are private, whereas in an n-person (public goods) setting, the costs are private (or can be) while the benefits are shared.

By R McElreath on 03 Apr 2008

Dave,

John and R McElreath's comments are right on. Whether your paper is very useful for thinking about the evolution of human behavior depends on whether you think humans evolved in groups of more than two people. If you think humans evolved in large groups, then this study has less relevance to that question.

Even the paper argues that "For millions of years of human evolution, our ancestors have lived in relatively small groups in which people knew each other."

I'm also confused about the terminology used in your comment (and in the paper) about "dominance" and "submission" versus cooperation. How is a group using punishment to encourage an individual to cooperate a dominance/submission game, while an individual using punishment to encourage another individual to cooperate is a "cooperation" game? Is this just a game of semantics, or am I missing something deeper?