Does fighting against a common enemy promote trust?

The Prisoner's Dilemma is a classic scenario from game theory that has been used for years by psychologists, economists, and philosophers to explore human behavior. The basic setup is this: two criminals have been captured and placed in separate cells. Neither prisoner is allowed to talk to the other, and the interrogators don't have enough evidence to prosecute either one. If prisoner A confesses and prisoner B doesn't, then prisoner A is released and prisoner B is punished. If both confess, both get a lighter sentence. If neither confesses, both are released. For each prisoner, confessing guarantees at worst a lighter sentence and opens the possibility of release, but it also condemns the other prisoner to punishment if that prisoner stays silent. If both prisoners trust each other, neither should confess, so ultimately the scenario is a measure of trust.

[Image: prisoner.gif]
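To make the payoff structure concrete, here is a minimal sketch in Python of the dilemma as described above. The numeric outcome values are illustrative assumptions of my own (higher is better for that prisoner); only their ordering matters.

```python
# A minimal sketch of the Prisoner's Dilemma as described above.
# The numeric values are illustrative assumptions, not from the study:
# higher means a better outcome for that prisoner, and only the ordering matters.
RELEASED, LIGHTER_SENTENCE, PUNISHED = 2, 1, 0

def outcome(a_confesses: bool, b_confesses: bool) -> tuple[int, int]:
    """Return (prisoner A's outcome, prisoner B's outcome)."""
    if a_confesses and b_confesses:
        return LIGHTER_SENTENCE, LIGHTER_SENTENCE   # both confess
    if a_confesses and not b_confesses:
        return RELEASED, PUNISHED                   # A betrays a silent B
    if not a_confesses and b_confesses:
        return PUNISHED, RELEASED                   # B betrays a silent A
    return RELEASED, RELEASED                       # neither confesses

# Whatever B does, confessing never leaves A worse off.
assert outcome(True, True)[0] >= outcome(False, True)[0]
assert outcome(True, False)[0] >= outcome(False, False)[0]
```

As the asserts at the bottom show, confessing never leaves you worse off than staying silent, so staying silent only makes sense if you trust your partner to stay silent too.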

Fighting together in a military unit has often been held up as an example of the ultimate in trust: the friendships formed in the course of military service can last a lifetime. But does cooperation in a violent context necessarily lead to more trust than in a nonviolent context? Brad Sheese and William Graziano designed an experiment to test this question using a modified version of the video game Doom and a modified version of the Prisoner's Dilemma. Forty-eight college students were recruited to play violent or nonviolent versions of the game in teams of two. Each team was told that the highest-scoring individual would be rewarded with a $100 prize. They played the game for 25 minutes, with the goal of completing as many mazes as possible. In the violent version, players also had to fight off monsters and other hostile attackers. Because they played as a team, both players ended the session with the same score.

After the game session, the two participants were escorted to separate rooms and offered a modified version of the Prisoner's Dilemma. Players were given three choices: cooperate, defect, or withdraw. Absent the "withdraw" option, it was the standard Prisoner's Dilemma. If both players decided to cooperate, then their scores would be multiplied by 1.5. If both players decided to defect, then their scores would be cut in half. But if one player chose to cooperate and the other player chose to defect, then the defector's score would be doubled, and the cooperator's score would be cut in half. The key difference is the withdraw option -- if either player withdrew, both scores would remain the same.

[Image: prisoner2.gif -- payoff chart for the modified Prisoner's Dilemma]
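To make the chart's payoffs concrete, here is a minimal Python sketch of the modified game based on the multipliers described above; the function and choice names are mine, not the researchers', so treat it as an illustration of the rules rather than the study's materials.

```python
# A sketch of the modified Prisoner's Dilemma used in the study,
# based on the score multipliers described above. Names are my own.

def payoff(my_choice: str, partner_choice: str) -> tuple[float, float]:
    """Return (my score multiplier, my partner's score multiplier)."""
    if "withdraw" in (my_choice, partner_choice):
        return 1.0, 1.0   # either player withdraws: both scores unchanged
    if my_choice == "cooperate" and partner_choice == "cooperate":
        return 1.5, 1.5   # mutual cooperation: both scores multiplied by 1.5
    if my_choice == "defect" and partner_choice == "defect":
        return 0.5, 0.5   # mutual defection: both scores cut in half
    if my_choice == "defect":
        return 2.0, 0.5   # I defect against a cooperator: mine doubles, theirs is halved
    return 0.5, 2.0       # I cooperate against a defector: mine is halved, theirs doubles

# Against a cooperator, defecting (x2) beats cooperating (x1.5), which beats withdrawing (x1);
# against a defector, withdrawing is the only way to keep your score intact.
assert payoff("defect", "cooperate")[0] > payoff("cooperate", "cooperate")[0] > payoff("withdraw", "cooperate")[0]
assert payoff("withdraw", "defect")[0] > payoff("cooperate", "defect")[0] == payoff("defect", "defect")[0]
```

This is why choosing either cooperate or defect signals that you expect your partner to cooperate -- if you expected defection, withdrawing would be the only way to avoid losing half your points.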

Players were first asked to predict how their partners would choose. These results were the same, regardless of condition: 21 out of the 24 players in each condition (violent or nonviolent) predicted their partners would choose to either cooperate or defect. As you can see from the chart, cooperating or defecting is only beneficial if you predict your partner will cooperate -- thus, these players believed their partners would trust them.

Next, players were asked to make their own choice. One player in the violent condition and four players in the nonviolent condition chose to withdraw, indicating they didn't trust their partner (this difference was not significant). However, of the remaining players, only one player in the nonviolent condition chose to defect, while seven players in the violent condition defected. These players trusted their partners to cooperate, but still chose to defect in order to gain an extra advantage (and deprive their partners of half their points). This difference was significant.

Sheese and Graziano argue that playing violent games therefore encourages antisocial behavior: players in the violent condition were more likely to betray their partners for personal gain. However, they do acknowledge some limitations of the study. The nonsignificant difference in withdrawal rates trends towards nonviolent gamers displaying less trust in their partners than violent gamers; with a larger sample, this difference might turn out to be significant (of course, it might not). Also, the vast majority of participants in the study were white males, so the effects on other population groups may be different.

I'd add some additional caveats: we've reported before on how different types of violent games affect aggressive behavior differently -- the same may be true for social behavior. Though this modified version of Doom did increase some antisocial behavior, other violent games may not. Video games are complex phenomena, so one limited study on one game certainly should not be used to justify such dramatic action as government censorship. On the other hand, parents are advised to take a close look at the games their kids are playing to see what types of behaviors they reward. Games that encourage haphazard violence might not be the best choice, especially for young kids.

Sheese, B.E., & Graziano, W.G. (2005). Deciding to defect: The effects of video-game violence on cooperative behavior. Psychological Science, 16, 354-357.


However, they may be a great choice for those of us whose job is to develop games to train Soldiers how to be effective in high-stress situations (e.g., when people are shooting at you). I know this isn't a great academic source, but here's a relevant article from the Washington Post:

http://www.washingtonpost.com/wp-dyn/content/article/2006/02/13/AR20060…

Sometimes, being desensitized to violence through playing video games comes in handy!

While it doesn't directly relate to the question of violent vs. non-violent games, it seems as though the following conclusion is a little strong:

"As you can see from the chart, cooperating or defecting is only beneficial if you predict your partner will cooperate -- thus, these players believed their partners would trust them."

First of all, it assumes that the people in the study have carefully worked out all of the possibilities, which is not necessarily the case. More importantly, it assumes that the prediction is based on the player's belief that their partner would trust them. An equally valid conclusion is that they believe their partners will try to improve their scores, which can only be done by choosing defect or cooperate.

This test also has a different "strategy" aspect than the prisoner's dilemma. While one can have either large or small changes in score, the ultimate object is to have the highest score. So there is no difference between keeping your original score and having your score cut in half if you need to improve your score to win, while there is a difference between being freed and getting a reduced sentence. It makes it a little less straightforward to interpret the results, since it may also depend on things like how good of a score you had to begin with (much like "final jeopardy").

Just an interesting (to me) aside that, as I said before, has little to do with violent vs. non-violent games.

drifted in here from the carnival of science...

i'm not a scientist, but i am an online gamer gurl who has loved violent games for nearly 15 years (going back to text MUDs), and i think this is a bunch of bullshit. the experiment, as devised, replicates nothing of the actual gaming experience.

in my experience, in real life gamers rarely team with strangers and 'pick-up' games are generally considered a big waste of everyone's time. actual gamers almost always team with members of their clan or guild (whether online or at LAN parties), with whom they have strong ongoing social bonds. screwing over your clanmates by defecting (quitting a battle when it is going poorly) or exhibiting excess greed (such as taking loot without clan agreement that it was earned) would be grounds for strong censure, or in extreme cases permanent banishment from the clan. and even if one were allowed to stay after such behavior, other members of the clan would certainly remember (and would never let the perp forget) that they had engaged in such antisocial and unsportsmanlike behavior.

in the real virtual world clan loyalty and clan cooperation, leading to clan 'wins,' will always trump personal desire and greed. and gamers tend, generally speaking, to cooperate even on random pick-up teams because they are well aware that their reputation will become known throughout the community, and that it will determine whether they are invited to join an elite successful clan, or asked to team for any big raids if they are already a member, in the future.

because this study removes the very real community dynamic of how teams function in actual games i think this study is teh lame. worthless academic masturbation.

r0xxOring y3r b0xx0rs since 1991,

anon gamer chick

By anonymouse gam… (not verified) on 15 Mar 2006 #permalink

I have to agree with gamer chick: the socioeconomic model of most online games is much richer and more complicated than what you've tested here. Long-term reputation incentives tend to outweigh the short-term profits one can accumulate by betraying trust. I'm a scientist and relatively new to the online world, but it's very clear that establishing trust within a group is a crucial part of gaming strategy and hence strongly modifies behavior. I think the model you've set up here does match gamers' early experiences with games, but you'd need a different test to look for the same effect over time.

Rather than set up a different environment, you might want to run the same test with volunteers from an online game. It would be interesting to see if the results differed.

By BluShield (not verified) on 29 Mar 2006 #permalink

Fighting together in a military unit has often been held up as an example of the ultimate in trust: the friendships formed in the course of military service can last a lifetime. But does cooperation in a violent context necessarily lead to more trust than in a nonviolent context?

I think you may be confusing cause and effect. The trust built up in military units is not created by cooperation in a violent context, it is inculcated in order to ensure cooperation in a violent context.

John, just to be clear: I'm not the one asserting military service causes trust. Your argument, that soldiers are trained to trust each other before they are placed in a combat situation, isn't really addressed in this study, but you're right in that this study does offer some evidence that fighting together doesn't in itself promote trust.