The Neuroscience of Playing Chicken

Theory of mind, or how we think about what's going on in other people's heads, continues to be one of the hottest topics in cognitive science today. A debate continues to rage over whether we reason about other people's thoughts by means of theory-like propositional knowledge, or through simulation (i.e., putting yourself in their shoes... in your head). Since psychologists are unlikely to settle this debate by themselves, they've called in the artillery - cognitive neuroscientists. And those buggers have come up with some interesting ways to figure out where mentalizing (another name for theory of mind; there are about 50 names for it now) takes place in the brain. A common experimental design involves having people perform some task in which they believe they're interacting either with another human or with a computer. You then compare the brain activity during human-human interactions with that during human-computer interactions, the assumption being that when interacting with another human, people will recruit whatever brain areas are involved in mentalizing in order to figure out what that person is thinking, but they won't feel the need to figure out what the computer is thinking. In a paper in press at NeuroImage, Fukui and half a dozen other people (seriously, does it really take that many people to stick someone in an fMRI machine? Why do neuroscience papers always have so many authors?) report one of the more creative versions of this task: a game of chicken.

If you don't know what chicken is (are you from this planet?), it involves two people in cars driving at each other at high speed. The first person to get the hell out of the way is the chicken. Unfortunately, it's impossible to fit a car, much less two, into an fMRI machine, so Fukui et al. came up with a "game theoretical" version. They chose chicken over the better-known game-theoretic game, the Prisoner's Dilemma, because in Prisoner's Dilemma tasks, people don't always behave the way game theory says they should, which leads to empirical and theoretical problems. Chicken is different, largely in that there's nothing to be gained by trying to match the other participant's choice. In chicken, you want to make the opposite choice from your opponent. I'll let them describe their version of chicken (p. 3; figure from p. 2):

In this game, 2 players choose whether or not to aggress against each other; each is rewarded with a sum of money that depends upon the interaction of both players' choices. There are 4 possible outcomes: player A (subject) and player B reconcile (RR), player A reconciles and player B aggresses (RA), player A aggresses and player B reconciles (AR), or both player A and player B aggress (AA). The payoffs for the outcomes are arranged such that AR > RR > RA > AA and AR + AA = RA + RR = 0. Each cell of the payoff matrix [See Payoff Matrix Below - Chris] corresponds to a different outcome of the social interaction. In contrast to the Prisoner's Dilemma, the best strategy under this model is to make the opposite move. The task goal is to maximize the final amount of play money.

[Figure: the payoff matrix from Fukui et al., p. 2]

In case that's not clear, the biggest payoff is for driving straight while your opponent swerves, and the biggest punishment is for driving straight when your opponent drives straight (crash!). In between those, turning off when your opponent turns off gives a small reward, and turning off when your opponent drives straight gives a small punishment. The large reward and the large punishment were of equal size, as were the small reward and small punishment. So, if you chose randomly across a large number of trials, your average reward would be 0.
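To make the arithmetic concrete, here's a minimal sketch in Python with hypothetical payoff values (the paper's actual amounts aren't given here): +/-100 for the big reward and punishment, +/-10 for the small ones, which satisfy AR > RR > RA > AA and AR + AA = RA + RR = 0.

```python
# Hypothetical payoffs for player A (the subject). Keys are
# (A's move, B's move); "A" = aggress, "R" = reconcile.
PAYOFFS = {
    ("A", "R"): 100,   # you drive straight, opponent swerves: big reward
    ("R", "R"): 10,    # both swerve: small reward
    ("R", "A"): -10,   # you swerve, opponent drives straight: small punishment
    ("A", "A"): -100,  # both drive straight (crash!): big punishment
}

def expected_payoff(p_you_aggress: float, p_opp_aggress: float) -> float:
    """Expected payoff per game, given each side's independent
    probability of aggressing."""
    total = 0.0
    for (you, opp), payoff in PAYOFFS.items():
        p_you = p_you_aggress if you == "A" else 1 - p_you_aggress
        p_opp = p_opp_aggress if opp == "A" else 1 - p_opp_aggress
        total += p_you * p_opp * payoff
    return total

# Both players choosing at random: expected reward per game is 0.
print(expected_payoff(0.5, 0.5))  # 0.0
```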

Participants, 16 males, each completed 96 games of chicken. For half of the games, they were told they were playing against the experimenter (they saw a picture of a person), and for the other half, a computer (a picture of a computer). They were also told that the computer would choose to aggress (go straight) 50% of the time. In reality, both the human and computer trials used the same algorithm to choose the opponent's behavior, so the participants were really playing against a computer all along. While they played their games of chicken, they were strapped into an fMRI machine, and people were taking pictures of their brains.

The average number of games in which participants aggressed was about the same for the computer (22 out of 48) and human trials (23.9 out of 48). Regardless of whether they believed they were playing against a human or a computer, they aggressed about half of the time. If I were a skeptic, I'd say it looks a lot like they were choosing randomly, but hey, I'm not a reviewer at NeuroImage, so nobody cares what I think. Anyway, the behavioral data isn't the interesting stuff. The purpose of the study was to contrast the brain activity observed during human-human trials with that during human-computer trials, and in that data there were statistically significant differences. Specifically, increased activation was observed in the supramarginal gyrus, near the rear (posterior) end of the superior temporal sulcus, during all human-human trials, regardless of whether participants reconciled or aggressed. Increased activation was also observed in the anterior paracingulate cortex.

[Image: the supramarginal gyrus]

Both of these areas have been implicated in mentalizing in the past, but the supramarginal gyrus is more interesting (to me, at least). Fukui et al. interpret the activation of the anterior paracingulate cortex as reflecting risk assessment. They even argue that this area is involved in risk assessment regardless of whether mentalizing is involved. How this fits with their data showing selective activation in this region during human-human trials, I don't know, but I don't really care either. Sure, I'd argue that this area serves the more cognitive, perhaps theory-like parts of mentalizing, which are more important when making risky social decisions, but that's just me. What makes the activation in the supramarginal gyrus (pictured above, from here) interesting is that it fits nicely with the simulation theory of theory of mind. That area, and much of the region surrounding the superior temporal sulcus (the red line surrounded by the peach and yellow in the picture below, from here), are involved in, among other things, movement, orientation, and motion planning. This is exactly what you would expect if we were simulating the physical behavior of another person. Furthermore, in both monkeys and humans, the famed mirror neurons have been found in areas around the superior temporal sulcus. If you're not familiar with mirror neurons, they're brain cells that fire both when we perform an action and when we see conspecifics performing that same action. Simulation theorists have latched onto these poorly understood cells and argued that their existence provides evidence for the simulation theory. So finding activation in this area helps their case a great deal.

[Image: the superior temporal sulcus and surrounding gyri]

Now, if I were a theory-theory theorist (I just like typing that), I'd argue that the activation of the anterior paracingulate cortex on aggression trials indicates that something other than simulation - something more cognitive - was going on. It might even be evidence that we use both simulation and theory-like knowledge in mentalizing, a revelation that should be greeted with a big "Well duh!" but which, given the heated nature of the debate between the two theories, and the tendency of people to take sides, would be quite a shock to some. Regardless of the theoretical implications, though, it's just cool to think that someone actually conducted a study on the neural correlates of playing chicken, even if they weren't playing in cars.

Comments

So, if you chose randomly across a large number of trials, your average reward would be 0.

I'm pretty sure that this is false. If your opponent chooses randomly and is equally likely to choose either option, then your expected reward is 0. But if your opponent always aggresses then you're going to lose money (regardless of your choices) and if your opponent always reconciles then you're going to make money. Unless you get lucky or identify a pattern, the same things will happen if your opponent only tends to aggress or reconcile.
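To put numbers on that (using hypothetical +/-100 and +/-10 payoffs, since the paper's actual amounts aren't stated here):

```python
# Expected per-game reward for someone choosing 50/50 at random,
# as a function of the opponent's aggression probability p.
# Hypothetical payoffs: AR = +100, RR = +10, RA = -10, AA = -100
# (first letter = your move, "A" = aggress, "R" = reconcile).
def random_player_ev(p: float) -> float:
    ev_if_you_aggress = p * -100 + (1 - p) * 100  # AA vs. AR outcomes
    ev_if_you_reconcile = p * -10 + (1 - p) * 10  # RA vs. RR outcomes
    return 0.5 * ev_if_you_aggress + 0.5 * ev_if_you_reconcile

print(random_player_ev(0.5))  #   0.0 -> breaks even vs. a random opponent
print(random_player_ev(1.0))  # -55.0 -> loses vs. a constant aggressor
print(random_player_ev(0.0))  #  55.0 -> wins vs. a constant reconciler
```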

Looking at this game strategically, I think that there are four main considerations: 1) if your opponent chooses one option more than the other, then you should tend to favor the other option, 2) if you can find a pattern in your opponent's choices, you should choose accordingly, 3) if your choices can influence your opponent's subsequent choices, then you should try to induce your opponent to reconcile (like by using a tit-for-tat strategy, as in iterated prisoner's dilemmas; see the sketch below), and 4) if you are risk averse, you should tend to favor reconciling. It's not clear how participants understood the game, but it's possible that they believed that none of the first three considerations were relevant to the game against the computer and that all three were relevant for the game against the human. (From your description, this is probably at least true for the first consideration.) In other words, switching the opponent from a human to a computer could change the game from a strategically complex iterated game of chicken to strategically simple one-shot games of chicken against a random opponent. So I wouldn't be surprised if there was some extra cognition going on in the human condition, and I don't think that would tell us much of anything about mentalization.
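As a rough sketch of the third consideration (again with the hypothetical +/-100 and +/-10 payoffs), here's what tit-for-tat looks like in iterated chicken. Against an opponent who really does choose at random, as the study's algorithm apparently did, it buys you nothing in expectation, which is part of my point:

```python
import random

# Hypothetical payoffs: (my move, opponent's move) -> my reward.
PAYOFFS = {("A", "R"): 100, ("R", "R"): 10, ("R", "A"): -10, ("A", "A"): -100}

def tit_for_tat_vs_random(rounds: int = 48, p_opp_aggress: float = 0.5,
                          seed: int = 0) -> float:
    """Tit-for-tat: open by reconciling, then copy the opponent's
    previous move. The opponent aggresses at a fixed random rate."""
    rng = random.Random(seed)
    my_move, total = "R", 0
    for _ in range(rounds):
        opp_move = "A" if rng.random() < p_opp_aggress else "R"
        total += PAYOFFS[(my_move, opp_move)]
        my_move = opp_move  # punish aggression, reward reconciliation
    return total

# Against a 50/50 opponent, every strategy has an expected value of 0,
# so tit-for-tat only helps if the opponent can actually respond to you.
print(tit_for_tat_vs_random())
```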

I would like to see what instructions the players were given, along with the players' explanations of what they thought of the games after they were played (since the understanding of the game that players developed while they were playing could have influenced their thinking for the rest of the game). But mostly, I'd like to see a study with a better control condition.

I am working on an experiment along these lines (using a standard PD game), and one thing I hope we can look at when we run the task in the MRI scanner is whether the differences in neural activation (as well as the pattern of play) when people think they are playing a computer vs. a person differ between people who think of the computer as "minded" and those who don't. Some of our subjects during debriefing use "theory of mind" talk in reference to the computer program at least as much as the other people--"I was trying to figure out what the computer was thinking," "I hoped the computer would want to cooperate and got angry when it didn't," etc. (Perhaps these people have watched too many sci-fi movies.) I predict that the differences in brain activity will be minimal for people who seem to be using ToM against the computer. I know of some other results on economic games where people will punish a computer program (even at a cost) as much as other people.

Could you post the site where you found the paper (it doesn't appear in the Articles in Press at the NeuroImage website)?

By Eddy Nahmias (not verified) on 11 Jul 2006 #permalink

They chose chicken over the better-known game-theoretic game, the Prisoner's Dilemma, because in Prisoner's Dilemma tasks, people don't always behave the way game theory says they should, which leads to empirical and theoretical problems.

Isn't this true in chicken also? I expect the emotional desire to punish those who have sinned against you (even at a cost to yourself) is part of human nature and not some weird artifact of the prisoner's dilemma game.

After thinking about this for a while, I've concluded that what I would do depends on how long I would have to think about it.

My first instinct (playing against the computer) was that it doesn't matter what I do, so I might as well aggress all the time. But this is wrong; although the expectation value is the same no matter what, the expected deviation is different. Assuming you believe in the decreasing marginal utility of money, then lower expected deviation is better, so you should reconcile all the time.
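To spell out that deviation point, with the same hypothetical +/-100 and +/-10 payoffs and a 50/50 opponent:

```python
import statistics

# Possible outcomes against a 50/50 opponent, by pure strategy
# (hypothetical payoffs: AR = +100, RR = +10, RA = -10, AA = -100):
always_aggress = [100, -100]    # AR or AA, equally likely
always_reconcile = [10, -10]    # RR or RA, equally likely

for name, outcomes in (("aggress", always_aggress),
                       ("reconcile", always_reconcile)):
    print(name, statistics.mean(outcomes), statistics.pstdev(outcomes))
# aggress 0 100.0
# reconcile 0 10.0   -> same expectation, a tenth of the spread
```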

My first instinct (playing against a human) would again be to aggress all the time, on the theory that a rational opponent will have no choice but to reconcile. But then I realized that only works if I assume my opponent is a game theorist instead of a normal person (which gets me back to my original point). A normal person, as I said, would likely punish aggression with aggression, whether or not this is "rational". So, upon further reflection, I'd adopt a tit-for-tat strategy.

Incidentally, having had psych 101, I know that psychologists usually lie when they describe the nature and purpose of their experiment, and if my "opponent" didn't catch on to my tit-for-tat pretty quickly, I suspect I'd realize the opponent was acting randomly.

By George Weinberg (not verified) on 11 Jul 2006 #permalink

Blar, you're right, it wouldn't be $0 unless your opponent chose randomly as well. I should have said that. You're also right that they needed a better control condition. That's a common criticism of imaging studies, in fact. It seems as though cognitive neuroscientists don't receive a lot of methods training.

Eddy, as I was reading the paper, I wondered whether people think about computer opponents like they do people. I know when I play chess or something against a computer, I use a lot of the thought processes that I would with a person. I think, for example, about what I know about the computer's strategy.

Also, the paper is under "Articles in Press" at the Science Direct page for NeuroImage, but it's on the second page, so you have to click "next."

George, the big problem with the Prisoner's Dilemma is that some people behave altruistically. You wouldn't expect much of that in chicken.

I'll third Chris & Blar's comment that we need a better control condition. Would it be an improvement on the current control condition if you simply told subjects what strategy their opponent was guaranteed to use (i.e., tit-for-tat, always reconcile, always aggress, or alternate), and that their opponent would be human? This seems to be a cleaner comparison, and would specifically highlight the neural activity correlating with *mental* simulation, above and beyond mere *strategy* simulation.

[Incidentally, my personal opinion on why cog neuroscientists have poor control conditions is that the majority of them were originally trained as cognitive psychologists; typical cognitive psychology research methods courses (even those at the grad level) don't explicitly cover imaging techniques. But that's neither here nor there...]

Anyway, cheers on another excellent post.
