Over at Mind Matters we recently featured an interesting article by Walter Sinnott-Armstrong and Adina Roskies (two philosophers at Dartmouth) reviewing a recent paper by Joshua Greene et al. The paper tested the dual-process model of morality, which argues that every moral decision is the result of a tug-of-war between the “rational” brain (centered in the prefrontal cortex) and the “emotional” brain, rooted in areas like the amygdala and insula.
In their study, Greene et al. give subjects difficult moral dilemmas in which one alternative leads to better consequences (such as more lives saved) but also violates an intuitive moral restriction (it requires a person to directly or intentionally cause harm to someone else). For example, in the “crying baby” dilemma subjects must judge whether it is wrong to smother their own baby in order to save a large group of people hiding from enemy soldiers. In this scenario, which was also used by the television show M*A*S*H, the baby is doomed either way: unless it is smothered, the soldiers will hear it cry and kill everyone, including the baby. Sixty percent of people choose to smother the baby in order to save more lives. A judgment that it is appropriate to save the most lives, even if it requires you to suffocate a child, is labeled “utilitarian” by Greene et al., whereas a judgment that it is not appropriate is called “deontological.” These names pay homage to traditional moral philosophies.
Based on previous fMRI studies, Greene proposes a dual-process model of moral judgments. This model makes two central claims. First, when subjects form deontological judgments, emotional processes are said to override controlled cognitive processes. In other words, the subjects who are unwilling to smother the baby are being swayed by their emotions; they can’t bear the idea of hurting a helpless child. This claim has been supported by a flurry of recent behavioral and neuroimaging studies. Second, the dual-process model claims that controlled cognitive processes cause utilitarian moral judgments. The new Cognition study puts that second claim to the test.
Neuroimaging reveals only correlations; it cannot determine whether a certain brain area is causing a particular judgment. But intervening in a process can provide evidence of causation. In the Cognition study, Greene et al. attempted to interfere with moral reasoning by increasing the cognitive load on subjects. They had subjects perform the moral judgment task at the same time as a monitoring task, in which subjects viewed a stream of numerals and responded to occurrences of “5.” If this added cognitive load interferes with the controlled cognitive processes that cause utilitarian judgments, the researchers surmised, then subjects should make fewer utilitarian judgments and should form these judgments more slowly.
As hypothesized, added cognitive load led to longer reaction times for utilitarian judgments, but the researchers found no effect on reaction times for deontological judgments. Although it took subjects longer to approve of acts like smothering the baby when they were also watching for the number 5, it took them no longer to judge such acts inappropriate.
I find this research interesting for a few reasons. While stories of Darwinian evolution often stress the amorality of natural selection – we are all Hobbesian brutes, driven to survive by selfish genes – our psychological reality is much less bleak. We aren’t fallen angels, but we also aren’t depraved hominids. Greene has helped illuminate the intricate network of brain areas that keep us, most of the time, from hurting other people. (Richard Rorty argued that the “avoidance of cruelty” should be the founding principle of liberalism. He was right: the emotional areas of the brain are automatically activated by the sight of someone else in pain. We can’t help but sympathize with strangers – what Adam Smith called fellow-feeling.) One of the interesting implications of this new Greene paper is that distracting the prefrontal cortex with a simple task might actually make us a little bit more sympathetic. Because our “rational” brain is distracted, we might become more likely to rely on our moral emotions, especially in situations that require a fast reaction.
The second reason I find this research fascinating is that it helps us understand psychopaths, who seem to have a problem with their emotional brain. When normal people are shown staged videos of strangers being subjected to pain – like a powerful electrical shock – they automatically generate a visceral emotional reaction. Their hands start to sweat and their blood pressure surges. But psychopaths feel nothing. It’s as if they were watching a blank screen. Most people react differently to emotionally charged verbs like kill or rape than to neutral words like sit or walk, but not psychopaths. The words all seem equivalent. When normal people tell lies, they exhibit the classic symptoms of nervousness. Lie detectors work by measuring these signals. But psychopaths are able to consistently fool the machines. Dishonesty doesn’t make them anxious because nothing makes them anxious. They can lie with impunity. When criminologists looked at the most violent wife batterers, they discovered that, as the men became more and more aggressive, their blood pressure and pulse actually dropped. The acts of violence had a calming effect.