The Influence of Irrelevant Emotions on Moral Judgments

Back on the old blog, I wrote a series of posts in which I detailed a revolution in moral psychology. Sparked largely by recent empirical and theoretical work by neuroscientists, psychologists studying moral judgment have transitioned from Kantian rationalism, which goes back as far as, well, Kant (and, in psychology, the Kantian Jean Piaget), to a more Humean approach that treats emotion and motivation as central.

Some of the more interesting work utilizing this new approach has been done by Joshua Greene and his colleagues.1 They have demonstrated that we use different processes to make moral judgments about personal violations than about impersonal violations. Greene defines the two types of violations as follows2:

A moral violation is personal if it is: (i) likely to cause serious bodily harm, (ii) to a particular person, (iii) in such a way that the harm does not result from the deflection of an existing threat onto a different party. A moral violation is impersonal if it fails to meet these criteria. (p. 519)

As an example of a moral dilemma that involves a potential personal violation, they use the famous footbridge problem, and as an example of a potential impersonal violation, they use the trolley problem. Here are their versions:

  • Trolley: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one?
  • Footbridge: The trolley is headed for five people. You are on a footbridge over the tracks next to a large man. The only way to save the five people is to push this man off the bridge and into the path of the trolley. Is that morally permissible?

When presented with the footbridge problem, people almost always say that it is not permissible to push the man off the bridge, but when presented with the trolley problem, they almost always say that it is morally permissible. It seems as though participants tend to use utilitarian principles to make judgments in the trolley problem, but not in the footbridge problem. Greene and his colleagues explain this difference by arguing that potential personal violations elicit an emotional response, while impersonal violations allow us to use more cognitive processes (perhaps reasoning from moral principles) to make moral judgments. As evidence of this, they have shown that the footbridge problem causes activation in areas of the brain associated with emotion, while the trolley problem activates cognitive areas. It appears that the negative emotions associated with pushing someone off a bridge override the moral principles that demand we save the five people.

In Greene's studies, the negative emotions people experience are elicited by the moral dilemmas themselves, but what would happen if we were experiencing emotions unrelated to the dilemmas as we encountered them? Could these emotions influence how we make moral judgments? For example, if we were experiencing positive emotions when we encountered a potential personal moral violation, could those irrelevant emotions override the negative emotions caused by the personal violation, and cause us to use more principle-based cognitive processes in making a judgment?

In a study published in the June issue of Psychological Science, Piercarlo Valdesolo and David DeSteno3 sought to answer this question by inducing positive emotions in participants and presenting them with the trolley and footbridge problems. Before being presented with the problems, half of their participants watched a 5-minute Saturday Night Live clip (I assume it was an old clip, so that it would actually be funny), intended to induce positive emotions, while the other half watched a "neutral" 5-minute clip of a documentary on a Spanish village. All participants were then presented with both problems, in random order.

Recall that in most cases, participants presented with these two problems answer that it is OK to turn the trolley (impersonal violation), but not OK to push the man off the bridge (personal violation). If the SNL clip induces positive emotions, and these override the negative emotions caused by personal violations, then participants who viewed that clip should be more likely to reason using utilitarian considerations when they read the footbridge problem, and thus say that it is morally permissible to push the man off the bridge to stop the trolley. This is in fact what they found. Twenty-five percent of the participants who viewed the SNL clip said that it was permissible to push the man off the bridge, while only 8% of the participants who viewed the neutral clip said it was permissible. As expected, the clips did not affect the judgments made in the trolley problem. So, emotions elicited by stimuli unrelated to a moral dilemma can influence judgments we make about that dilemma.
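
For anyone curious how one would check whether a 25% vs. 8% split like this is statistically meaningful, here's a minimal sketch of a two-proportion z-test in Python. The group sizes are my own assumption (the actual Ns aren't reported here), so the numbers are purely illustrative rather than a reanalysis of Valdesolo and DeSteno's data.

```python
# Illustrative only: the post reports 25% vs. 8% "permissible" judgments on the
# footbridge problem, but not the group sizes. The n = 40 per condition below is
# an assumption made up for this sketch, not a figure from the actual study.
import math

def two_proportion_ztest(successes1, n1, successes2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)             # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of the difference
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal CDF via erf
    return z, p_value

# 25% of 40 = 10 "permissible" responses (SNL clip); 8% of 40 is roughly 3 (neutral clip).
z, p = two_proportion_ztest(10, 40, 3, 40)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```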

I have to admit that I find this a bit disturbing. It seems that when we encounter a moral problem, we're unable to distinguish the emotions elicited by the problem itself from those that result from other properties of the context in which we encounter it. One can imagine situations in which this actually helps us to make moral judgments, as was the case in this study. If using more cognitive, "rational" processes is better than using emotion-driven processes, then the experiment shows that contexts that induce positive emotions can actually lead us to make better moral judgments. However, the potential for manipulation is great. One can imagine politicians, for example, presenting moral problems (e.g., abortion) in contexts designed to induce positive or negative emotions unrelated to those problems, in order to influence the moral judgments people make (and, in turn, how they vote). And I suspect that as we learn more about the roles emotion plays in moral judgment, disturbing findings like this one will become more common.

1Greene, J.D., Sommerville, R.B., Nystrom, L.E., Darley, J.M., & Cohen, J.D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105-2108.
2Greene, J., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6(12), 517-523.
3Valdesolo, P., & DeSteno, D. (2006). Manipulations of emotional context shape moral judgment. Psychological Science, 17(6), 476-477.

Hi, been reading your blog for a while, one of my favourites. I especially appreciated this post, since I am writing my honours thesis on Greene. Hope you find time to post regularly.

Nick.

I've always wondered why, if Smith is able to push Jones off the bridge, Smith is not able to jump himself. This is a problem of moral _imagination_ which, if overcome, completely changes the situational factors deemed relevant to the moral calculus.

By bob koepp on 16 Sep 2006

I only want to say that there is a very, very real difference between being presented with a paragraph describing an abstract and rather hackneyed situation, and holding the lives of five people in your hands. I think these thought exercises are fine, but I'd be far more inclined to give weight to an experiment that tested the influence of positive or negative mood states on a small but real moral problem. The real question ought to be whether people in a positive frame of mind are more or less likely to BEHAVE unethically than those in a negative mood state.

Of course, written scenarios are different from real-life situations, but you need this sort of foundation to begin conducting more ecologically-valid studies.

That paper makes an interesting contrast with this, which finds that "happiness increases in ethical proclivities and that greater happiness results in improved ethical judgments"

Doesn't this just mean that about 17% (one in six) of people act this way, not the grand "we"?

Well, that's 17% with a fairly minor manipulation, in a task where fewer than 10% is the norm across dozens of studies. When I say "we," I mean to imply that all of us are likely subject to this sort of manipulation, even if not all of us react to the manipulation in every context.

Hi Siona,
The effect of positive affect or mood on everyday moral problems has been documented to some extent. If one considers helping someone gather books that had fallen as a moral action, then a study (I don't have the reference) had earlier reported that those in a happy mood would be more willing to help pick up the books.

Similarly, another study had found that positive affect (reading positive words on a list, like 'courtesy') would determine the length of time one would wait before interrupting someone (the results were astonishing - people primed on positive words would apparently wait indefinitely before interrupting, while others who had not been primed would interrupt very early, and at times rudely).

You, and other readers of this blog, may find interesting an alternative explanation I have for the Valdesolo & DeSteno study.

Interesting that only 25% found the action permissible - only 17 percentage points more than the control group. The other 75% were not influenced by emotions. What does that say? Are they apathetic? Wrong and uninfluenced? Principled? What do you think about them? The fact that less than 20% were actually influenced only means that some are susceptible to influence by positive emotions in a moral dilemma. How can you predict who would be and who wouldn't? Maybe we would want air traffic controllers to be reading comic books on their breaks (or at least that 17% of them).

Based on the little information we have from other studies, I'd guess that it's not a matter of only 17% being influenced (that's still a pretty high relative number), but instead a matter of the effectiveness of the manipulation and the general context. I suspect that we're all susceptible to such manipulations, but not in every context, and for every question.

When you think about it, viewing 5 minutes of SNL is a pretty minor mood manipulation. In most studies, the mood manipulation is more personal (e.g., writing about a really positive experience in your life), and that sort of manipulation has been shown, in pilot studies, to significantly increase positive affect.