For a while now, I’ve been posting about moral cognition whenever a new and interesting result pops up, and I think I’ve said before, though it bears repeating, that every time I read another article on moral cognition, I’m more confused than I was before reading it. Part of the problem, I think, stems from a tendency toward theoretical extremes. For a long time, in fact for most of the history of moral psychology, empirical or otherwise, some form of “rationalism” dominated: the view that there are ethical rules in our heads, and that moral judgment involves applying those rules to the situation being judged. More recently, moral psychologists have argued that moral judgment may be much less “rational” than previously believed. In the extreme view, represented most explicitly by Jonathan Haidt and his “social intuitionist” theory, rational justifications for moral judgments are largely, if not entirely, post hoc rationalizations, while the actual decision-making process is driven by emotion and automatically activated “intuitions.”
These extremes are, I’ve long felt, partly a result of methodological considerations. In particular, the moral dilemmas most often used in moral psychology research tend to admit either a rule-based solution (most often, a utilitarian one) or an “emotional” solution (which can, it should be noted, be framed in terms of “deontological” rules). Much of the research, then, centers on when and under what circumstances people are more likely to make the rule-based or the emotional decision. There is no in-between solution, so you’re left with a sharp distinction between the two kinds of choices.
For example, consider the classic trolley-footbridge problems. In the trolley problem, participants are told that a runaway trolley is bearing down on five unsuspecting people hanging out on the track (like idiots). The participant can flip a switch, causing the trolley to switch tracks, but doing so will result in the trolley striking a single person working on this new track. In most studies, the vast majority of participants indicate that they would flip the switch, killing the one poor bastard but saving the five idiots hanging out on the track. This is the proper utilitarian choice, and is generally treated as the “rational” or rule-based one. In the footbridge problem, the trolley is yet again bearing down on five oblivious people hanging out on the track, but this time the participant is told that he or she is standing on a footbridge over the tracks, and the only way to prevent the trolley from hitting the five people is to throw the poor bastard standing next to you into the trolley’s path, resulting in that one person dying but the five people being saved. The ultimate dilemma in the footbridge version is the same as in the standard trolley problem, because you’re sacrificing one to save five, but in this version participants almost always say that they won’t push the guy off the bridge. This is usually treated as the “emotional” or “intuitive” decision.
In addition to their tendency to cause researchers to see moral judgment in terms of theoretical extremes, these sorts of moral dilemmas have a whole host of problems, not the least of which is the fact that they’re incredibly unrealistic. How many of us are ever going to be in a position to stop a trolley from killing five people, and how likely is it that throwing a guy off a bridge is going to stop the damn thing anyway? Furthermore, the two versions of the problem differ in a bunch of potentially important ways, making interpretation of people’s decisions difficult. Despite these and many other problems, such dilemmas continue to be widely used, for reasons that I must admit escape me.
The most recent example comes from an in press paper by Greene et al. (1), which is an admirable attempt to develop a dual process theory of moral judgment. Dual process theories, which are becoming increasingly common in cognitive science, involve two distinct types of processes or systems, one of which is usually automatic and “intuitive” or heuristic-based (and may be influenced by emotion), and the other of which is more “rational” and deliberate. Since these two types of processes line up nicely with the two posited types of moral decision processes, Greene et al. see a dual process theory as a potential bridge between the “rationalist” and “intuitionist” camps in moral psychology. Under their view, when people make rule-based (e.g., utilitarian) decisions, they’re using the rational (often denoted System 2) process, and when they make the “emotional” (e.g., non-utilitarian) decision, they’re using the “intuitive” (or System 1) process.
If this dual process theory is correct, then interfering with one of the two systems should selectively interfere with the corresponding decision type, without affecting the other. To test this theory, then, Greene et al. provided participants with moral dilemmas like the footbridge problem (other dilemmas included the “crying baby” problem, in which a crying baby will alert a hostile enemy to the position of several people) in one of two conditions: a cognitive load condition or a control condition. In the cognitive load condition, digits scrolled across the screen while participants were making the moral decision, and they had to indicate whenever the digit was a 5. This task increases participants’ cognitive load (hence the condition name) and makes it more difficult to cognitively process other information. Thus, it should selectively interfere with System 2 processes, but not affect System 1 processes. The prediction, then, is that the cognitive load condition will interfere with utilitarian judgments, but not the “intuitive” ones.
And this is essentially what Greene et al. found. While the cognitive load condition didn’t affect the rate of utilitarian responding (around 60% in both conditions), it did selectively influence the amount of time it took to make a decision. That is, in the cognitive load condition, participants took longer to make utilitarian than non-utilitarian decisions, while there was no difference in decision time in the control condition. This suggests that the cognitive load made the “rational,” System 2 decisions more difficult, but didn’t affect the intuitive, System 1 decisions, consistent with the dual process theory.
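To make the predicted pattern concrete, here’s a toy simulation of it. Everything here is made up for illustration except the roughly 60% utilitarian response rate, which is from the paper: the base response time, the size of the load-induced slowdown, and the noise level are all hypothetical parameters, not Greene et al.’s data. The sketch just encodes the dual process prediction that load adds time only to utilitarian (System 2) decisions, leaving both the choice rates and the non-utilitarian decision times alone.

```python
import random

random.seed(0)

# Hypothetical parameters (not from the paper), in seconds.
BASE_RT = 4.0          # assumed mean response time without any slowdown
LOAD_COST_SYS2 = 1.5   # assumed extra time load imposes on System 2 decisions
NOISE = 0.5            # assumed trial-to-trial variability

def simulate_trial(load: bool):
    """Simulate one decision and its response time.

    Utilitarian (System 2) choices occur ~60% of the time in both
    conditions, mirroring the finding that load didn't change choice
    rates; load slows only the utilitarian decisions.
    """
    utilitarian = random.random() < 0.60
    rt = BASE_RT + random.gauss(0, NOISE)
    if load and utilitarian:
        rt += LOAD_COST_SYS2
    return utilitarian, rt

def mean_rt(trials, want_utilitarian):
    """Mean response time for one decision type within a condition."""
    rts = [rt for u, rt in trials if u == want_utilitarian]
    return sum(rts) / len(rts)

load_trials = [simulate_trial(load=True) for _ in range(5000)]
ctrl_trials = [simulate_trial(load=False) for _ in range(5000)]

# Under load, utilitarian decisions come out slower than non-utilitarian ones...
print(mean_rt(load_trials, True) - mean_rt(load_trials, False))
# ...while in the control condition the two decision types take about as long.
print(mean_rt(ctrl_trials, True) - mean_rt(ctrl_trials, False))
```

The point of the sketch is just that the dual process account predicts an interaction, longer utilitarian response times only under load, rather than a main effect of load on all decisions.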
I think there are two potential problems with these data. The first, and most obvious, is that the cognitive load didn’t affect the rate of utilitarian responding. One would think that if the cognitive load is interfering with System 2 processing, it would reduce System 2 responding. The second is that the study doesn’t involve a corresponding manipulation that should interfere with System 1 responding. My suspicion for some time has been that both intuitive and deliberate processes, that is, both System 1 and System 2, are involved in both types of responses to footbridge-like dilemmas, with one or the other system dominating. For example, in the recent HPV vaccine debate in Texas, one group of people objecting to making the vaccine mandatory essentially argued that doing so would make teenage girls more promiscuous. Given that there is no evidence that such policies increase adolescent sexual behavior, I’m convinced that what happens is, people have an emotional reaction to the association between the HPV vaccine, sex, and children, and this limits the information that they are able to use as input for System 2 processing. I suspect that something similar happens in the moral dilemmas often used in moral psychology research. That is, the type of emotional reaction people have to a dilemma influences what information they will, and will not, consider. If that’s the case, then simply manipulating cognitive or emotional reactions will not provide a clear picture of what’s going on when people make moral decisions, and we’ll be stuck with extremes, even if both extremes are used, as is the case in dual process theories. It may be, for example, that manipulating different emotions, or focusing people’s attention on particular information in the dilemmas, can result in patterns of behavior similar to (or perhaps even clearer than) the one Greene et al. observed in their study.
Until more rigorous studies are conducted, hopefully with better stimuli than the silly moral dilemmas, though, I’m just going to be confused about exactly what might be going on.
(1) Greene, J.D., Morelli, S.A., Lowenberg, K., Nystrom, L.E., & Cohen, J.D. (in press). Cognitive load selectively interferes with utilitarian moral judgment. Cognition.