A few weeks ago, I wrote a post that was pretty critical of the current state of Experimental Philosophy. In the post, I focused on the work of Joshua Knobe, not because his work is the worst Experimental Philosophy has to offer, but because it is, in my mind, the best by far. Yesterday on the Experimental Philosophy blog, David Pizarro linked to a manuscript he’s writing with Knobe and Paul Bloom that demonstrates quite well why I think this, and furthermore provides a very good example of what Experimental Philosophy can be when it closely aligns itself with scientific psychology.
The manuscript is on implicit moral judgments, and is based largely on Knobe’s work on morality and intentionality. For those of you who don’t know this work, I’ll briefly describe it here. I’ll also very briefly describe some of the past work on implicit attitudes. It might also help if you’ve read this post on Jonathan Haidt’s social intuitionist model, but you can probably get by without that. If you’re already familiar with all of these, skip down to the section titled Pizarro, Knobe, and Bloom below the fold.
Knobe’s Work on Intentionality and Morality
Over the past few years, Knobe has conducted several experiments that demonstrate a connection between negative moral judgments (“That’s bad!”) and inferences of intentionality. When a person does something that, as a side effect, causes a foreseeable morally bad outcome, people are more likely to say that he or she caused that outcome intentionally than when a person does something that, as a side effect, causes a foreseeable morally good outcome. To illustrate, here are two of Knobe’s scenarios (from his OPC paper, p. 3):
Scenario 1: The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.’
The chairman of the board answered, ‘I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.’
They started the new program. Sure enough, the environment was harmed.
Scenario 2: The vice-president of a company went to the chairman of the board and said, ‘We are thinking of starting a new program. It will help us increase profits, and it will also help the environment.’
The chairman of the board answered, ‘I don’t care at all about helping the environment. I just want to make as much profit as I can. Let’s start the new program.’
They started the new program. Sure enough, the environment was helped.
When participants read the first scenario, the vast majority say that the chairman intentionally harmed the environment (a morally bad outcome), whereas when they read the second scenario, very few say that the chairman intentionally helped the environment (a morally good outcome). Why is this? I haven’t the slightest idea, and so far, there hasn’t really been any empirical work to answer this question. However, Knobe has demonstrated this relationship between morality and intentionality over and over again, and for our purposes, the important thing is that the relationship appears to be real and robust.
Chances are, if you’re interested enough in psychology to be reading this blog you’ve heard of the Implicit Association Test (IAT)1, but in case you’ve been living under a rock, or have extreme retrograde amnesia, I’ll give you a little bit of a description. In a typical version of the IAT (which you can take yourself here), participants are presented with target words (e.g., stereotypical black or white names) or images (e.g., black or white faces), one at a time, that are from two categories (e.g., black and white). In addition to the target words or images, there are also filler words or images of pleasant or unpleasant things (e.g., roses and spiders). Each time a word or image is presented, participants are asked to place it into one of two categories. For versions of the IAT that measure racial attitudes, one trial will use the categories “White or Pleasant” and “Black or Unpleasant,” and a second trial will use the categories “White or Unpleasant” and “Black or Pleasant.” The time that you take to categorize each item is measured, and the difference between response latencies determines your IAT score. The method for computing the difference scores is a little complicated, but in essence it determines whether you’re faster to place white names or faces into the category “White or Pleasant” than you are to place black faces into “Black or Pleasant,” and so on.
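To make the scoring idea concrete, here is a deliberately simplified sketch in Python. The published scoring algorithm involves trial-level filtering and block-specific standard deviations; the function below only illustrates the core idea of a standardized latency difference, and all the numbers and names are hypothetical.

```python
from statistics import mean, stdev

def iat_difference_score(congruent_ms, incongruent_ms):
    """Toy IAT-style score: the difference in mean response latency
    between the two pairing conditions, standardized by the pooled
    standard deviation. Positive values mean responses were slower
    in the 'incongruent' pairing. A simplification of the published
    scoring procedure, not a reimplementation of it."""
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# Hypothetical latencies (ms) for the two pairing conditions
congruent = [650, 700, 620, 680]    # e.g., "White or Pleasant" / "Black or Unpleasant" trials
incongruent = [820, 900, 790, 860]  # e.g., "White or Unpleasant" / "Black or Pleasant" trials
print(f"{iat_difference_score(congruent, incongruent):.2f}")
```

A real administration would also randomize which pairing comes first and discard implausibly fast or slow trials before computing anything.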
The IAT isn’t the only test of implicit “attitudes.” Other tests include the go/no-go association test (GNAT), which is based on the IAT, the bona fide pipeline technique (BFP), the Extrinsic Affective Simon Task (EAST), and the Evaluative Movement Assessment (EMA). However, the IAT is the most popular, and has received a great deal of attention in the popular press, due in large part to a public relations campaign by its authors and the NSF and NIMH. In my mind, giving the IAT so much publicity is the most irresponsible thing I’ve seen in psychology since I began studying it, short of testifying in court that there is scientific verification of the existence of recovered memories (the IAT, at least, has not ruined anyone’s life). While the IAT has been publicized (by its authors!) as a measure of implicit attitudes, and even more, as a measure of implicit prejudice, there is no real evidence that it measures attitudes, much less prejudices. In fact, it’s not at all clear what it measures, though the fact that its psychometric properties are pretty well defined at least implies that it measures something. On top of that, the IAT (like all of the other implicit tests) has serious methodological flaws that are currently being discussed in the literature. It’s just irresponsible to publicize work, and claim that it does something very particular, when the work is still in the early stages and it’s not at all clear what it’s actually doing (read this paper, or this one, for discussions of some of the problems with the IAT and other measures, including whether they actually measure “attitudes”).
OK, I can climb off my high horse now. On to the Pizarro, Knobe, and Bloom study.
Pizarro, Knobe, and Bloom
With recent theoretical and empirical work in neuroscience and moral psychology, researchers have begun to treat moral judgment as a largely intuitive, perhaps even unconscious, phenomenon. However, it’s not easy to measure (possibly) unconscious moral intuitions, and with all the problems inherent in the implicit measures described above, not the least of which is that it’s not clear they’re actually measuring attitudes, moral psychology is in desperate need of methods for measuring implicit moral intuitions so that these new theories can be thoroughly tested. And this is why I am so excited by the Pizarro, Knobe, and Bloom experiment. Granted, it’s just preliminary work, but it’s still very promising.
Here’s the logic behind the experiment: if people tend to believe that an outcome was produced intentionally when that outcome was both morally bad and foreseeable, but not when it was morally good and foreseeable, then if people believe that an outcome was produced intentionally, and that outcome was foreseeable, we can infer that those people believe the outcome was morally bad. The logic is very simple, and unlike the other implicit measures, it gives us a measure (intentionality) that has an empirically demonstrated connection to what it’s measuring (moral valence).
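The inference can be stated almost mechanically. Here is a sketch; the function name and the midpoint cutoff are my own illustrative choices, not the authors’ (the manuscript compares condition means rather than classifying individual participants):

```python
def infer_implicit_badness(intentionality_rating, outcome_foreseeable=True,
                           scale_midpoint=4.0):
    """If a foreseeable side effect is rated as intentionally produced
    (above the midpoint of the 1-7 scale), infer that the rater
    implicitly judged the outcome morally bad. The per-rater cutoff
    is a hypothetical simplification of the group-level logic."""
    return outcome_foreseeable and intentionality_rating > scale_midpoint

print(infer_implicit_badness(4.5))  # → True
print(infer_implicit_badness(2.8))  # → False
```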
So, Pizarro et al. used four scenarios, two of which involved outcomes that some people might see as morally bad (“implicit transgression scenarios”), and two of which involved morally neutral outcomes (morally neutral scenarios). Here are descriptions of the four scenarios (quoted from p. 3 of the manuscript):
Implicit Transgression Scenarios
- A director who makes a music video that had the effect of encouraging French-kissing in public among gay men.
- A vice-president of advertising who approves an ad campaign encouraging interracial sex.
Morally Neutral Scenarios
- A director who makes a music video encouraging French-kissing in public among heterosexual couples.
- A vice-president of advertising who approves an ad campaign encouraging the placement of gardenias in one’s office.
Each of these outcomes was described as a side effect, as in the environment scenarios described above. Half of the participants received the implicit transgression scenarios, and half the morally neutral scenarios. The participants were asked two questions for each scenario (p. 3):
Did [Person A] intentionally encourage [Behavior X]?
Is there anything wrong with [Behavior X]?
The first question was answered on a 1 (not at all) to 7 (definitely) scale, and the second was a yes or no question.
The answers to the second question were mostly unsurprising. Few participants said that they thought there was anything wrong with encouraging gay men to French kiss in public (in fact, more people said it was wrong to encourage heterosexual couples to kiss in public) or with encouraging interracial sex. These percentages were not much different from those for the two morally neutral scenarios. However, the answers to the first question looked different. The mean intentionality ratings for both of the implicit transgression scenarios were close to 4.5 (on the 7-point scale), while the means for the two morally neutral scenarios were under 3. So, on average, people felt that the outcomes of the two morally neutral scenarios were not intentionally produced, but that the outcomes of the implicit transgression scenarios were. Using the logic described above, we can thus infer that despite their explicit statements to the contrary, participants actually did believe the outcomes of the implicit transgression scenarios were morally bad.
To further connect the results to past research, they also correlated the intentionality scores with overall disgust scores, and found statistically significant positive correlations between overall disgust and intentionality for the two implicit transgression scenarios, but not for the morally neutral scenarios. This is interesting in light of previous research showing that feelings of disgust tend to produce negative moral judgments2 (see also this post).
Once again, I am very excited by this research. I think it has a great deal of potential to help us study moral judgment from an intuitionist perspective, and also to help us to understand why morality and intentionality are connected, perhaps through theories of moral judgment like Haidt’s social intuitionist theory. Of course, this is just preliminary work; there’s still a lot to be done. For one, in order to be really useful, this measure of moral intuitions would have to be sensitive to the graded structure of moral valences. Presumably, most people feel that murder is much worse than gay kissing (at least most people in blue states), and a measure of moral intuitions would have to capture this. Furthermore, it will be important to show that intentionality scores correlate with moral emotions other than disgust (see Haidt, 2003, or this post, for a discussion of other moral emotions3). And there are some obvious problems. It won’t be possible to come up with “Knobe Scenarios” for many morally bad outcomes, for example. Still, it’s a promising start, and in my mind, a brilliant use of Knobe’s research that could do as much for experimental philosophy as for moral psychology.
1. Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74, 1464-1480.
2. E.g., Nichols, S. (2002b). Norms with feeling: Towards a psychological account of moral judgment. Cognition, 84, 221-236.
3. Haidt, J. (2003). The moral emotions. In R.J. Davidson, K.R. Scherer, & H.H. Goldsmith (Eds.), Handbook of Affective Sciences (pp. 852-870). Oxford: Oxford University Press.