A few weeks ago, I wrote a post that was pretty critical of the current state of Experimental Philosophy. In the post, I focused on the work of Joshua Knobe, not because his work is the worst Experimental Philosophy has to offer, but because it is, in my mind, the best by far. Yesterday on the Experimental Philosophy blog, David Pizarro linked to a manuscript he's writing with Knobe and Paul Bloom that demonstrates quite well why I think this, and furthermore provides a very good example of what Experimental Philosophy can be when it closely aligns itself with scientific psychology.
The manuscript is on implicit moral judgments, and is based largely on Knobe's work on morality and intentionality. For those of you who don't know this work, I'll briefly describe it here. I'll also very briefly describe some of the past work on implicit attitudes. It might also help if you've read this post on Jonathan Haidt's social intuitionist model, but you can probably get by without that. If you're already familiar with all of these, skip down to the section titled Pizarro, Knobe, and Bloom below the fold.
Knobe's Work on Intentionality and Morality
Over the past few years, Knobe has conducted several experiments that demonstrate a connection between negative moral judgments ("That's bad!") and inferences of intentionality. When a person does something that, as a side effect, causes a foreseeable morally bad outcome, people are more likely to say that he or she caused the morally bad outcome intentionally than if a person does something that, as a side effect, causes a foreseeable good outcome. To illustrate, here are two of Knobe's scenarios (from his OPC paper, p. 3):
Scenario 1: The vice-president of a company went to the chairman of the board and said, 'We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.'
The chairman of the board answered, 'I don't care at all about harming the environment. I just want to make as much profit as I can. Let's start the new program.'
They started the new program. Sure enough, the environment was harmed.
Scenario 2: The vice-president of a company went to the chairman of the board and said, 'We are thinking of starting a new program. It will help us increase profits, and it will also help the environment.'
The chairman of the board answered, 'I don't care at all about helping the environment. I just want to make as much profit as I can. Let's start the new program.'
They started the new program. Sure enough, the environment was helped.
When participants read the first scenario, the vast majority say that the chairman intentionally harmed the environment (a morally bad outcome), whereas when they read the second scenario, very few say that the chairman intentionally helped the environment (a morally good outcome). Why is this? I haven't the slightest idea, and so far, there hasn't really been any empirical work to answer this question. However, Knobe has demonstrated this relationship between morality and intentionality over and over again, and for our purposes, the important thing is that the relationship appears to be real and robust.
Chances are, if you're interested enough in psychology to be reading this blog you've heard of the Implicit Association Test (IAT)1, but in case you've been living under a rock, or have extreme retrograde amnesia, I'll give you a little bit of a description. In a typical version of the IAT (which you can take yourself here), participants are presented with target words (e.g., stereotypical black or white names) or images (e.g., black or white faces), one at a time, that are from two categories (e.g., black and white). In addition to the target words or images, there are also filler words or images of pleasant or unpleasant things (e.g., roses and spiders). Each time a word or image is presented, participants are asked to place it into one of two categories. For versions of the IAT that measure racial attitudes, one trial will use the categories "White or Pleasant" and "Black or Unpleasant," and a second trial will use the categories "White or Unpleasant" and "Black or Pleasant." The time that you take to categorize each item is measured, and the difference between response latencies determines your IAT score. The method for computing the difference scores is a little complicated, but in essence it determines whether you're faster to place white names or faces into the category "White or Pleasant" than you are to place black faces into "Black or Pleasant," and so on.
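To make the latency-difference idea concrete, here's a minimal sketch of how an IAT-style difference score might be computed. This is my own simplified illustration, not the full published scoring algorithm (which includes additional steps like error penalties and trial filtering), and the latencies below are hypothetical:

```python
# Simplified sketch of an IAT-style difference score: the gap between mean
# response latencies in the two pairings, scaled by the pooled standard
# deviation of all latencies. Illustration only, not the official algorithm.
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Positive scores mean slower responses in the 'incompatible' pairing."""
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Hypothetical latencies (in milliseconds) for one participant:
compatible = [650, 700, 620, 680, 640]    # e.g., "White or Pleasant" pairing
incompatible = [820, 870, 790, 850, 810]  # e.g., "Black or Pleasant" pairing
print(round(iat_d_score(compatible, incompatible), 2))  # → 1.8
```

The point of the scaling step is that a raw latency difference of, say, 170 ms means something different for a fast, consistent responder than for a slow, variable one.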
The IAT isn't the only test of implicit "attitudes." Other tests include the go/no-go association test (GNAT), which is based on the IAT, the bona fide pipeline technique (BFP), the Extrinsic Affective Simon Task (EAST), and the Evaluative Movement Assessment (EMA). However, the IAT is the most popular, and has received a great deal of attention in the popular press, due in large part to a public relations campaign by its authors and the NSF and NIMH. In my mind, giving the IAT so much publicity is the most irresponsible thing I've seen in psychology since I began studying it, short of testifying in court that there is scientific verification of the existence of recovered memories (the IAT, at least, has not ruined anyone's life). While the IAT has been publicized (by its authors!) as a measure of implicit attitudes, and even more, as a measure of implicit prejudice, there is no real evidence that it measures attitudes, much less prejudices. In fact, it's not at all clear what it measures, though the fact that its psychometric properties are pretty well defined at least implies that it measures something. On top of that, the IAT (like all of the other implicit tests) has serious methodological flaws that are currently being discussed in the literature. It's just irresponsible to publicize work, and claim that it does something very particular, when the work is still in the early stages and it's not at all clear what it's actually doing (read this paper, or this one, for discussions of some of the problems with the IAT and other measures, including whether they actually measure "attitudes").
OK, I can climb off my high horse now. On to the Pizarro, Knobe, and Bloom study.
Pizarro, Knobe, and Bloom
With recent theoretical and empirical work in neuroscience and moral psychology, researchers have begun to treat moral judgment as a largely intuitive, perhaps even unconscious, phenomenon. However, it's not easy to measure (possibly) unconscious moral intuitions, and with all the problems inherent in the implicit measures described above, not the least of which is that it's not clear they're actually measuring attitudes, moral psychology is in desperate need of methods for measuring implicit moral intuitions so that these new theories can be thoroughly tested. And this is why I am so excited by the Pizarro, Knobe, and Bloom experiment. Granted, it's just preliminary work, but it's still very promising.
Here's the logic behind the experiment: if people tend to believe that an outcome was produced intentionally when that outcome was both morally bad and foreseeable, but not when it was morally good and foreseeable, then if people believe that an outcome was produced intentionally, and that outcome was foreseeable, we can infer that those people believe the outcome was morally bad. The logic is very simple, and unlike the other implicit measures, it gives us a measure (intentionality) that has an empirically demonstrated connection to what it's measuring (moral valence).
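The inference rule can be spelled out in a few lines. This is my own toy formalization, not code from the manuscript, and the threshold is a hypothetical choice (the midpoint of the rating scale used below):

```python
# Toy formalization of the inference: if a foreseeable side effect is judged
# intentional, infer an implicit negative moral judgment of that outcome.
# The threshold is a hypothetical choice, not a value from the paper.
SCALE_MIDPOINT = 4  # midpoint of a 1 (not at all) to 7 (definitely) scale

def implies_negative_judgment(intentionality_rating, foreseeable=True):
    """True when the response pattern licenses the inference that the
    respondent implicitly judged the outcome morally bad."""
    return foreseeable and intentionality_rating > SCALE_MIDPOINT

print(implies_negative_judgment(4.5))  # → True
print(implies_negative_judgment(2.9))  # → False
```

Note that foreseeability does the real work here: without it, a high intentionality rating might simply reflect that the outcome was the agent's explicit goal, and the inference to an implicit moral judgment wouldn't go through.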
So, Pizarro et al. used four scenarios, two of which involved outcomes that some people might see as morally bad ("implicit transgression scenarios"), and two of which involved morally neutral outcomes (morally neutral scenarios). Here are descriptions of the four scenarios (quoted from p. 3 of the manuscript):
Implicit Transgression Scenarios
- A director who makes a music video that had the effect of encouraging French-kissing in public among gay men.
- A vice-president of advertising who approves an ad campaign encouraging interracial sex.
Morally Neutral Scenarios
- A director who makes a music video encouraging French-kissing in public among heterosexual couples
- A vice-president of advertising who approves an ad campaign encouraging the placement of gardenias in one's office.
Each of these outcomes was described as a side effect, as in the environment scenarios described above. Half of the participants received the "implicit transgression scenarios," and half the morally neutral scenarios. The participants were asked two questions for each scenario (p. 3):
Did [Person A] intentionally encourage [Behavior X]?
Is there anything wrong with [Behavior X]?
The first question was answered on a 1 (not at all) to 7 (definitely) scale, and the second was a yes or no question.
The answers to the second question were mostly unsurprising. Few participants said that they thought there was anything wrong with encouraging gay men to French kiss in public (in fact, more people said it was wrong to encourage heterosexual couples to kiss in public) or with encouraging interracial sex. These percentages were not much different from those for the two morally neutral scenarios. However, the answers to the first question looked different. The mean intentionality ratings for both of the implicit transgression scenarios were close to 4.5 (on the 7-point scale), while the means for the two morally neutral scenarios were under 3. So, on average, people felt that the outcomes of the two morally neutral scenarios were not intentionally produced, but that the outcomes of the implicit transgression scenarios were. Using the logic described above, we can thus infer that despite their explicit statements to the contrary, participants actually did believe the outcomes of the implicit transgression scenarios were morally bad.
To further connect the results to past research, they also correlated the intentionality scores with overall disgust scores, and found statistically significant positive correlations between overall disgust and intentionality for the two implicit transgression scenarios, but not for the morally neutral scenarios. This is interesting in light of previous research showing that feelings of disgust tend to produce negative moral judgments2 (see also this post).
Once again, I am very excited by this research. I think it has a great deal of potential to help us study moral judgment from an intuitionist perspective, and also to help us understand why morality and intentionality are connected, perhaps through theories of moral judgment like Haidt's social intuitionist theory. Of course, this is just preliminary work; there's still a lot of work to be done. For one, in order to be really useful, this measure of moral intuitions would have to be sensitive to the graded structure of moral valences. Presumably, most people feel that murder is much worse than gay kissing (at least most people in blue states), and a measure of moral intuitions would have to capture this. Furthermore, it will be important to show that intentionality scores correlate with moral emotions other than disgust (see Haidt, 2003, or this post, for a discussion of other moral emotions3). And there are some obvious problems. It won't be possible to come up with "Knobe Scenarios" for many morally bad outcomes, for example. Still, it's a promising start, and in my mind, a brilliant use of Knobe's research that could do as much for experimental philosophy as for moral psychology.
1Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74, 1464-1480.
2E.g., Nichols, S. (2002b). Norms with feeling: Towards a psychological account of moral judgment. Cognition, 84, 221-236.
3Haidt, J. (2003). The moral emotions. In R.J. Davidson, K.R. Scherer, & H.H. Goldsmith (Eds.), Handbook of Affective Sciences. Oxford: Oxford University Press, 852-870.
This actually is fairly interesting; I wasn't expecting to be impressed. The title of the paper seems to me to be a bit strong given what they show (e.g., I'm not convinced that placing X under the same presumptions usually associated with things identified as morally wrong need always be an 'implicit judgment' that X is morally wrong -- and I think 'transgressive' is much better than 'morally wrong' here, anyway, since the relation between the two is not always straightforward even in conscious thought), but you're right that the content of the paper is quite exciting. As I said, I wasn't expecting to be impressed by it.
Hm, right now I'm not impressed, but I probably could be.
Did Knobe test for any finer correlations between intentionality and other feelings? My instinct for the implicit transgression scenarios is that they're subversive, but not wrong. I would guess I would be more likely to say something was intentional if it went against the norm (i.e., the morality of the majority), no matter what I thought of it, morally. For example, even though I'm an atheist, if something has the effect of strengthening people's faith in God, I'll think that was unintentional, and if something has the effect of making people question the existence of God, I'll think it was intentional.
In my humble opinion, the connection between intention and moral judgments can be easily explained if we put the sentences in different frames. It seems that the cases from the scenarios might, in general, be put under the two following cases:
1. Consider someone who has a moral reason not to do what he does (moral judgment). By doing what he does, he is intentionally (intention) ignoring that reason.
2. Consider someone who doesn't have any moral reason not to do what he does (no moral judgment). By doing what he does, he isn't intentionally ignoring any reason, as there is no reason to ignore...
For example, in the harming-the-environment case, we think that the chairman had a reason not to start the program, and he intentionally ignored that reason.
But in the second example, the helping-the-environment case, we think that the chairman had no reason not to start the program, so there was no reason to ignore ("will help the environment" doesn't play any role there, and might as well be replaced with "will swap the places of two grains of sand in a desert").
In both cases, "doing what he does" means continuing with the program with the intention to make money.
If I'm right, what Pizarro, Knobe, and Bloom were asking the participants (in a non-obvious way) is:
-Do you think there was a moral reason for the person *not to do* what they have done?
Another thing I think they should be careful about is whether the 'reason' has to be a moral reason, or whether other kinds of reasons can play that role in these kinds of examples...
A band is interested in making a lot of money with their next record. As a result, they will affect their fans so that they will stop being their fans. But the band doesn't care about keeping their fans. They choose to go for the money. Did they lose their fans intentionally?
A band is interested in making a lot of money with their next record. As a result, they will affect their fans so that they will continue being their fans. But the band doesn't care about keeping their fans. They choose to go for the money. Did the band keep their fans intentionally?
I didn't refresh the page for some time, so I didn't read the ThePolynomial's comment before posting my comment.
But if my theory is right, then society might as well be treated as a reason not to do something, and it doesn't have to be that people consider it morally wrong; maybe it's just socially unacceptable (in a given society).
These are some good points. I'll first note that, as Pizarro et al. note, the correlation with disgust strongly suggests that it is participants' own moral reaction, and not a reaction to societal mores. However, it would be interesting to see if you could correlate inferences of intentionality with emotions not associated with moral judgment.
Tanasije, your interpretation is certainly possible, but I think it would be difficult to explain the range of experiments in which Knobe has found a connection between morality and intentionality. Like I said, it's not clear why there's a connection, but check out Knobe's website (just google Joshua Knobe) for some of his other papers. He just finished a review of his work on intentionality, I think, and it should be available on his site.
Would you say that what those experiments are showing are prototype effects for the concept of intentional action?
And connected to that... would you say that a proper explanation would need to give the schemata of the concept of intentional action, or the frames in which the concept of intentional action is understood (be it that they include general cases like "reason to do something" and "reason not to do something," or more specific cases like concrete feelings)?
Tanasije, you know, I'm not sure I know how to describe these as prototype effects for intentionality. In fact, my own guess is that the primary concept at play is a concept associated with intentionality, namely blame. I think when people assign blame, they're more likely to assign intentionality, and "morally bad" is part of the blame concept. It's been my suspicion all along that what happens in these experiments is that counterfactually mutating negative outcomes leads to blame, which in turn leads to inferences of intentionality.