A Little Help

If you've been reading this blog for a while, you might remember my old posts on moral psychology (I'm too lazy to look them up and link them right now, but if you really want to find them, I'll do it). Well, after I discussed that research with a couple of other psychologists who, it turns out, are as dissatisfied with it as I am, we decided to throw our hats into the moral psychology ring. Now, as people who study representation for a living, we all agree that the important part of moral decision making lies in how people represent moral situations, so that's how we're approaching it. We've got some definite ideas about certain aspects of representation, but I don't want to get into that now. If you're really curious, drop me an email, and we can talk about it.

Our first idea was to use the traditional moral dilemmas, because they're what most people use in research on adult moral judgment these days. But then we remembered that we don't like that research, and a big part of why we don't like it is that the moral dilemmas suck. In case you don't know what the traditional moral dilemmas are, I'll give them to you real quick. There are two of them, and most studies involve contrasting people's decisions on the two. The first is the trolley problem:

A trolley is running out of control down a track. In its path are 5 people who have been tied to the track by a mad philosopher. Fortunately, you can flip a switch which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch?

I stole that wording from the Wikipedia entry on the trolley problem, but that's basically the one that most researchers use. When presented with this problem, most people say yes, you should flip the switch. It's an OK problem, even though it's outlandish. I mean, how many of us really have any idea how to switch a train's course (unless we suddenly find ourselves in the Old West and can use our knowledge of train tracks from westerns), or could ever imagine being in a position to do so? Hold your hands high, so that I can count them. None? OK.

The second moral dilemma is the footbridge problem (again from the Wikipedia page):

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you - your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

Most people say no, you shouldn't proceed. Philosophers and psychologists draw all sorts of conclusions from this, when they contrast it with people's answers to the trolley problem, but we don't think they should. I mean, the footbridge problem is pretty damn silly. Who really believes that throwing a person in front of a train, regardless of how heavy that person is, is going to stop it? Cars don't stop trains! But even if we set aside the absurdity of the problem itself, it's simply not alignable with the trolley problem. For example, if we suspend disbelief for a moment, and run with the assumption that a large person really can stop the train, large people would be faced with a separate dilemma: instead of throwing this guy next to them onto the tracks, shouldn't they throw themselves onto the tracks? You don't get any purely altruistic options in the trolley problem, but the footbridge problem, if people can get past its physical impossibility, definitely has one.

So we threw out the idea of using these moral dilemmas altogether. Then there's another problem with traditional moral dilemmas, almost all of which are drawn from recent analytic philosophy: they're all extreme situations (sacrificing one life for many, or one life for a large sum of money, to take a couple of examples), which may make it difficult for participants to really place themselves in the situation. This makes the ecological validity of any conclusions drawn from research with these problems suspect (at best). So we're trying to find more mundane situations that involve moral dilemmas, that are alignable (in the sense that the differences between the two involve relations that are present in both -- I'll give you an example in a moment), and that induce people to make utilitarian or "principled" judgments in one case (as in the trolley problem, where people sacrifice one person to save five) but not in the other (as in people refusing to sacrifice the one guy in the footbridge problem, and thus letting the five die).

As I've talked about these problems, I've become more impressed with two of Joshua Knobe's scenarios (they're not moral dilemmas), because they are perfectly alignable. In case you don't know Knobe's work, here are two of his scenarios:

Scenario 1: The vice-president of a company went to the chairman of the board and said, 'We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.'

The chairman of the board answered, 'I don't care at all about harming the environment. I just want to make as much profit as I can. Let's start the new program.'

They started the new program. Sure enough, the environment was harmed.

Scenario 2: The vice-president of a company went to the chairman of the board and said, 'We are thinking of starting a new program. It will help us increase profits, and it will also help the environment.'

The chairman of the board answered, 'I don't care at all about helping the environment. I just want to make as much profit as I can. Let's start the new program.'

They started the new program. Sure enough, the environment was helped.

Notice how the only difference between the two is whether the outcome is good or bad? In other words, the difference is on a dimension (or in a relation) that is common to both scenarios. They're perfectly alignable, which makes them easy to contrast. That's the sort of alignability we need.

Anyway, I've told you all of this because we're stuck. We're having trouble coming up with relatively mundane moral dilemmas (ones that people might actually face once or twice in their lives, or at least believe that they could possibly face one day) that have the features I mentioned above. So I'm looking for help. You folks are creative. Maybe you can come up with some ideas.


Maybe a euthanasia case? Say, contrasting the case of a button that will turn off life support vs. a button that will deliver a lethal concoction. (Keeping constant the background conditions of a fatal and debilitating disease that the patient no longer wishes to fight.) Will people be more likely to support the act of button-pressing in the first case than in the second?

Or you could come up with a "Robin Hood" scenario which trades off property rights vs. human welfare.

The moral dilemma is the way you have posed the problem.

People cannot properly intuit extreme probabilities. I doubt that you can fool them into applying their normal moral calculus to absurdly improbable situations. They will play a joke on you, or attempt to calculate whether you will pay them with cash or grade-points.

They can decide whether to sleep with the neighbour's husband, or steal a pencil from work. But how are you going to isolate the decision? Entire novels are written trying to provide details sufficient to clarify a simple seduction. "Work" includes the boss, other corporations, the charity donation in the newspaper, the pollution exposé on NPR last night.

I'm inclined to think that the reason you're having difficulty with thinking up relatively mundane moral dilemmas is that moral dilemmas play very little role in our everyday moral life. Arguably our most common moral problem is akrasia; and that is not a dilemma but something else entirely. Dilemmas are just not a big part of ordinary moral life.

But Sartre has a famous dilemma that's relatively mundane and might be revisable for your purposes: A student has a brother who was killed by the invading Germans in 1940; because of this he wants to become a soldier in order to avenge his brother's death and fight the invaders, whom he regards as evil. However, he lives with his mother, who now has no family, and no consolation, but him. On Sartre's interpretation, the student is torn between two different moral structures, or kinds of morality: one of limited scope but high efficacy, namely, personal devotion to parents; and another of wide scope but limited efficacy, namely, defeating an unjust invader. (Obviously, it need not be even as extreme as it is here; the dilemma is between feeling called to make some small contribution to a large-scale moral good, like society at large, and feeling called to make some large contribution to a small-scale moral good, like family.)

If you want more mundane moral dilemmas than you could possibly need, I suggest watching the complete seasons 1-4 of "Curb Your Enthusiasm." There is no one better than Larry David when it comes to moral reasoning at this level of analysis.

(Seriously)

Scenario 1:
You are thinking about buying your favorite group's new $15 CD when you find that you can illegally download it for free on the internet. You love the new music and tell a friend who was also thinking about buying it. Your praise of the new music leads your friend to buy the CD.

Scenario 2:
You are thinking about buying your favorite group's new $15 CD when you find that you can illegally download it for free on the internet. You love the new music and tell a friend who was also thinking about buying it. Your praise of the new music leads your friend to illegally download the music.

By Joel Schneider on 03 Feb 2007

Scenario 1:

Someone in your company offers coffee in the mornings based on the honor system. Every cup of coffee is worth 50 cents, which precisely covers its cost (i.e., the person makes no profit). People pay by putting the money in a box next to the coffee machine. One morning you are really craving your coffee but discover you have no change, so you take a coffee without paying for it.

Scenario 2:

Someone in your company offers coffee in the mornings based on the honor system. Every cup of coffee is worth 50 cents, which precisely covers its cost (i.e., the person makes no profit). People pay by putting the money in a box next to the coffee machine. One morning you discover you really need two quarters for bus fare, so (without taking any coffee) you take 50 cents out of the box.

--

My intuition is that Scenario 2 is somehow more immoral, even though either way you're stealing 50 cents from the coffee person. Is it because of the same active vs. passive distinction as in the trolley car problems? Or maybe other people don't have the same intuition?

Another interesting thing about this set of scenarios is that you could explore if intuitions change when you change the cost of the item. Is a fifty cent cup of coffee treated the same way as, say, a ten dollar CD, or something even more expensive?

By Amy Perfors on 03 Feb 2007

Oh, I thought of another pair that's even more directly analogous to the trolley car problem:

Scenario 1:

You're camping overnight in a campground alone in your single-person tent, when it starts to rain hard. You notice that the three people in campsite A next to you don't have a tent and are getting drenched. So is the person in the nearby campsite B. You have a tent in the trunk of your car that can fit three people at the absolute maximum. Should you offer the three-person tent to the people at campsite A?

Scenario 2:

You're camping overnight in a campground alone in your single-person tent, when it starts to rain hard. You notice that the three people in campsite A next to you don't have a tent and are getting drenched. The person in campsite B, however, seems to be the only person in a spacious tent that will hold a maximum of three people. Should you offer the three-person tent to the people at campsite A?

-----

This is almost ridiculous, because who would seriously offer the tent in Scenario 2? Yet the two scenarios are directly analogous in their consequences -- either way the person in campsite B gets drenched -- and the only way they differ is in the initial circumstance of the person in campsite B, just like the trolley problem. So perhaps this might be interesting.

By Amy Perfors on 03 Feb 2007

Sorry not to be of much help but I think that the whole idea of bringing "rationality" to moral dilemmas is futile and dangerous.
This is the typical legacy of the Greeks who "invented" logic for this very purpose.
Yet it does not make any sense because our actual decision making (in moral matters as well as in anything else) is NOT rational.
I don't remember having seen any reference to George Ainslie ( http://www.picoeconomics.com/about.htm ) at scienceblogs or at mixingmemory, and Google doesn't recall any either.
But Ainslie's thesis that we use hyperbolic discounting ( http://www.picoeconomics.com/breakdown.htm ) to evaluate distant rewards, instead of the "rational" exponential discounting, casts serious doubts on our ability to EVER come up with consistent judgements on any matter.
That may be the real cause of most occurrences of akrasia, which are always noticed ex post facto.
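To make the contrast concrete, here's a minimal sketch (in Python, with made-up amounts and parameter values, not Ainslie's) of why hyperbolic discounting produces the preference reversals that exponential discounting rules out:

```python
import math

def exponential_value(amount, delay, rate=0.5):
    """Classical "rational" discounting: value decays at a constant rate."""
    return amount * math.exp(-rate * delay)

def hyperbolic_value(amount, delay, k=0.5):
    """Ainslie-style hyperbolic discounting: steep near the present, shallow far away."""
    return amount / (1 + k * delay)

# Smaller-sooner reward: 50 at time t; larger-later reward: 100 at time t + 10.
for t in (30, 0):
    prefers_later_h = hyperbolic_value(100, t + 10) > hyperbolic_value(50, t)
    prefers_later_e = exponential_value(100, t + 10) > exponential_value(50, t)
    print(f"t={t:2d}  hyperbolic prefers {'later' if prefers_later_h else 'sooner'}, "
          f"exponential prefers {'later' if prefers_later_e else 'sooner'}")
```

Viewed from far away (t = 30) the hyperbolic curve favours the larger-later reward, but as the smaller reward gets close (t = 0) the preference flips; the exponential curve gives the same answer at both times, which is why it never produces the kind of inconsistency Ainslie is talking about.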
AND...
Trying to enforce "rationality" upon our decisions may lead to severe psychiatric problems:

Intertemporal bargaining also predicts four serious side effects of willpower: A choice may become more valuable as a precedent than as an event in itself, making people legalistic; signs that predict lapses should become self-confirming, leading to failures of will so intractable that they seem like symptoms of disease; there is motivation not to recognize lapses, which might create an underworld much like the Freudian unconscious; and concrete personal rules should recruit motivation better than subtle ones, a difference which could impair the ability of will-based strategies to exploit emotional rewards.

Thus your very endeavour may bring more harm than good.
Comments are welcome about other articles ( http://www.picoeconomics.com/articles.htm ) from Ainslie, too.

(Why does the silly anti-spam filter reject named links???)

By Kevembuangga on 04 Feb 2007

It occurs to me that, in terms of mundane moral dilemmas, my choices are wildly inconsistent. I would respond differently to the same scenario (I think I would anyway) on different occasions.

My instincts, I guess, would conclude that the vast majority of the choices available to me over which I have some sort of binary power are pretty much inconsequential. And in those cases where I'm confronted with a binary choice, the utility is almost impossible to calculate. "Should I shove this fat guy onto the tracks? Hmmmm. Let me fire up my blackberry and check out the game theory section of wikipedia."

So, for guideposts, I'm left with principle or some sort of approximately random decision making.

By Tim Sullivan on 05 Feb 2007

I think it would be helpful to lay out the variables involved here, so that they can be tweaked one at a time. So, we might present the variables in Knobe's scenarios as: projected outcome of the program; goal of the actor; action by the actor; actual outcome.

We can see here that, according to those variables, it's not true that Knobe's examples differ only in their outcome: they also differ in the expected outcome of the program. So perhaps the correct variables are: expected outcome; actor's goal; actor's action; whether actor's expectations were correct. In this model, it's true that Knobe's scenarios differ only on one axis, that of the expected outcome.

My point, I suppose, is that we need to know the relevant variables before coming up with other good scenarios.
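One hypothetical way to make those variables explicit, so they can be tweaked one at a time (the field names here are my own labels, not Knobe's), is to encode each scenario as a small record and check which fields differ:

```python
from dataclasses import dataclass, fields

@dataclass
class Scenario:
    expected_outcome: str      # what the actor is told the side effect will be
    actor_goal: str            # what the actor says they care about
    actor_action: str          # what the actor does
    expectation_correct: bool  # did the side effect occur as predicted?

knobe_harm = Scenario("harm environment", "maximize profit", "start program", True)
knobe_help = Scenario("help environment", "maximize profit", "start program", True)

# Which fields differ between the two scenarios?
differing = [f.name for f in fields(Scenario)
             if getattr(knobe_harm, f.name) != getattr(knobe_help, f.name)]
print(differing)  # ['expected_outcome']
```

On this encoding, Knobe's pair differs on exactly one field, which is what makes the scenarios alignable; a new pair of mundane dilemmas would ideally pass the same one-field test.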

Definitely interested in the continuance of the Moral Psychology series of posts; have found II but no III et seq. Do they exist?

Found it.

Here's a topical dilemma, OIF style:

You're in a firefight in Iraq, and have just seen a shooter pop out from behind cover to fire at something off to the side, and he doesn't see you. He's quite distinctive, and you recognize him as a high-value target, known to have been responsible for multiple deaths among your buddies with IEDs, etc. Killing him will almost surely save lives of more buddies in the fairly near future.
But just in front of him are a few (true) civilians -- say, women huddled behind inadequate cover -- and you will have to fire a burst of automatic fire through their position to have a chance of hitting your target.

Shoot, or not? ROE say "no". But you REALLY hate and fear this SOB.

You have an estimated 1.56 seconds to decide.
~~~~~~~~~~
So, dilemmas do exist, and happen daily. Just not in "average life". It's probably reasonable to guess/assume that the patterns and routines of "average life" are arranged and evolved to minimize such dilemmas, in fact. They are very disturbing and disruptive, and their outcomes are nastily unpredictable. Indeed, a dilemma is almost by definition a breakdown of routine rules and average assumptions. So most of us probably go years without hitting more than a few trivial cases.

But in exceptional circumstances, like war, they may be as common as dirt and bullets.
