Means vs. Ends & morality

A few days ago Alex Palazzo posted about Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. The title is pretty self-explanatory. The author has a Moral Sense Test that you can take. I took it. If you plan on taking the Moral Sense Test, please click now before you read further and get "spoilers."

Back? Below is my summary; you can compare it to yours.

The scenarios you judged in this test pit means against ends, which is a common philosophical contrast. Each of the characters must choose whether to use bad means to achieve good ends -- for instance, whether to harm a single person in order to help many others. The statistic provided is an indication of the choices you made about means versus ends. The closer it is to 1, the more heavily you appeared to weigh means (the rights of one); the closer it is to 7, the more heavily you appeared to weigh ends (the benefit of many). Your statistic is 3.5. So far, the average statistic for subjects on this test is 3.9. It is important to realize that this statistic is merely provided for your own interest. The MST researchers make no claims about its validity as a psychological measure, nor do we make any claims about what choices are right or wrong. If you refer other people to this test we ask that you do not describe this statistic or its derivation so that they may complete the test with an open mind. Thank you for your participation.

This makes a little sense, as my libertarianish tendencies skew mildly toward means as opposed to ends. Reading Anarchy, State, and Utopia will do that to you....

I scored a 4.5, but the scenarios were just too full of false dichotomies for my taste. The two options they give for each one just aren't the only options available. I understand the point is to make you choose, but it just doesn't seem to work right.

A very interesting research study: thanks for the link.

It's also interesting to me that the study directions say "we ask that you do not describe this statistic or its derivation so that they [other test takers] may complete the test with an open mind." So does this mean that your not following their instruction by posting a description of the statistic could skew their results? Or not, I guess, if people reading your post follow your instruction to take the test before reading your summary? Could that make for an interesting little test question in itself, say "Is it OK not to follow study instructions, provided you give instructions to others such that if they follow your instructions your not following the study instructions will have no effect?" Or are such instructions as "do not describe this statistic" so generally useless that they can't be depended upon when evaluating the study's validity? Obviously, I don't know a lot about ensuring validity in internet studies.

3.0. Hmm, I'm not a Libertarian.

By somnilista, FCD (not verified) on 11 Sep 2006 #permalink

I scored 2.2.

I chose the second option for all of the questions (next to "not bad at all") except the one about the cable hanging outside the van (which was bad because he was stupid enough to leave those cables out).

I scored a 2.

But there is a reason. Part of it was that as the test went on, I chose more and more for not punishing.

I really think the test missed something, particularly when it claims that it was testing means vs. ends. Because what it was really testing through the questions was whether the actor should be punished for making a different moral choice.

My recognition early on was that these were hard moral choices, and that I would not punish somebody for making a different, difficult moral choice than I would.

I read about a similar study to this, but the goal was to see what sort of actions people feel are wrong. For example, very few people were willing to push a man in front of a train to save five people, but they were willing to flip a switch that would send the train to a track where it would hit only one person instead of five.

By Rob Cozzens (not verified) on 11 Sep 2006 #permalink

I went consistently with assigning blame. But there's no contradiction in saying a person was responsible for the death of someone else while also acknowledging that they had no better option, so I don't think the result is particularly meaningful (at least in my case).

"what it was really testing through the questions was whether the actor should be punished for making a different moral choice."

Interesting. I don't recall any suggestion of punishment. The instructions I had talked about responsibility...and I just retried the test under different demographic data and got 'how bad a person is...?'. I think that different people get different questions...and I would answer the questions I got the second time differently from the ones I got the first time...

By Christopher Gwyn (not verified) on 11 Sep 2006 #permalink

I scored a 1, as I thought in each case that the guy made the right decision to save 5 by sacrificing 1, and as such was blameless...

Seems that Matt would have scored this oppositely?!

pconroy - Yeah, I got a 7. The funny thing is that we agree that in each case the right decision was made, given the assumptions of the questions. It is the guy's fault that the one person died, but on net he achieved the best result.

I realised that I was more lenient with murders for the sake of the children. Apparently, to me, the five children in a burning house are a more important end than the five railroad workers hit by a box car. I wonder if this behaviour correlates with the number of children the test taker has. I have three.

I realised that I was more lenient with murders for the sake of the children.

i was too. children have long potential lives.

Christopher Gwyn wrote:


Interesting. I don't recall any suggestion of punishment. The instructions I had talked about responsibility . . .

You're right, it didn't mention punishment. My bad. It asked how much blame each person bore.

But my point is still valid. I was not about to blame somebody for making a hard moral choice, even if it was not the choice that I would have made myself under the same circumstance. And, in my opinion, that totally invalidated the test.

I'm pretty suspicious of the attempt to find innate ethics. I'm willing to grant that there's something there, but what you have is a hodgepodge of miscellaneous principles which often conflict with one another. And any look, not just at actual ethical practice, but at the history of ideal ethical teachings, finds enormous changes and cultural variations.

There really have been enormous ethical changes in history, and I think that biology will only illuminate them a little. For example, I'm sure there's an "honor killing" site in the brain, but in our society we need to repress it (in ourselves and others). Same for other forms of feud, vendetta, and revenge.

I guess I'm grumbly today, but I don't like that kind of test. I know that these are supposed to be thought experiments intended to get us to clarify our values (usually by finding out how utilitarian we are), but this kind of very artificial hypothetical strikes me as malicious. You could pump it up as high as you want: "Lex Luthor will destroy civilization unless you agree to take a dull knife and dismember an innocent child one joint at a time. What do you do?"

My conclusion is that sometimes, in very rare circumstances, all of your ethical choices are bad -- ethically speaking, you're fucked. ("Would you kill your mother or your wife, if you had to choose?") There's no particular point in preparing yourself for those hopeless situations.

Bertolt Brecht built a play (Mother Courage) around this kind of choice, and his motive was the justification of Communist "ethical realism".

There are less rare circumstances where, for example, there is clearly a good ethical choice to be made, but it's a very painful one. That's where ethics does its work, if it ever does.

By John Emerson (not verified) on 12 Sep 2006 #permalink

The test also is a flagrant use of utilitarian calculus, asking whether the "greatest good for the greatest number" is to be achieved. Under these circumstances, it would be difficult not to choose in favor of saving the many over the one; it's a calculus of maximal benefit. However, benefiting the greatest number is not necessarily the "moral thing to do." But given untenable situations that are dichotomously dire, it would appear that the best option available is to save many rather than one.

But what if you and the one were of the same ethnic and racial background, perhaps related, went to church and school together, and the one were known to be an immensely successful scientist, whereas the five were of a different ethnic and racial background, unrelated, and of dubious or unknown value to society? Would you choose the five unknowns over the one known? Adding this "color" changes the dynamic immensely.

If I read that a Harvard research study found support for utilitarian ethics, though, I'd laugh my head off (despite its probability). By adding my complexity, however, you will have achieved (1) the way most philosophers regard the problem, and (2) reciprocal altruism's insight that our concern for those we are closer to is higher than for strangers. In other words, context often makes the calculus of the "greatest number" irrelevant.

It seems to me that one scenario, the burning building, had an extra degree of complexity. In it, the actor is also saving himself by sacrificing the 1. Is it more wrong because the actions of the actor could be explained by selfishness?

The questions I looked at all described complex situations, where a lesser-evil choice occurred essentially by coincidence.

A more realistic ethical choice is something like "In the midst of negotiating a business deal, you find that your company is committing fraud. There will be a big payoff for you if the fraud succeeds, but otherwise not. If you don't get the money, you won't be able to send your son to a good school." Here it isn't a coincidental dilemma. If the money goes wrongly to you, you benefit; if the victim keeps the money, he benefits. In other words, what is in question is just one sum of money that can go one of two ways -- not the coincidence that some completely unrelated people will happen to be hurt because they're standing in the wrong place.

To Gay Species: Precisely my problem when taking the test. Although I have heard the researcher interviewed (On Point, NPR) and understand what they were seeking to learn, I was uncomfortable with the utilitarian calculus as well, as it is often fundamentally immoral, reducing humans to objects and deciding morality based on quantity. That is supremely rational, but reason isn't necessarily moral. That could be my own post-utilitarian bias (that is, I've read a lot of critiques of it, and I happen to fall more into the Thomas Jefferson school of social ideals for happiness: if we are all equally valuable, there is no calculus to decide which one(s) deserve to live).

Just FYI, from what I gathered in the interview, what they are testing for is actually something different. They were trying to see if people make a distinction between an "action" that causes harm and an "omission" that causes harm; and as a corollary, whether there is an issue with distance (is it more difficult to kill a person with your own hands right next to you than it is to kill many people at a distance?).

I think the utilitarian frame is a spandrel ;)

A building is on fire. In one room is a small child. In a second room there are a dozen viable frozen fetuses. In a third room there are a thousand mice having continuous orgasms as part of a psychology experiment.

You control the sprinkler system from a distance, and it can only save one room. There's no time for you or anyone else to do anything else. What do you do?

By John Emerson (not verified) on 12 Sep 2006 #permalink

What do you do?

Grab the Jenna Jameson DVD that the mice are watching.

...by the way, this "moral intuition" test is rather stupid as it doesn't actually present real world scenarios. Some kinds of things, like physics, are useful to test in very unrealistic situations (such as on the moon). Other things are not.

Well, GC, if you interfere with the million mouse orgasms, you better have a million-mouse-power orgasm yourself, or else the God of Utility will get radical on your groin area. That's one hell of a lot of utils there.

It would be a mouse-porn tape anyway.

By John Emerson (not verified) on 13 Sep 2006 #permalink