Is Moral Psychology About Morals Or Their Function?

Quandaries such as whether to steal a drug to save a spouse's life, or whether to have an abortion, have historically dominated the study of the development of moral thinking. The predominant research programs in psychology today use dilemmas in which one choice is correct on deontological grounds (it is wrong to rotate a lever that will divert a train and kill one person instead of five) and the other is correct on consequentialist grounds (kill one person if it will save five others).

[Image: choice highway sign]

It is not surprising that psychologists have followed philosophers in proposing definitions for morality that are shaped by quandary-based ethics. In that vein, Turiel defined the moral domain as "prescriptive judgments of justice, rights, and welfare pertaining to how people ought to relate to each other." More recently, however, Haidt has argued that the study of moral psychology should not focus on the content of morality, but rather on the function of moral systems: "moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make social life possible." Haidt has also suggested that a comprehensive moral psychology must study the full array of psychological mechanisms that are "active in the moral lives of people of diverse cultures." Further, it should go beyond the neural and psychological systems that support moral reasoning, and show how psychological mechanisms and culture mutually influence each other.

For a skill as complex as moral reasoning, it is difficult to identify the aspects of the process that may be innate and therefore would not require any form of implicit or explicit instruction by cultural institutions (e.g., parents, schools, communities). Many such institutions (especially schools, secular and religious) champion virtue-based approaches to moral education and claim to provide "character education" or a "values education." Even major research universities make this claim. USC's mission statement reads: "We strive constantly for excellence in teaching knowledge and skills to our students, while at the same time helping them to acquire wisdom and insight, love of truth and beauty, moral discernment, understanding of self, and respect and appreciation for others."

How do people come to learn the ethical conventions of their communities? Is there evidence for universal moral systems? Is there any indication that the building blocks of morality can be found in human infants? In non-human animals? Find out over the course of the next week!

Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998–1002. PMID: 17510357


Nice. Looking forward to more posts on this subject.

By justlurking (not verified) on 17 Sep 2010 #permalink

Fascinating topic, anxiously awaiting more! Thanks.

By Nebularry (not verified) on 17 Sep 2010 #permalink

I find the stupid train-track 'dilemmas' and their ilk pointless. Word games like that are so far removed from actual real-world moral choices that, in my opinion, they do not engage the 'moral faculty' at all and instead invoke academic, legalistic thought processes and often essentially random choices. They provide little or no information about the putative subject or the actual choices people make.

I've always found the study of morality to be one of the more interesting things studied in the field of psychology. In particular, what I liked about Kohlberg when I first heard him discussed was that the action was less important than the rationale. Not as an excuse, but rather as a point of truly delineating higher and lower moral functioning: e.g., not stealing the drug out of fear of punishment vs. not stealing the drug, regardless of punishment, because it upholds a greater value by maintaining the order of society. The same sort of thinking of course applies to all of our systems of justice: when is it okay to steal? To kill? To have sex? Again, an interesting area of study. I too look forward to hearing more...

By Mike Olson (not verified) on 17 Sep 2010 #permalink

@3/anatman: You're right, these dilemmas are a bit contrived, but when you begin to study a complicated system it helps to break it down to its essentials, its constituent components. Most of what we know about vision is based on people looking at flashing checkerboard patterns, for example. In a sense, visual cognition and moral reasoning are not so different; both are complex mental systems in which some components are likely innate and some are acquired through associative learning.

Statements like this always puzzle me: "moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make social life possible." It may be that this Hobbesian view of human nature is the wrong way around. What we know (as well as we are able to say we "know" anything) is that 1) humans are social animals and 2) the social nature of humans is fundamental to the survival of the species. Isn't it possible that humans are naturally cooperative and it is selfishness that is a learned (cultural) behavior? The question is not insignificant, because how one answers it colors every aspect of one's moral reasoning. I am fascinated by the subject and look forward to this series. Thank you for such an interesting topic!

(PS-Have an easy fast. Absolutely no goulash.)

aidel: couldn't cooperation and conflict both be fundamental?

The purpose of philosophy is not to provide answers, which it patently cannot do. Only borderline cases aid understanding. Extreme quandaries only lead to infinite what-ifs (what if the one to be sacrificed is Hitler? The Pope?) akin to fantasising about screwing your sibling. Used properly, philosophy achieves clarity of thought through repeated passes which restate the question in ever less ambiguous terms. Any other use of philosophy is merely an elegant game played with words.

By Shadeburst (not verified) on 17 Sep 2010 #permalink

Jason: Absolutely! Conflict is fundamental for change/development of any sort. But I'm afraid that many things that appear to be conflicts are in fact confabulations. A little investigation and a big-picture/zoom lens are necessary to discern the difference. Otherwise you have sophisticated (in the Greek sense) language puzzles rather than significant issues -- a little like looking at the "heads" side of a penny, then looking at the "tails" side of a penny and thinking that there are two different coins.

The geek's answer to the train quandary is to throw the lever once to begin to divert the train, and then throw it back when the train is halfway across the switch, causing it to derail harmlessly. And no, it's not legitimate, upon hearing that answer, to introduce post-facto conditions that would rule it out.

These types of quandaries also suffer from the fatal flaw of failing to consider "not-doing" (heh, religion to the rescue, in this case Taoism and Buddhism).

Failing to throw the switch entails killing the train passengers: the not-doing is equivalent to doing, so the deontological result is the same as the consequentialist result. This applies across this entire set of problems.

---

Shadeburst at #9 is "not even wrong."

If you rule out philosophy as a means of providing the answers to moral quandaries, what do you rule in? What's left is religion (which can be seen as a subset of philosophy) and emotional reaction.

Would you rather elect a President who comes to moral conclusions as a result of philosophical reasoning or as a result of emotional reactions? Been there, done that, eight years of total fail culminating in an economic depression.

A: Individuals usually learn ethical conventions in conventional ways.
B: Evidence for 'universal moral systems' is the fact that pain is bad, well-being is good & freedom is good.
C: The evidence of moral 'building blocks' in infants is that they don't like pain & they like well-being.
C2: Many non-human animals, like pigs, have the moral capacity of four-year-old children; chimps & dolphins are more comparable to young adults.

@Toby Saunders

You're correct that most humans don't like pain, that they like "well-being", and that most of them think freedom is good. That doesn't really mean much, though.

Why is pain bad? Even if it damages someone, why is that bad?
Same question for well-being. Why is that a good thing? You could argue that it's good from the standpoint of continuing the existence of a particular person, but then why is continued existence a good thing?

Why is freedom good? Just because it works well for many societies doesn't mean that it's necessarily universally good. What if there were a society that, in exchange for perfect security, demanded perfect obedience? If someone makes the choice to live in that society, is that a bad thing?

Re. Heywood #13: Pain is bad because it's painful: this is not a tautology; it's an irreducible intrinsic based on the physical properties of your central nervous system. Pain, pleasure, free will and its opposite, and so on, are all hardwired in the brain. They are part of "human nature" because they are empirical facts about the nature of the organism.

That is not to say that we should seek to make life totally painless, because doing so would no doubt impair our Darwinian fitness. The utility of pain is as a negative feedback system to produce avoidance behavior toward stimuli that might be harmful. But beyond that utility, it becomes gratuitous, and seeking to inflict it becomes cruelty.

---

Here are a few more intrinsics & irreducibles for you:

Organisms seek to continue their own existence, and exceptions such as altruistic sacrifice don't disprove the generalization.

Organisms engage in behavior demonstrative of free will proportional to their intelligence (per findings on fruit flies, birds, and other species).

Organisms display approach behaviors toward pleasant stimuli and avoidance behaviors toward painful stimuli.

And there, in a nutshell, we have the "unalienable rights" (18th-century language for "intrinsic characteristics") of life, liberty, and the pursuit of happiness.

Is moral reasoning a "complex" skill? Is it even a skill? It's always seemed to me to just be nature's way of preventing you from freezing up when it would take too long to figure out what to do from first principles. Like John Donne, you're involved in mankind, so you don't just walk idly away from the lever, but on the other hand you don't have time to research whether the one potential victim is Einstein or the five potential victims are The Simpsons. You just go with what's been dinned into you.

Skill, on the other hand, would be what you would be exercising if you did sit down and try to work out the answer from first principles.

By Ian Kemmish (not verified) on 20 Sep 2010 #permalink

I am interested in this topic, but to my knowledge these questions are still under intensive investigation and hotly debated in moral psychology.
Personally, I think Haidt's account is the most convincing; however, more empirical evidence is needed.

@9 "Any other use of philosophy is merely an elegant game played with words."

I thought that was the only use for philosophy. "I have just proved motion to be impossible!" cried Zeno as he ran down the street.

Thank you. I'm looking forward to more on this subject.

@17... Yes, but he only ran halfway down the street, then half that distance down the street, then half that distance down the street, then half of that half of that half... ad infinitum, never actually getting down the street!

By Mike Olson (not verified) on 23 Sep 2010 #permalink

If language turns out to be an incredibly complex algorithm we haven't solved yet, would that justify philosophical reasoning? Or would it simply render answers as one person's favorite algorithm as opposed to another's? Or given such a language algorithm, could we solve complex moral issues? Or is this just a great example of how philosophy chases its own tail?

By Mike Olson (not verified) on 23 Sep 2010 #permalink

Aidel:

What we know (as well as we are able to say we "know" anything) is that 1) humans are social animals and 2) the social nature of humans is fundamental to the survival of the species. Isn't it possible that humans are naturally cooperative and it is selfishness that is a learned (cultural) behavior?

That seems highly unlikely to me. Selfishness has too much survival value in too many circumstances, and it seems to be a strong default for humans.

Like Jason, though, I think we're "fundamentally both." Cooperation in some form or other has been advantageous for our species and various precursor species for many, many millions of years.

Even seemingly "simple" animals such as fish are often "both"---they have a balance of selfish drives and some sense of fairness or equity. (See this comment and especially the next one in the proto-fairness thread.)

IIRC, some animals also exhibit what's called "facultative behavior," where their degree of selfish or socially responsible behavior is influenced by the environment they grow up in. If they grow up in a relatively coherent group with well-enforced norms, they learn to be more socially responsible. If they grow up in a more chaotic environment, where there's more advantage to "taking care of #1" and less advantage to cooperating, they're more biased toward ruthless, exploitative selfishness.

(Unfortunately, I can't recall the animals or studies in question, and I don't recall whether they convincingly showed a developmental trigger for extra selfishness, selected for by evolution, as opposed to just a failure of normal social maturation that wasn't evolved in by selection for that, but wasn't evolved out either.)

You can speculate a lot about connections to things like human sociopathy here---the genetic capacity for sociopathy might be evolved in as a facultative behavior, and be endemic at a low genetic frequency in the population---but I don't think anybody really knows.

By Paul W., OM (not verified) on 23 Sep 2010 #permalink

Mike Olson:

If language turns out to be an incredibly complex algorithm we haven't solved yet, would that justify philosophical reasoning?

Huh? The question sounds not even wrong. Language isn't an algorithm. It's more like a communication protocol schema, implemented by something like algorithms, which learn specific communications protocols (languages).

And that has little to do, directly, with the validity of what's being communicated. To even frame a reasonable question about that, you need to discuss things like meaning, reference, and truth---e.g., does what's being communicated mean something, and mean something true about what's being referred to.

And I'm not sure what that has to do with "justifying" "philosophical reasoning" in general. Philosophical reasoning is something we inevitably do; some of it is justified, and some of it isn't.

Or would it simply render answers as one person's favorite algorithm as opposed to another's? Or given such a language algorithm, could we solve complex moral issues? Or is this just a great example of how philosophy chases its own tail?

I seem to detect some presuppositions and favored answers there.

Philosophy doesn't always chase its own tail. Some philosophy is really good. Too bad the bad philosophy tends to get more attention from the public at large, and even from non-philosophers in academia.

By Paul W., OM (not verified) on 23 Sep 2010 #permalink

g724:

We don't instinctively care about our Darwinian fitness---e.g., I prefer sex without the prospect of reproduction---and the idea that we should care about it is suspect; it seems to be conflating a lot of issues at different levels.

Evolutionary selection pressures and psychological motivations are related, but it's a subtle relation.

The utility of pain is as a negative feedback system to produce avoidance behavior toward stimuli that might be harmful. But beyond that utility, it becomes gratuitous, and seeking to inflict it becomes cruelty.

So? You haven't come close to addressing why cruelty is bad, or even addressing how the distinct pain of eating hot chili peppers can make eating more pleasant. (As per the recent "Culinary Masochism" thread.)

I think you're right that pain is importantly related to displeasure in certain systematic ways, but with some very interesting exceptions that demonstrate that there are multileveled subtleties here, and any greedily reductive account is wrong.

Likewise, I think you're also right that pleasure and displeasure are importantly related to inclusive fitness, but again any greedily reductive account is just wrong---what fundamentally motivates me is what evolution actually evolved into me under particular selection pressures in particular environments, and if that's not what actually promotes my inclusive fitness in my actual environment, I don't really care---and I morally shouldn't care.

Here are a few more intrinsics & irreducibles for you:

Organisms seek to continue their own existence, and exceptions such as altruistic sacrifice don't disprove the generalization.

As a universal claim, they clearly do disprove it, and besides, the generalization is wrong anyhow.

As a less general "generalization," they indicate that the subject is complicated and simplistic generalizations are missing something deep and important.

Organisms are, by and large, not psychologically motivated to continue their own existence, or to reproduce.

Most don't even have the concepts of death, or of the relationship between sex and reproduction. The ones that do have the concepts don't necessarily care in the obvious or direct ways.

E.g., I don't fear dying because I instinctively fear being dead---I don't have an instinctive concept of being dead. I fear dying largely because I realize that if I'm dead, I don't get to do a lot of things I want to do, which I am more directly and deeply (and in some rough sense "instinctively") motivated towards.

And crucially, sincere psychological altruism does in fact invalidate your generalization that organisms are motivated to continue their own existence.

If you don't realize that, you don't know what "altruism" means. It means being motivated to enhance or protect others' well being at the expense of one's own.

If you're arguing that there are no conflicts between selfish and unselfish motivations, you're just wrong.

And if you're arguing that unselfish motivations reduce to selfish ones---at a deep psychological level, not just in terms of inclusive fitness---I think that's wrong, too. There is no reason to think human psychology works that way, with everything being subservient to selfish drives, and there's good reason to think it doesn't.

Organisms engage in behavior demonstrative of free will proportional to their intelligence (per findings on fruit flies, birds, and other species).

Only if you define free will as something like "intelligent choice-making"---in which case it's something like tautological. That's usually not what discussions of "free will" are about.

For most discussions of choice-making, we can just talk about intelligence and "will" as mechanistic thinking and choosing, and the word "free" is superfluous and probably misleading. Often we reserve "free will" for situations where there's a moral aspect and a conflict of motivations. That's a much hairier subject than you seem to imply.

Organisms display approach behaviors toward pleasant stimuli and avoidance behaviors toward painful stimuli.

Yes and no.

And there, in a nutshell, we have the "unalienable rights" (18th-century language for "intrinsic characteristics") of life, liberty, and the pursuit of happiness.

Um, no---not even close. Your statements are simplistic in general, but the big problem is that you haven't explained altruism, and why we would (or "should") generalize from what we want to ourselves into wanting that for others.

If you think moral and political philosophy are that simple, the nutshell is an appropriate container.

By Paul W., OM (not verified) on 23 Sep 2010 #permalink