The Moral Mind

There's an odd article in the NY Times today on Marc Hauser's hypothesis that the human mind contains a "moral grammar," somewhat akin to a Chomskyan linguistic grammar. The article is odd because, while it acknowledges that Hauser's idea is supported by almost no direct evidence, it never mentions any alternatives to Hauser's theory. (If you're going to write about a tentative hypothesis, you should at least mention that other hypotheses exist.) If you only read this article, you'd assume Hauser was the first person to argue that human morality is an evolved, biological trait, and that his moral grammar is the only biological approach to morality.

So what are some other alternatives? I think the most promising hypothesis is also the simplest: humans make moral decisions using the same cortical machinery that we use to make every decision. This means that you don't need a specialized moral grammar to account for our odd moral instincts, since our moral instincts are directly derived from our more generalized decision-making instincts. Joshua Greene has done some of the pioneering work in the field. Greene argues that many of our moral decisions - just like all of our other decisions - result from the competition between our emotional limbic system (centered in the amygdala and dopaminergic midbrain) and a more "rational", deliberate, cognitive system located in the frontal cortex. Whenever we are confronted with a dilemma, these distinct systems compete for control. The system with the most activity - the feeling we experience most intensely - is the one that ultimately decides what we do.

According to this hypothesis, there is nothing unique about morality. It wasn't handed down by God, and it isn't derived from some unique mental grammar. From the perspective of our brain, moral decisions are nothing special. They are just another type of decision, subject to the same biases and instincts that distort all of our decisions. This is our original sin: that there is nothing special about sin. We decide whether or not to kill somebody using the same brain regions that also decide whether or not to buy a pack of gum, or eat an apple.


Not to mention the fact that talking about "moral grammar" begs the question. The hypothesis is that there is some innate grammar that controls moral decisions. The question-begging assumption is that there is something special about the decisions that are classified as moral.

Yes, I like this idea. Very blasphemous. Maybe we need religion just so that we think our moral decisions are different from our other decisions, even if they aren't.

Excuse my CS hat, but the term 'grammar' is wrong here imho. Your 'machinery' is more appropriate. A grammar only tells whether a given sentence is in the language; it doesn't describe how to act on it.

the main empirical factor behind hauser's hypothesis is the universality of the proposed moral principles: he tested widely among differing cultures, and yet found the same pattern of moral decision making. the fact you alluded to, that humans use the same brain machinery (cognitive vs affective) for moral decisions as for other decisions, doesn't mean that there are no specific, identifiable computational principles for moral choices; they're both explanations at different levels.

He still begs the question because he decides what is a moral decision. The use of the term "moral" contaminates the entire hypothesis. He would be better served to remove his own prejudices and establish a definition for what he is seeking that is without judgemental connotations. If he cannot define "moral decisions" in a scientific way without the use of judgemental terms, then what he is doing is not science. I suspect fuzzy thinking at best.

mark, you don't have to define 'life' to fruitfully discover how DNA works - that's what francis crick once said; same goes for research on consciousness, morality, and other fuzzy, ill-defined areas. i'm not defending hauser, but i think most people agree that the kind of thought experiments he's conducting are best described as moral experiments. after all, he's just trying to find some principles that may underlie a particular type of decision making (moral, that is). defining which is moral and which is amoral, in my opinion, is a task best left to philosophers. i'm interested to know: what would your version of moral science look like, if any?

Jonah,

I don't agree with you at all. You make the assumption that any being with a brain would automatically (and some would say magically) have a moral sense. Why would you think that? Would a computer (which was sufficiently sophisticated) automatically have a moral sense? All of our behavioral tendencies are strongly shaped by natural selection, including when we choose to reward and punish, how we think of cheaters, when to be selfish, when to be self-sacrificing, and when to build alliances. Perhaps you would like to say that we have evolved an instinct for economics, or for politics ... but I would argue that this is just semantics; in the end, all of our behavioral tendencies are due to natural selection. But just like the eye and hox genes, our political/economic/social/moral instinct has quirks. These oddities are due to the way in which these mental predispositions evolved ... and this is what is suggested by the body of scientific knowledge that is summarized in the book.

Alex: thanks for your comment, but you missed my point. Of course our morality is shaped by natural selection. What isn't? But I don't think our morality was shaped by different selective pressures than have shaped our neural decision making system in general. If you look at the brain, there is no evidence that we recruit specialized neural machinery (or a specialized neural grammar, to use Hauser's terminology) when making moral decisions. Instead, natural selection gave us an all-purpose decision making system in the brain. The same moral inconsistencies that Hauser celebrates as evidence of his specialized moral grammar can also be seen in all sorts of other non-moral situations. As many neuroeconomists and neuroscientists believe, these inconsistencies result from a competition between our emotional intuitions and our more "rational" PFC and DLPFC. My main problem with arguing for a unique moral grammar (apart from the paucity of neurological evidence) is that it would imply that we also have a specialized grammar for economic decisions, and philosophical decisions, and political decisions, etc. But we don't: evolution only gave us a single decision making system. This system relies on the same biases and heuristics regardless of what the decision actually concerns.

I have been conditioned to doubt dualistic models. I suspect it would be better to say that emotions and cognition cooperate rather than compete, but cooperation in some situations is smoother than in others. And if unity isn't the answer, why wouldn't there be more than two systems to contribute to a solution? In the link you give to Joshua Greene, he describes activity in the cingulate cortex when someone has to make a difficult moral decision. So at least in that respect there is a different process for a moral decision than just any decision. How many more differences might there be?

Doesn't there have to be some difference between a decision with no emotional or moral consequence, like the order in which I eat my food, vs. something that is either emotionally or morally charged or both? Wouldn't our experience of them be more alike if they were alike? Of course there is one overall system that manages all that - our brain, but Greene already has shown at least three components to that system. I would guess there's more. From living my life making decisions, I'm sure it's not a simple calculation.

Discussions of innate morality are so abstract that I find it helpful to keep a principle in mind to test what is being said, one that I suspect does come to us from natural selection: "It's wrong to hurt people." Culture teaches us various definitions for those words. What consequences come from being wrong? What does "hurt" mean? What are people? Do people include my dog, but not human beings I don't know? Do people include all animals, all vertebrates? Underneath all those cultural variations is this moral principle from evolution. Is that wrapped up in our emotions somehow, just as our sexual orientation may be? Or does it have some special location? Is it better to break down the cortex into more than two systems, to include ones that only get involved when there is emotional or moral consequence to an action, as opposed to some action that's purely existential? I suspect that's where a principle like "It's wrong to hurt people" is hiding, rather than in cortex used for our easiest decisions.

Maybe there's a better sentence to express that principle, but Greene's trolley dilemma points out that the principle exists. It's OK for me to turn a trolley to kill someone, saving 5. It's not OK for me to kill someone directly to do that. The first one doesn't violate this innate principle, because it wasn't me hurting someone. It was the trolley. My involvement was peripheral. The second one does violate the principle, somehow too wrong for me to do even if it would save 5 people. It's not everyone's principle, but it certainly doesn't seem to be a distinction people make up.

It's not that we're so illogical. One just has to understand our programming. Maybe in 100 years.

It may be that a "moral" question can be expressed in non-moralistic terms. That is to say, the question must be able to be defined in specific, unambiguous, nonjudgemental terms that include all such questions and exclude all others. If that be the case, then it might be possible to tell whether such questions are handled in a different part of the brain from other questions. Until that is settled, any investigation of where "moral" questions are processed is meaningless.

Doesn't there have to be some difference between a decision with no emotional or moral consequence, like the order in which I eat my food, vs. something that is either emotionally or morally charged or both?

Ah, so I take it that you're from a culture where the order in which food is eaten isn't deeply important. IT CAN BE. And that's why your objection is meaningless.

By Caledonian (not verified) on 01 Nov 2006

Not meaningless - or do you think Joshua Greene's observations in the cingulate cortex are meaningless?

So where in the brain is the difference behind the behavior of someone from a culture that attaches absolutely no significance to the order in which they eat and one that does? Do you think that's all cognitive? Where is that? How could an innate morality arise strictly from cognition? Aren't emotions necessary? Yet it can't be purely emotional either. So why are you so sure there isn't something else?

DavidD: You're incorrect about Greene's finding re: cingulate cortex. The cingulate is activated in numerous cognitive tasks that have an element of response conflict, such as the Stroop color-word task. No moral dilemmas are involved in Stroop responding, so the same neural machinery is used for both moral and non-moral decisions, as far as this brain region is concerned. This goes back to Jonah's original hypothesis:

I think the most promising hypothesis is also the simplest: humans make moral decisions using the same cortical machinery that we use to make every decision.

By The Neurocritic (not verified) on 03 Nov 2006
