Values and facts

Shorter Sam Harris: FAQ :: How can you derive an "ought" from an "is"?:

No.

This is probably not quite how he'd shorter himself, but that's not the immediately important issue. The immediately important issue is that he thinks we can get from "is" to "ought" by saying that actions which produce the worst possible suffering for all sentient beings are bad actions, and that we ought to do something else. In other words, the "ought" Harris manages to derive (by various assumptions) is that we should not do the very worst thing possible.

As a reply to his critics, this goes nowhere special. Philosophically, I don't think it really cuts it. Even if we grant all his arguments, he hasn't gotten from "is" to "ought," but from "is" to "ought not." It strikes me that an "ought" should actually say what we "ought" to do.

Even that bit of the argument doesn't hold together. Harris argues not only that this "ought not" is a useful contribution; he insists that it is all that's needed: "All lesser ethical concerns and obligations follow from this." This strikes me as either demonstrably false or just useless. I don't know of anyone who thinks that their ethical system will tend to produce the most possible suffering for all sentient beings, nor of any system which outside observers claim would maximize suffering for all sentient beings. A guide to which moral values are right or wrong that cannot actually distinguish between any extant moral systems is fundamentally unhelpful.

Let's take a fairly standard extreme of a profoundly evil value system: Nazi Germany. Hitler did a lot of horrible things, slaughtering millions, waging unprovoked wars of aggression, wantonly attacking civilian centers with bombs and rockets, crafting a state policy dedicated to eradicating certain classes of people based on ethnicity, disability, sexuality, religious practice, etc. The suffering he caused was astonishing. But some people benefitted from his system -- not least Hitler himself, but more generally all healthy members of the Aryan race.

In short it is not clear to me that even Hitler violated Harris's injunction against causing "the worst possible misery for everyone" (emphasis from original, repeatedly), there being more misery Hitler could have wanted to impose. Indeed, someone who undertook to destroy all life on earth but for himself or herself would still not, by Harris's standard, be "wrong" (emphasis original). Only if this person proposed to torture and then kill (in a maximally painful manner) all sentient life including himself or herself (and whatever sentient life exists on other planets) could we call that action wrong. Forgive me if I'm nonplussed by this moral insight. Indeed, if the capacity of sentient life to suffer is upwardly unbounded (which is not immediately implausible), then value systems could never produce "the worst possible suffering," and one could argue that no value system would ever violate Harris's prohibition.
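To make the unboundedness point precise, here is a minimal formalization (my notation, not Harris's), where S stands for the set of attainable levels of aggregate suffering:

```latex
% If every attainable level of suffering can be exceeded, no maximum
% exists, and "the worst possible suffering" names no attainable state:
\forall s \in S,\ \exists s' \in S \ \text{such that}\ s' > s
\quad\Longrightarrow\quad \max(S)\ \text{does not exist.}
```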

Things would be better if he could generate some plausible metric for what we "ought" to do, so that we could identify peaks and valleys within a moral landscape. But doing that would involve making a value judgment. And that's where Harris's list of alleged "facts" really falls apart. Because most of it is deeply value-laden, and few if any of the claims are actually fact claims (as opposed to syllogistic logic or value-laden premises for waffly syllogisms).

Ophelia Benson has pointed some aspects of this out in Harris's comment thread and on her blog, noting for instance the fallaciously excluded middle in Harris's attempt to get from ought-not to ought. If we ought not create maximal suffering for everyone, he argues, then we should seek to increase well-being for everyone. As Ophelia observes, most moral systems:

lead to well-being for some people and misery for other people. It just isn't usually the case that cultural practice X leads to well-being for everyone or that cultural practice Y leads to misery for everyone. One of the things that cultural practices do is sort people and allot more well-being to some than to others.

Harris's contention (underlined in all its occurrences) is that our concern should be to avoid creating "the worst possible misery for everyone," where "everyone" is treated throughout as "all sentient beings." That's a big value judgment. Ophelia is dead-on in pointing out that most of the variation among moral systems is variation regarding who counts in terms of the moral calculus. "Do unto others..." is uncontroversial, but who counts among those "others"? Other members of your family (weighted by genetic similarity, perhaps)? Members of your racial group? Members of your nation? Residents in your town? Only those who share your gender? Only those who share your sexual orientation? Only humans? Only sentient beings? Only living things? Only physical entities (but not corporations or abstract ideas)? Only physical entities or conglomerates of physical entities? Only sentient beings or conglomerates of sentient beings (i.e., corporations)?

Any of those can be and has been defended by some group at some point. The Supreme Court during the Lochner era gave greater weight to the rights of corporations than to individual workers, a doctrine that the Roberts court seems intent on reviving. I think that's immoral, but I don't know if it causes the most suffering possible. I do know that corporations would not even be part of Harris's moral calculus. Which is also a fair choice, but not one motivated purely by empirical evidence.

Nor, troublingly, does Harris allow any inherent moral status to the natural world. Aldo Leopold argued convincingly for a "land ethic" which would bring the natural world into the system of ethical obligations we feel towards family and society more broadly. The issue, he argued, is not the life or wellbeing of individual deer on a mountain, but the integrity and wellbeing of the ecosystem as a whole. Killing off the wolves may leave a lot of happy deer on the mountain, but it ultimately causes overgrazing and degradation of the landscape. This doesn't extend the same rights to mountains as we would to people, and it also runs exactly counter to the ethics underlying the animal rights movement. For Leopold, the individual animals aren't what matters. Suffering is part of life (as the Buddhists say), and the important thing is to ensure the stability of the natural system itself. If that means hunting deer or weeding out some plants or reintroducing wolves (who can be crueler hunters than humans), then that's the right thing to do. Animal rights activists argue that animals have moral status and rights as individuals, including at minimum a right to their own lives, and generally also a right to individual agency (thus, not to be kept as pets or for purposes of labor or experimentation, let alone the harvesting of flesh, skin, eggs, honey, milk, etc.).

Leopold's land ethic is a foundation of modern environmental ethics, and a major factor in the growth of environmentalism in the 20th century. The closest one could come to wedging it into a Harrisian ethical framework would be to evaluate the ways in which environmental degradation contributes to the suffering of sentient beings. But the central ethical claim of a land ethic is that it is wrong to treat the natural world as a means to an end, that the integrity of natural systems is an end unto itself. Harris's system thus not only fails to account for environmental ethics (a nontrivial subset of the modern discourse on ethics), it is actively at odds with the principles of environmental ethics. I'd bet money that I could find similar examples from other fields (space exploration comes to mind, where a similar concern for non-interference in the natural state of other planets is a major topic of discussion). Animal rights would not fit uniformly into Harris's scheme, nor would phenomena like fruitarianism, in which not only animals but plants are extended certain moral rights. That these systems fall outside Harris's framework doesn't show that they are wrong; it just shows how narrow his own view of moral philosophy is. He's just bundling all his assumptions into a structure that he thinks he can pass off as scientific. One might think his goal is to impose this on others, but once you scratch the surface, he hasn't actually got anything to impose.

Let's briefly consider his nine claimed facts:

FACT #1: There are behaviors, intentions, cultural practices, etc. which potentially lead to the worst possible misery for everyone. There are also behaviors, intentions, cultural practices, etc. which do not, and which, in fact, lead to states of wellbeing for many sentient creatures, to the degree that wellbeing is possible in this universe.

Set aside that "the worst possible suffering" is a standard so absurd as to be meaningless. Focus instead on the choice to emphasize suffering and sentience. He does this because the ability to recognize pain is a mark of sentience, so he can claim that using suffering as a metric here is not an arbitrary value judgment. But the choice of sentience is still just such a value judgment, and not one which is uncontroversial. Again, it creates a potential direct conflict with environmental ethics (in which hunting and other killing of wild animals or plants can be not only acceptable but morally obligatory). Then note that Harris is treating wellbeing and suffering as opposites. What about people who get joy from suffering? Does this mean it's immoral to be a masochist? That sadists must be made to forego their own wellbeing -- which is enhanced by causing suffering -- in order to avoid diminishing the wellbeing of others? What about the unresolved problems with defining wellbeing?

This "fact" is simply not a fact claim. It is too deeply embedded in value judgments to be a fact on the order of a claim like "objects with mass exert an attractive force on one another."

FACT #2: While it may often be difficult in practice, distinguishing between these two sets is possible in principle.

Which "two sets"? What about the excluded middle, in which suffering is caused to some but the wellbeing of others is increased? This is not a fact claim, it is an attempted bit of logical deduction from the previous claim, and thus packages on top of its own logical fallacy all of the non-factual claims from the previous point.

FACT #3: Our "values" are ways of thinking about this domain of possibilities. If we value liberty, privacy, benevolence, dignity, freedom of expression, honesty, good manners, the right to own property, etc.--we value these things only in so far as we judge them to be part of the second set of factors conducive to (someone's) wellbeing.

The first sentence is at best an attempt at definition, is value-dependent itself, and is in any event not a matter of scientifically testable fact. Lots of people do value liberty, freedom, privacy, etc., etc. on their own merits, regardless of whether they enhance wellbeing in all cases. Consider the ACLU's defense of the free speech rights of Nazis to march in Skokie. The ACLU wasn't happy to have Nazis marching through Skokie, Skokians weren't happy to have Nazis march through their city, and I expect that the Nazis would have been just as happy to be able to make a ruckus over being censored as they were to be able to march. The ACLU defended them not because doing so enhanced anyone's immediate wellbeing, but because they regard free speech as an end unto itself. I support the ACLU's work because I share that value. To the extent Harris's first sentence is meant to be descriptive rather than normative (a description of how "value" is actually used, rather than a claim about how it ought to be used), it is simply false. If the claim is not an is but an ought, well, that undermines his claim to have gotten from is to ought (which he doesn't claim to do until "fact" 9).

FACT #4: Values, therefore, are (explicit or implicit) judgments about how the universe works and are themselves facts about our universe (i.e. states of the human brain). (Religious values, focusing on God's will or the law of karma, are no exception: the reason to respect God's will or the law of karma is to avoid the worst possible misery for many, most, or even all sentient beings).

The second word of this "fact" is misleading. Saying "therefore" suggests that there is some sort of logical necessity linking the prior statements to this one. No such logical necessity is obvious, if it exists at all. And even if it did exist, that would make this a deduction, not a fact.

FACT #5: It is possible to be confused or mistaken about how the universe works. It is, therefore, possible to have the wrong values (i.e. values which lead toward, rather than away from, the worst possible misery for everyone).

If we needed proof that "[i]t is possible to be confused or mistaken about how the universe works," we need look no farther than Mr. Harris. Alas, Harris ignores the category of "not even wrong," a category including untestable claims such as value judgments.

FACT #6: Given that the wellbeing of humans and animals must depend on states of the world and on states of their brains, and science represents our most systematic means of understanding these states, science can potentially help us avoid the worst possible misery for everyone.

That wellbeing "must depend on states of the world" has not been established here. The experience of suffering in life as we know it is the result of physical brain states (which is not an undisputed point, but Harris and I agree there and I don't want to quibble), so I suppose one could argue that suffering is empirically measurable. But it doesn't seem obvious that "wellbeing" is empirically measurable. Nor would I want to get too hung up on the details of brain state as the sole definition of suffering. Fetuses which do not yet have the capacity to feel pain, let alone to process it intellectually, still deserve some status in our moral calculus, as do people with severe brain damage that prevents them from recognizing pain, or with neurological disorders that prevent their nerves from transmitting pain signals. This also doesn't account for the feelings of sympathy we have for robots and other entities which we know haven't actually got an internal emotional state. Consider Mark Frauenfelder's encounter with the Pleo, an encounter which reminds Mark of a novel chapter anthologized by Doug Hofstadter and Dan Dennett, and which reminds me of an Army colonel's sympathetic defensiveness toward a mine-destroying robot:

At the Yuma Test Grounds in Arizona, the autonomous robot, 5 feet long and modeled on a stick-insect, strutted out for a live-fire test and worked beautifully... Every time it found a mine, blew it up and lost a limb, it picked itself up and readjusted to move forward on its remaining legs, continuing to clear a path through the minefield.

Finally it was down to one leg. Still, it pulled itself forward. ... The machine was working splendidly.

The human in command of the exercise, however -- an Army colonel -- blew a fuse.

The colonel ordered the test stopped.

Why? ...

The colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg.

This test, he charged, was inhumane.

This possibility -- that sentient beings might care for the wellbeing of non-sentient beings -- is not even on Harris's radar. Harris might say that this sort of sympathy is misplaced, is a value which is "wrong," but given that neither this nor anything else under discussion actually constitutes the worst suffering possible for everyone, I don't see how he'd justify that. The borders around "suffering" and "wellbeing" and "sentience" are too blurry for the fine distinctions Harris is trying to make, and his basis for using those concepts to ground moral choices is too ambiguous.

FACT #7: In so far as our subsidiary values can be in conflict--e.g. individual rights vs. collective security; the right to privacy vs. freedom of expression--it may be possible to decide which priorities will most fully avoid the worst possible misery for many, most, or even all sentient beings. Science, therefore, can in principle (if not always in practice) determine and prioritize our subsidiary values (e.g. should we value "honor"? If so, when and how much?).

"It may be possible" is not a testable claim, thus not legitimate as a scientific fact. Nor can one legitimately go from "it may be possible" to "science can." At best, he's saying "science may be able to determine and prioritize our subsidiary values." And few people would disagree that science can inform those choices. The question is whether other factors enter into those choices, a question that this point fails to dismiss.

Every extant moral system already has a complex system for weighing the importance of certain values against others in context-dependent ways. I think (but accept the possibility of error) that every major conflict over moral questions boils down either to a disagreement about which value ought to take precedence in a given situation, or about which group affiliation should take priority. If Harris is unable to establish that science can do this weighing, he's not creating anything that can parallel extant moral systems.

FACT #8: One cannot reasonably ask, "But why is the worst possible misery for everyone bad?"--for if the worst possible misery for everyone isn't bad, the word "bad" has no meaning. (This would be like asking, "But why is a perfect circle round?" The question can be posed, but it expresses only confusion, not an intelligible basis for skeptical doubt.) Likewise, one cannot ask, "But why ought we avoid the worst possible misery for everyone?"--for if the term "ought" has any application at all, it is in urging us away from the worst possible misery for everyone.

I have infinite faith in the ability of philosophers to concoct reasonable ways to ask questions that seem utterly absurd, so I hesitate to fully endorse this claim. I will say that it's a uselessly weak claim that does nothing to get us to a genuine "ought." It also leaves us grasping a bit to empirically measure "misery," to then determine which misery is "worst," how to determine which entities to include within the scope of "everyone," and how best to aggregate their suffering to determine which misery is "worst for everyone." In short, every noun and adjective in this phrase -- a phrase he underlines at every occurrence -- is value-laden.

FACT #9: One can, therefore, derive "ought" from "is": for if there is a behavior, intention, cultural practice, etc. that seems likely to produce the worst possible misery for everyone, one ought not adopt it. (All lesser ethical concerns and obligations follow from this).

Incorporating by reference all the previously cited flaws with the underlined phrase, and my general sense that this "ought not" is not the sort of "ought" which anyone could find useful as a moral code, I will note that the "therefore" is not part of any obvious syllogistic structure that compels the truth of the subsequent claim. The parenthetical might rescue this from a charge of utter uselessness, but Harris makes no effort to explain how he would derive any subsidiary values from the injunction against doing the absolutely worst thing possible. No value system in wide use seems intent on creating the maximum suffering possible for all sentient beings, so nothing at all really follows from this injunction.


If we value liberty, privacy, benevolence, dignity, freedom of expression, honesty, good manners, the right to own property, etc.--we value these things only in so far as we judge them to be part of the second set of factors conducive to (someone's) wellbeing.

One cannot reasonably ask, "But why is the worst possible misery for everyone bad?"--for if the worst possible misery for everyone isn't bad, the word "bad" has no meaning.

These two points seem to me to be at the core of Harris' confusion. He's basically saying that 1) everybody's values are ultimately utilitarian, and 2) everybody's moral terminology is ultimately utilitarian, so we can just go straight from the "is" of well-being-based utility to the "ought" of moral obligation.

1) is, as you note, totally wrong. Haidt and Hauser and everybody else who's studied the matter have found that we value things for all sorts of reasons besides their contribution to anybody's well-being, and that the majority of people will, in certain situations, make moral judgments which are directly detrimental to total well-being. Most people would not kill an innocent person at close range to save several other people, for instance.

What we value is a matter of empirical investigation, and the evidence is all against Harris on this score. Of course, it may be that Harris values things almost entirely for utilitarian reasons--I have no reason to think that the average utilitarian is lying or confused about their own motivations. But most humans aren't utilitarian.

As for 2), this is also a matter of empirical investigation. You can't just declare that the word "bad" has no meaning other than what you want it to mean; you have to actually find out what other people who use that word mean by it. And, again, it's pretty clear that Harris is wrong--people use "good" and "bad" in all sorts of non-utilitarian ways:

Any transgression of social norms is "bad."

Disobeying God's law is "bad," not because it indirectly leads to greater suffering, but because it's sinful in itself. In fact, most fundamentalists seem to agree that, if not for Christ's sacrifice, the worst possible suffering for everyone would be good: that is, all sentient beings would morally merit eternal damnation if they weren't redeemed.

Me, I'm an ethical subjectivist. For me, "X is bad" means "I would feel guilty if I contributed to X, and I would disapprove of anyone else who contributed to X, and I would feel satisfied and proud of myself or anyone else who prevented X."

In practice, of course, most of us would find it "bad" to inflict unspeakable torments on all sentient beings. (People who simultaneously believe in a benevolent god and near-universal damnation excepted.) But the fact that we tend to agree with Harris' morality in this boundary case is hardly evidence that he's accurately characterized human morality in general. There are lots of possible value systems which would give this same result, many of which diverge significantly from Harris' in more realistic scenarios.

By Anton Mates (not verified) on 12 Apr 2010

In fact, most fundamentalists seem to agree that, if not for Christ's sacrifice, the worst possible suffering for everyone would be good: that is, all sentient beings would morally merit eternal damnation if they weren't redeemed.

Rather than a valid objection, it seems to me that Harris' argument simply illustrates what a deranged mindset such people are coming from.

To over-simplify things, people who insist 2+2=5 aren't given a platform at mathematics conferences. It follows, from my reading of Harris, that their moral equivalent should no more be given a voice in the ethics discussion (for promoting, as Harris' argument puts it, worst possible suffering for all).

So maybe we actually already have a practical result?


nuspirit,

Rather than a valid objection, it seems to me that Harris' argument simply illustrates what a deranged mindset such people are coming from.

If by "deranged" you mean that you find their mindset bizarre, nonsensical or otherwise repugnant to your values, okay; so do I. I'm sure they feel the same about us weird people who think Hell would be morally atrocious. Harris' argument doesn't particularly explain why we're right and they're wrong.

If by "deranged" you mean something more objective--for instance, judged to be insane by the law, or by the standards of psychological/medical community--then this is simply wrong, since most of the people with this mindset are considered totally sane by those criteria.

In either case, your objection doesn't help Harris' argument, since his claim here was that "we" don't have such a mindset in the first place. The people who believe a universal Hell would be just are sentient and human, and many of them are Harris' fellow citizens, so presumably they fall under his category of "us." And, again, there are many other people--the vast majority of human beings, by every study I've seen--whose moral values are not founded exclusively on well-being considerations. So Harris is wrong...unless by "we" he meant "Me and a few other followers of a particular type of utilitarianism." And I don't think that's the case.

To over-simplify things, people who insist 2+2=5 aren't given a platform at mathematics conferences. It follows, from my reading of Harris, that their moral equivalent should no more be given a voice in the ethics discussion (for promoting, as Harris' argument puts it, worst possible suffering for all).

In the first place, people who insist 2+2=5 certainly are given a platform at math conferences, provided they supply their definitions of addition and equality so everyone else can follow along. It's trivially true that 2+2=5 in arithmetic modulo 1, for instance (see the sketch below). Whether their definitions are the "right" ones is not particularly relevant; mathematicians make entire careers out of taking some commonly-accepted axiom or definition, rejecting or inverting it, and exploring the consequences. Math doesn't come with a creed.

In the second place, even if this were true for math conferences, nothing whatever follows concerning ethical discussion unless you first demonstrate some relevant similarity between mathematical and ethical statements. What is the relevant similarity between "2 + 2 = 5" and "it would be moral for God to send all sentient beings to Hell", other than the fact that you, I and Sam Harris disagree with both statements?
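To make the modular-arithmetic aside concrete, a minimal sketch (Python is incidental here; note that mod 1 is the only modulus that rescues "2 + 2 = 5", since 1 is the only divisor of 5 - 4):

```python
# In arithmetic modulo 1 every integer reduces to 0, so both sides of
# "2 + 2 = 5" are congruent and the statement is trivially true there.
assert (2 + 2) % 1 == 5 % 1 == 0

# In ordinary integer arithmetic, the same statement is false.
assert (2 + 2) != 5
```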

By Anton Mates (not verified) on 12 Apr 2010

What is the relevant similarity between "2 + 2 = 5" and "it would be moral for God to send all sentient beings to Hell"?

To be honest I'm having a bit of a hard time parsing the latter in an is-ought context, since it deals specifically with what would be moral for God, not us.

If you permit me to replace it with "all sentient beings are inherently worthless and deserving of eternal torture", the morally honest thing to do would then be to instate the torture here and now, since God might not, after all, exist (and in that case beings deserving of torture might escape it).

If on the other hand the worthlessness of sentient beings is contingent on God existing in the first place, the whole value judgement is suspect since again God might not exist (this of course would also be a (super)naturalistic fallacy).

My point, which I freely admit may not be very well made, is that it is no more irrational to dismiss the morality of a person who thinks we should immediately start torturing each other to the best of our ability than it is to not have monetary transactions with a person who demands 5 dollars in exchange for 4.

Sam Harris' argument for his definition of the defining "purpose" of moral behavior (to increase the well-being of conscious creatures) was disappointing. This assertion does not appear to open any credible paths for bringing morality into the realm of science.

It seems to me that Sam has ignored the present state and direction of the relevant literature. Based on that literature, I expected him to propose that the "purpose" of morality could be defined based on what moral behavior "is" as a matter of empirical science, rather than what moral behavior "ought" to be based on opinion, intuition, or logic without regard to science.

The literature as I read it would support Sam arguing something along the lines of what follows:

In the field of the evolution of morality, the following observation would be uncontroversial: "Moral behaviors increase, on average, the synergistic benefits of cooperation and are unselfish at least in the short term." Such an observation is either empirically true or false. Scientific provisional "truth" could be established relative to other competing observations based on 1) explanatory power for the diversity and contradictions of moral behaviors and cultural moral standards, 2) explanatory power for puzzles about moral behaviors and moral intuitions (such as why people's moral intuitions and actions are often not strictly utilitarian), 3) predictive power for moral intuitions, 4) universality, 5) lack of contradictions with known facts, and so forth.

If shown to be provisionally "true", this observation could be the basis for an objective definition of morality that, as Sam is attempting to do, brings morality into the realm of science. Of course it says nothing about what morality "ought" to be, where "ought" entails justificatory force beyond reason for accepting its burdens. But arguments can be made that it would be rational to accept the burdens of such a morality even when the individual expects that it will not be in their best interests. Those arguments can be summarized by the statement that "It is more rational for me to rely on the wisdom of the ages (moral wisdom) to predict what action will be, on average, in my best interests rather than my confused, imperfect predictions of the moment".

I was prompted to post this here due to Joshua Rosenau's interest in Evolutionary Biology, where much of the relevant literature can be found.

By Mark Sloan (not verified) on 13 Apr 2010

Just FYI, the example of moral behavior described in "an Army colonel's sympathetic defensiveness toward a mine-destroying robot" above is usefully consistent with the idea that moral behaviors are strategies to exploit the benefits of cooperation.

The moral intuitions that drove the colonel's perhaps misplaced concern for the "well being" of the machine can be understood as heuristics for increasing the benefits of cooperation between conscious beings. Without such intuitions, it seems evident that the benefits of cooperation would be reduced in interactions with conscious beings with self interests. The important thing is the explanatory power for why such intuitions exist, not how we can be mistaken in applying them appropriately.

By Mark Sloan (not verified) on 13 Apr 2010

Science is usually not imperatively normative, which is why we have philosophy (and religion and morality and ethics).

Then again, most people's religious beliefs - deriving, as they usually do, from their parents and their personal histories - aren't all that normative for me, either.

By Marion Delgado (not verified) on 13 Apr 2010

nuspirit,

To be honest I'm having a bit hard time parsing the latter in an is-ought context since it deals specifically with what would be moral to God, not us.

Well, one might derive as a consequence, "It would be immoral for us to complain or try to prevent it if God chose to do such a thing."

If you permit me to replace it with "all sentient beings are inherently worthless and deserving of eternal torture", the morally honest thing then to do would be instate the torture here and now since God might not, after all, exist (and in that case beings deserving of torture might escape it).

I don't think you can replace it with that, though. For such people, it's a critical part of the statement that "all sentient beings are deserving of eternal torture by God"; it doesn't follow that anyone else is morally authorized to act on God's behalf in this case. "Playing God" is generally considered wrong, after all.

Here again is a widespread feature of human morality that doesn't fit into Harris' model: the rightness of actions is judged not just by their consequences (such as their impact on well-being), but by the authority of the actor. It's often possible to justify respect for authority on utilitarian grounds--for instance, you can say that collective well-being is usually enhanced by respecting a woman's right to bodily autonomy or by refraining from vigilante justice, so it's a good rule of thumb to do such things. But a lot of people don't do this. They value respect for authority as a moral good in itself.

Again, Harris has a perfect right to consider such values misguided, harmful, invalid, morally dishonest, even deranged. But he's wrong to say that we don't hold them.

If on the other hand the worthlessness of sentient beings is contingent on God existing in the first place, the whole value judgement is suspect since again God might not exist (this of course would also be a (super)naturalistic fallacy).

Weelll, by that argument all of Harris' value judgments are suspect because other minds might not exist, hence we can't be sure anyone else has the capacity for well-being. I don't think that value judgments are automatically delegitimized if they're founded on contingent claims that might be wrong.

My point, which I freely admit may not be very well made, is that it is no more irrational to dismiss the morality of a person who thinks we should immediately start torturing each other to the best of our ability than it is to not have monetary transactions with a person who demands 5 dollars in exchange for 4.

I agree; I think there can be any number of rational reasons to dismiss just about anyone's morality. A Christian could rationally dismiss the morality of Let's Torture Everyone Guy on the grounds that Christ is the fundamental moral authority, and Christ's instructions to his followers are pretty clear on ruling out global torture. I can rationally dismiss the morality of Let's Torture Everyone Guy on the grounds that his moral values are clearly very, very different from mine, so I gain little or nothing from trying to work with him to satisfy both our moralities simultaneously. For his part, LTEG would rationally dismiss my morality on the grounds that I'm a torture-hating scumbag who clearly has no claim to moral expertise...as he defines it. Harris' caricature of moral relativism notwithstanding, pretty much no one has a problem dismissing the morality of someone they consider a moral monster.

However, I think Harris is going beyond this fairly uncontroversial position, to claim that it is irrational not to dismiss such a person's morality--that is, that in some sense we're objectively obligated to leave them out of the conversation. And I don't think he supports that claim very well.

To use your economic example, it's perfectly rational not to exchange 5 dollars for 4, if you don't want to. But there are many scenarios under which it would also be rational to do so. Maybe you're receiving the 4 dollars as a loan, and paying them back later with 1 dollar interest. Maybe the 4 dollars are silver dollars, of a type you like to collect, and are willing to pay $1.25 for each. Maybe you're giving the person 1 dollar as charity, but you don't want them to be ashamed so you embed that action within a ritual of exchange between equals. We can't answer the question of whether your engaging in this transaction would be rational until we know what system of values you're applying to it.

Harris would like to be able to say, "Pssh, people who think you should exchange 5 dollars for 4 are financially crazy, everyone knows that. It's objectively the case that you shouldn't listen to them." But I don't think he can support that.

By Anton Mates (not verified) on 13 Apr 2010

Mark Sloan: I tend to agree generally with your take on how one might get at a scientific study of morality (though not all the way to scientifically determined right and wrong). I laid a bit of that out at the end of this post: http://scienceblogs.com/tfk/2010/04/correct_crank_or_crazy.php

I'll offer a different angle for Harris to pursue, in the spirit of friendly discourse. Rather than relying on mental state as a measure of rightness or wrongness of values, look to evolutionary game theory.

The only seemingly universal moral value I can think of -- and the only one Harris cites -- is reciprocal altruism: the Golden Rule. We can find it in most religions and most moral philosophies. We can also derive it as an evolutionarily stable solution in evolutionary game theory when models are parameterized even vaguely like human societies. I don't think that's an accident.

Living in large groups consisting of multiple family groups requires cooperation on some level, and kin selection alone can't get you the sort of altruism you need. It works fine for explaining why grandparents or siblings might provide childcare, but not why unrelated individuals should work together for the good of society as a whole, and if you can't explain that, you can't explain human society. My theory is that human moral systems exist to propagate rules which maintain stability and altruism within genetically heterogeneous populations.

Harris suggests that there may be "many peaks on the moral landscape," but doesn't really motivate that on any theoretical grounds. But it's totally reasonable to invoke evolutionary psychology here to argue that the human brain and human social conventions evolved in a way that promotes societal stability, and that the moral landscape is shaped by the evolutionary pressures on societal stability.

With that in hand, plus work like Haidt's and other social psychologists', it's possible to come up with a set of dimensions for the multidimensional moral landscape and a few of the major peaks and valleys in it. It should be possible to develop game theoretic models of how those values might interact, to test whether the empirical landscape matches our model of societal stability, and to begin trying to account for variances between model and reality.
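As a sketch of what such a model might look like, here is an illustrative toy (the strategies and payoff values are standard textbook choices, not anything from this post): tit-for-tat, a simple embodiment of reciprocal altruism, spreads through a population playing an iterated prisoner's dilemma.

```python
# Replicator-style toy model: strategy frequencies grow in proportion
# to average payoff in an iterated prisoner's dilemma.

# Standard prisoner's dilemma payoffs for (my move, partner's move).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(partner_history):
    # Reciprocal altruism: cooperate first, then copy the partner.
    return partner_history[-1] if partner_history else 'C'

def always_defect(partner_history):
    return 'D'

def play(strat_a, strat_b, rounds=50):
    """Total payoffs for two strategies over an iterated game."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def evolve(freqs, generations=30):
    """Update strategy frequencies in proportion to expected payoff."""
    strategies = {'TFT': tit_for_tat, 'AllD': always_defect}
    for _ in range(generations):
        payoff = {a: sum(freqs[b] * play(strategies[a], strategies[b])[0]
                         for b in freqs)
                  for a in freqs}
        total = sum(freqs[a] * payoff[a] for a in freqs)
        freqs = {a: freqs[a] * payoff[a] / total for a in freqs}
    return freqs

print(evolve({'TFT': 0.5, 'AllD': 0.5}))
# From an even initial split, tit-for-tat takes over the population:
# reciprocal altruism is stable once common.
```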

This would be an interesting exercise, and could even be informed by fMRI studies of one form or another. What it cannot do is tell us that desiring a stable society is right or wrong. As Douglas Adams observes in The Hitchhiker's Guide to the Galaxy, modern society has not instilled immense confidence in the merits of society as constituted: "Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans." Whether we should have come out of the trees or not, whether we ought to live in society or not, whether society ought to be organized as it is: these are interesting questions, but they are not scientific questions. They generate no objectively falsifiable predictions. And trying to subsume them into science is just wrongheaded.

Marion Delgado: Science is usually not imperatively normative, which is why we have philosophy (and religion and morality and ethics).

This is because science (as a philosophical discipline) is about how the world IS. When you start talking about choices that "ought" to be made, to make something the way it "ought" to be, that's engineering, not science.

Part of the confusion is that the anthropological practice of "science" steps across that philosophical line regularly, in deciding which way an experiment "ought" to be set up, which kind of experiment "ought" be considered for a question, which question "ought" the researcher be investigating, et cetera. All of which are design choices -- thus, engineering.

And to decide which choice you "ought" to make, you have to have an ordering relationship defined for an enumerated set of choices -- preferably, a relationship making the set into a join semilattice so that a "best choice" may be said to exist.

Which requires defining such an ordering relationship. Which -- in a sense -- requires a way to decide which definition you "ought" to use....
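A toy illustration of why the ordering matters (my construction; the "liberty" and "security" scores are made up): under a component-wise partial order on choices, no single best choice need exist, though joins always do.

```python
# Score each choice on two values that can conflict; the axes and
# numbers are illustrative assumptions, not anyone's real metric.
choices = {'A': (3, 1), 'B': (1, 3), 'C': (2, 2)}  # (liberty, security)

def dominates(x, y):
    """x >= y in the component-wise partial order."""
    return all(a >= b for a, b in zip(x, y))

# No choice dominates all the others, so the partial order alone
# fails to name a "best choice"...
best = [name for name, v in choices.items()
        if all(dominates(v, w) for w in choices.values())]
print(best)  # -> []

# ...but component-wise joins (least upper bounds) always exist, which
# is what makes the value space a join semilattice. Picking a "best
# choice" still needs a further rule -- that is, another "ought."
def join(x, y):
    return tuple(max(a, b) for a, b in zip(x, y))

print(join(choices['A'], choices['B']))  # -> (3, 3)
```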

Josh Rosenau: And trying to subsume them into science is just wrongheaded.

Because it's not a question of science, it's an engineering problem. =)

(On the other hand, saying it is "wrong" or "bad" implies an ordering relationship on choices has been determined....)

Josh,

The only seemingly universal moral value I can think of -- and the only one Harris cites -- is reciprocal altruism: the Golden Rule.

I'm not sure the Golden Rule should be identified with reciprocal altruism. The Golden Rule is usually some form of "Do as you would be done by;" a moral code embodying reciprocal altruism would state, "Do as you are done by." In the latter case you have no moral obligations to others except insofar as they've honored their obligations to you, which isn't generally the case under the Golden Rule. The difference between the two approaches can be dramatic; under Jesus' interpretation of the Golden Rule in the Gospels, for instance, you're obligated to be generous and pacifistic even towards someone who has physically attacked you or taken your stuff. There's nothing reciprocal about it (although Jesus does ground it conceptually in reciprocal altruism, by saying it's God that will repay you rather than the other human).

OTOH, the Golden Rule might be a useful opening gambit for producing mutually-beneficial interactions in a society which is already dominated by reciprocal altruism. If your neighbors tend to do as they're done by, then doing to them as you would be done by will frequently lead to them doing you favors in return. But someone has to perform the first kindness "unprovoked," in order to start that relationship, and the Golden Rule could be used to motivate that behavior.

(Of course, if you do a kindness for someone who turns out to be a cheater, reciprocal altruist tendencies would lead you to stop following the Golden Rule...and indeed, I think people are pretty good at ignoring it when someone pisses them off.)
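To put the distinction in toy strategic terms (these strategy definitions are my gloss on the two slogans, nothing more):

```python
def golden_rule(partner_history):
    # "Do as you would be done by": cooperate unconditionally.
    return 'C'

def do_as_done_by(partner_history):
    # Pure reciprocity: mirror the partner's last move; with no
    # history yet, it owes nothing, so it starts by defecting.
    return partner_history[-1] if partner_history else 'D'

# The Golden Rule player's "unprovoked" first kindness is what flips
# the pair into mutual cooperation from round two onward; two pure
# reciprocators paired together would defect forever.
hist_g, hist_r = [], []
for _ in range(5):
    g, r = golden_rule(hist_r), do_as_done_by(hist_g)
    hist_g.append(g)
    hist_r.append(r)
print(list(zip(hist_g, hist_r)))
# -> [('C', 'D'), ('C', 'C'), ('C', 'C'), ('C', 'C'), ('C', 'C')]
```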

By Anton Mates (not verified) on 14 Apr 2010

Josh, thanks for your comment. As you note, your April 1 post is certainly in the same direction as my own thinking. I wish I had seen it and commented on it then.

I expect we agree that there is nothing in science that tells us we ought to do anything, where ought entails justificatory force beyond rational self interest for accepting the burdens of any morality. Therefore justification for accepting the burdens of a science based morality can have no basis in science except by rational self interest. Also, evolutionary game theory is a useful source of strategies and heuristics for self interested agents to maximize the synergistic benefits of cooperation by acting in ways that might be described as unselfish at least in the short term and therefore perhaps even "moral" (ignoring questions of conscious choice and that sort of thing). This is a good start.

"The only seemingly universal moral value I can think of -- and the only one Harris cites -- is reciprocal altruism: the Golden Rule." The following are arguably universal moral behaviors: kin altruism (as mothers nursing offspring), aversion to inbreeding, willingness to risk injury and death to defend family and friends, and willingness to accept penalties to punish wrongdoers. These are actions cultures universally feel people ought to practice, and in that sense they represent values. But attempting to define a science based morality by assembling a list of values seems to me an approach full of difficulties.

In the field of evolutionary morality, the focus is rather on how these universal values (as well as all the diverse and contradictory cultural values) could be the products of genetic evolution and cultural evolution. For them to be products of evolutionary processes there must be an identifiable selection force or forces.

The growing consensus in evolutionary morality is that almost all moral behaviors and cultural moral standards can be understood as the evolutionary products of a single selection force, the benefits of cooperation. (The only relevant benefit for genetic evolution is reproductive fitness. But cultural norms can also be selected for by the material goods and emotional goods produced by cooperation.) In this view, the diverse cultural moralities are different sets of strategies and heuristics for producing Harris' local peaks in morality.

Ok, so how could science tell us an act was objectively right or wrong? Obviously, it could only tell us an act was right or wrong relative to what morality objectively "is" as a matter of science. Here that is proposed to be strategies and heuristics for exploiting the benefits of cooperation.

Why should anyone care about the objective moral judgments of this particular definition of morality? They might care if they believed it was in their enlightened self interest to practice and to advocate that others practice such a morality in preference to all alternatives.

I can argue there are good reasons many secular people could be convinced it was in their enlightened self interest to do just that. For at least these people, moral behaviors could be objectively determined to be moral or immoral as a matter of science. Here objective means independent of human opinions. So the fact that other people prefer other moralities is irrelevant to the objectivity of this definition of right and wrong. It seems to me that the only moral judgments that could be objective are those based in what moral behavior "is" as a matter of science.

So it is not a question of "Can moral judgments be objective?", but rather "Why should anyone care what these objective moral judgments are?" I think the day will come when people care, based on enlightened self interest, about the first objective morality. And I think that will be whatever science eventually concludes moral behavior actually "is".

By Mark Sloan (not verified) on 14 Apr 2010