So the other day Sam Harris was asked to speak to the TED conference, and he presented what he believes to be a basis for scientifically validating morals and values. This is interesting, as most people who’ve studied the issue have concluded, for pretty compelling reasons, that this is not the sort of thing that’s really possible.
Now, I haven’t seen Harris’s talk, as I don’t care for watching YouTube and it’s not a form well-suited for evaluating what would be, if true, a fairly radical discovery. The thing is, skepticism gives you a toolkit for addressing such circumstances. When someone makes an extraordinary claim, one that runs counter to the last few centuries of human knowledge, you expect them to be able to take on all comers. Extraordinary claims demand extraordinary evidence, so one shouldn’t make such claims without expecting intense scrutiny. People who make extraordinary claims are either correct (rare, but wonderful), crazy (out of touch with all of reality, including the subsection touching on the specific topic at hand; fairly rare), or cranks (generally sane but out of touch with reality on some specific issue; common).
Below the fold, I explain why I don’t think Harris belongs in the first two categories, but first I’ll just offer my Shorter Sam Harris:
Harris: Let me put it this way. Have you ever heard of Plato, Aristotle, Socrates, Hume, Rawls, Nozick, and Parfit?
Man in Black: Yes.
It didn’t work for Vizzini, and I don’t think it goes to Harris’s credit either.
Sean Carroll kicked the discussion of Harris’s ideas off by pointing out that Harris’s talk mistakes is for ought, a useful distinction drawn by Hume and a range of other philosophers over the years. If Harris really did make that mistake, it’s a biggy. Sean makes various other points along the way, but as we say on Passover, dayenu (it would have been enough). When you screw up is/ought, things generally go pretty far astray, whether it’s just because you commit the naturalistic fallacy, or because you start thinking it’s OK to discriminate based on race because of pre-existing correlations between race and socioeconomic status.
The problems seem to have been so obvious that even creationists got in on the game with a shockingly coherent essay at Bill Dembski’s blog (and another of less grace at the same venue).
This is the first big test of our trilemma, to see whether Harris would take these criticisms in hand and present a well-thought-through reply, or reply like a crazy person or a crank. Things went downhill, with Harris first twatting “Please know that I will be responding to this stupidity,” and linking back to Sean’s blog. This, let me say, doesn’t speak well of Harris’s own values: one generally shouldn’t respond to a fairly obvious, thoughtful, and polite criticism of one’s earth-shattering idea by calling it “stupidity.” That’s what cranks do, not what serious people do. At least, those are the values of the society I grew up in. I also grew up in a society where we don’t go around threatening to nuke other people because of their religion, so Harris’s and my values are, shall we say, different.
In any event, Harris followed his offensive tweet with a longer blog post that does nothing to dispel the air of crankery. He begins by treating YouTube commenters as his serious opponents. To borrow a phrase: “Youtube comments. You will never find a more wretched hive of scum and villainy.”
Then he responds to Sean by
- continuing to call Sean stupid
- suggesting that he (Sam Harris) is a superior philosopher to David Hume
- claiming that the is/ought distinction is trivially wrong
- insisting without argument that his ideas are superior to all prior attempts at moral philosophy
Now, all of these could well be true, but it’s unlikely. And by putting himself into an analogy where Harris is to Hume as Carroll is to Robert Oppenheimer … well, that’s not what serious people do. It’s what cranks do. Sean is a recognized expert on the fairly fundamental question of Why Time Exists, while Harris is a bit of a dilettante with a flair for self-promotion and a habit of tweaking his enemies’ noses. Good skills in many situations, but not an instant justification of the claim to have outstripped all prior moral philosophers (and don’t get me started on the immoral philosophers).
Harris also maunders over the meaning of consensus, and how we compare scientific consensus to moral consensus. It’s hardly a new line of reasoning, and Harris’s proposal is neither sufficiently novel nor so brilliantly argued as to merit his confidence in his own excellence.
Part of the problem is that moral consensus is rarely more than skin deep. It’s true that pretty much every religion and every moral philosophy seems to have some version of reciprocal altruism. But it turns out that “do unto others as you would have them do unto you” can mean many different things even if everyone agrees that it’s right. In our general conversation, most people don’t think that ants count as “others” for those purposes, but that their children do. Some people hold that all sentient animals deserve standing in such a moral calculus, while others do not. For no small stretch of time, this country’s moral consensus was that descendants of Africans did not deserve equal standing with other Americans. There’s no clear moral consensus in today’s society that gay people – let alone the transgendered, intersexed, and otherwise queer – deserve equal standing with those of more conventional sexual orientation.
I like to think, like Martin Luther King, Jr., that “the arc of the moral universe is long but it bends toward justice,” that is towards a broadening of the community of equals. Unlike me, King thought that this arc was the result of “a creative force in this universe, working to pull down the gigantic mountains of evil, a power that is able to make a way out of no way and transform dark yesterdays into bright tomorrows,” i.e. God. I don’t. Thus, even in agreement we lack consensus.
Even were we to find consensus, it isn’t obvious that it would tell us that there is some universal morality, let alone tell us what that morality would be. Indeed, Harris’s own analogy shows us why. Scientific consensus is not what tells us that truth exists in the world. A nonscientist – like other nonspecialists in a given field – must rely on the consensus of expert opinion, but that consensus arises from empirically testable reality. We know that gravity (or something very much like it) must be correct not because physicists have reached a consensus that this is so, but because relevant experts have conducted research that all points the same direction. The experts are weathervanes for the truth, but the consensus of their opinion is not itself that truth. The consensus is helpful to the rest of us, because we aren’t all expert physicists, and we have to place a certain trust in the people who are. Consensus subject to objective testing is worth a great deal, but consensus on matters not subject to empirical testing is not necessarily meaningful.
And that’s where the analogy to morality falls apart. In no small part, it fails on the naturalistic fallacy. But let us grant Harris’s claims about “Hume’s lazy analysis of facts and values,” and move on. Even suspending the naturalistic fallacy, moral consensus would only mean something if non-experts were to use it as a basis for deferring to experts who, in turn, developed that consensus from first principles. But on what basis do we declare some people to be moral experts and others not? Harris does suggest that we create such a classification, but not how. The Catholic Church believes it already has such a system, as do most other religions. These systems are generally contradictory and incompatible, and the people set up as moral experts often turn out to be deeply morally flawed. And turning to fMRI of average citizens as a basis for expertise (Harris seems to think that the moral experts will be neuroscientists who measure some sort of brain state to evaluate which values people hold, or something) creates a circularity in which the experts to whom nonexperts are to turn must rely on the minds of nonexperts to achieve their expertise.
Here’s how Harris lays out his basic thesis:
When I speak of there being right and wrong answers to questions of morality, I am saying that there are facts about human and animal wellbeing that we can, in principle, know—simply because wellbeing (and states of consciousness altogether) must lawfully relate to states of the brain and to states of the world.
What he’s saying is that, in principle, an fMRI or some similar test might let us measure a person’s “wellbeing” and thereby determine what their values are, thus guiding us toward societal consensus about right and wrong, and onward to “right and wrong answers to questions of morality.”
But the first step here requires that “wellbeing” be a measure of values. And he may address this in his YouTube, but it strikes me as an unjustified leap. His essay just handwaves at the matter, asserting that those who disagree are “not really thinking about these issues seriously,” and that his point “seems rather obvious.” And of course it seems so to him, but the views of cranks often seem rather obvious to their authors. They too dismiss their critics as stupid, and past notables in the field as lazy, deluded, and deluding. This doesn’t make Harris a crank, but neither does it distinguish him from one.
The test of Harris’s claim is not whether it is obvious to him, but whether he can present a coherent and convincing body of evidence to support it. If it is science, it will generate testable predictions, and research will bear out those predictions. No doubt we can measure “wellbeing,” though it’s currently got no operational meaning that relevant experts agree upon. We may be able to show that some people’s wellbeing increases when certain values are fulfilled. But in what sense does this test that such values are reflections of morals, let alone that the morals are “right” or “wrong”?
Jonathan Haidt and others are already testing people’s moral intuitions, and finding remarkable consistencies. But they also find remarkable inconsistencies, or at least irrationalities. And even if we systematize such moral intuitions, how can we say that such intuitions are right or wrong? Harris seems to posit that maximizing “wellbeing” is the measure of right or wrong.
But this raises real problems. Wellbeing is itself a philosophically fraught notion, and which form of wellbeing a person values is itself a matter of personal and cultural values. For some, the maximization of personal happiness is the measure of wellbeing. For others, the maximization of overall happiness (weighted in various ways) is better. Some find self-sacrifice to be the highest form of wellbeing. Choosing one or the other is a value judgment, and it is hard to conceive how one might test the claimed superiority of one over the other. If the test itself relies on value judgments, we can hardly use it as a basis for the claim that the merits of values can be scientifically tested. Harris is aware of these problems, but his attempts to argue them away simply lack substance.
Set that aside, though, and consider the question of whether, if we had an agreed-upon definition of wellbeing, such wellbeing would actually tell us right from wrong in any scientifically testable way. In particular, note that we lack any clear way to know what it would mean to empirically test whether a moral value is right. The claim that wellbeing is linked to moral merit is also a value claim, and lacks any obvious testable basis. Harris surely believes it true, but does the claim make any testable prediction? I can conceive of none, and thus conclude that several steps in Harris’s logic are dependent on untestable value judgments. He’s built a grand theory, but it seems to amount to turtles-all-the-way-down logic. In short, Harris’s logical positivist effort (for that’s what it is) is running into the problem that logical positivism always hits.
This is why moral consensus, whether it exists or not, does not tell us the truth. Scientific consensus is a weathervane pointing to an invisible but measurable wind. But the question of whether there is any such measurable phenomenon underlying morals and values is not something subject to objective testing. As Harris notes:
Everyone has an intuitive “physics,” but much of our intuitive physics is wrong (with respect to the goal of describing the behavior of matter), and only physicists have a deep understanding of the laws that govern the behavior of matter in our universe. Everyone also has an intuitive “morality,” but much intuitive morality is wrong (with respect to the goal of maximizing personal and collective wellbeing) and only genuine moral experts would have a deep understanding of the causes and conditions of human and animal wellbeing.
But who counts as a “genuine moral expert”? In physics, we judge genuine expertise based on training and command of testable knowledge, and ideally the ability to formulate novel, testable, and correct predictions about as-yet unseen circumstances. What is the testable knowledge on which we judge the expertise of a purported moral expert? Each person possesses comparable experience exercising moral judgment, and while society may freely hold certain of those people to have made poor choices – even to be sociopaths with diminished or absent moral capacity – there’s no obvious scientific basis for that judgment.
Harris replies to this objection in turn. He cites Sean Carroll’s query “Who decides what is a successful life?” and replies:
Well, who decides what is coherent argument? Who decides what constitutes empirical evidence? Who decides when our memories can be trusted? The answer is, “we do.” And if you are not satisfied with this answer, you have just wiped out all of science, mathematics, history, journalism, and every other human effort to make sense of reality.
This is the sort of logic that Alan Sokal so brutally skewered in his 1996 hoax paper for Social Text. The issue is that, while certain aspects of science are indeed socially constructed, the enterprise itself is structured in a way that depends on a correspondence between claims and empirical reality. It is not that “we do” decide that memories are trustworthy (indeed, much research shows that memory is profoundly unreliable), nor can it be said that “we do” declare by fiat what counts as evidence. Science today is not what it was in Newton’s time in part because it was found that certain sorts of evidence and certain sorts of arguments work better at formulating claims which correspond with reality.
Harris claims that rejecting the above-quoted passage means I’ve “wiped out all science, mathematics, history, journalism, and every other human effort to make sense of reality.” I propose that his understanding of science is badly flawed, as flawed as that of any crank with a new unified theory. To preserve his theory, he’d do what creationists and other denialists always do with their forms of crankery: redefine science to fit his preferences, turn science into a popularity contest rather than a system for testing claims against empirical reality, and take shelter in solipsism if anyone tries to challenge his views.
I don’t think Harris is crazy. I also don’t think he’s correct. The ultimate measure of a crank, though, is not that he be sane and incorrect, but that he be sane, incorrect, and unwilling to change his views in light of reasoned discourse. Harris’s consistent and unargued dismissals of all prior moral philosophers bode poorly for his willingness to accept that he may have bitten off too much in one fell swoop, but if he’s a good scientist he’ll take the criticism in stride, break the problem down into smaller pieces, and work each up in turn. Sean seems not to think Harris is a crank, as he is still soldiering on. And maybe that dialog will turn up something of value.
I’ll offer a different angle for Harris to pursue, in the spirit of friendly discourse. Rather than relying on mental state as a measure of rightness or wrongness of values, look to evolutionary game theory.
The only seemingly universal moral value I can think of – and the only one Harris cites – is reciprocal altruism: the Golden Rule. We can find it in most religions and most moral philosophies. We can also derive it as an evolutionarily stable solution in evolutionary game theory when models are parameterized even vaguely like human societies. I don’t think that’s an accident.
Living in large groups consisting of multiple family groups requires cooperation on some level, and kin selection alone can’t get you the sort of altruism you need. It works fine for explaining why grandparents or siblings might provide childcare, but not why unrelated individuals should work together for the good of society as a whole, and if you can’t explain that, you can’t explain human society. My theory is that human moral systems exist to propagate rules which maintain stability and altruism within genetically heterogeneous populations.
Harris suggests that there may be “many peaks on the moral landscape,” but doesn’t really motivate that on any theoretical grounds. But it’s totally reasonable to invoke evolutionary psychology here to argue that the human brain and human social conventions evolved in a way that promotes societal stability, and that the moral landscape is shaped by the evolutionary pressures on societal stability.
With that in hand, along with work like Haidt’s and other social psychologists’, it’s possible to come up with a set of dimensions for the multidimensional moral landscape and a few of the major peaks and valleys in it. It should be possible to develop game theoretic models of how those values might interact, to check whether the empirical landscape matches our model of societal stability, and to begin trying to account for variances between model and reality.
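To make the game-theoretic exercise concrete, here’s a minimal Python sketch, assuming Axelrod-style payoffs and a handful of illustrative strategies (my choices for illustration, not anything Harris proposes): an iterated prisoner’s dilemma round-robin in which reciprocating strategies like tit-for-tat come out on top once they can meet each other, while unconditional defection finishes last.

```python
# A sketch of the game-theoretic exercise described above: an iterated
# prisoner's dilemma tournament in the style of Axelrod's. The payoff
# matrix is the standard one; the strategies are illustrative choices,
# not anything Harris proposes.

def tit_for_tat(my_hist, their_hist):
    # Reciprocal altruism: cooperate first, then mirror the opponent.
    return their_hist[-1] if their_hist else "C"

def grudger(my_hist, their_hist):
    # Cooperate until the opponent defects even once, then defect forever.
    return "D" if "D" in their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

# Standard prisoner's dilemma payoffs, keyed by (my move, their move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(me, them, rounds=200):
    """Return my total payoff over `rounds` rounds against `them`."""
    my_hist, their_hist, score = [], [], 0
    for _ in range(rounds):
        my_move = me(my_hist, their_hist)
        their_move = them(their_hist, my_hist)
        score += PAYOFF[(my_move, their_move)]
        my_hist.append(my_move)
        their_hist.append(their_move)
    return score

def tournament(strategies, rounds=200):
    """Round-robin (including self-play): each strategy's total payoff."""
    return {name: sum(play(strat, other, rounds)
                      for other in strategies.values())
            for name, strat in strategies.items()}

if __name__ == "__main__":
    totals = tournament({"tit_for_tat": tit_for_tat, "grudger": grudger,
                         "always_defect": always_defect,
                         "always_cooperate": always_cooperate})
    # Reciprocating strategies top the table; unconditional defection,
    # despite exploiting the naive cooperator, comes in last.
    for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{name:>16}: {score}")
```

The next step, in the direction of the “evolutionarily stable solution” point above, would be replicator dynamics: let each strategy’s share of the population grow in proportion to its tournament payoff, and see which mixes are stable.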
This would be an interesting exercise, and could even be informed by fMRI studies of one form or another. What it cannot do is tell us that desiring a stable society is right or wrong. As Douglas Adams observes in The Hitchhiker’s Guide to the Galaxy, modern society has not instilled immense confidence in the merits of society as constituted: “Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans.” Whether we should have come out of the trees or not, whether we ought to live in society or not, whether society ought to be organized as it is: these are interesting questions, but they are not scientific questions. They generate no objectively falsifiable predictions. And trying to subsume them into science is just wrongheaded.