I suppose I should preface this post with praise before I get to the criticism of Sam Harris that follows, even though it seems odd to do so. I guess spending all your time in academia puts you in a certain kind of headspace: you don’t take it as a personal attack when someone criticizes you, and you respond to the actual substance of the criticisms, not to odd strawmen, when you formulate your replies. That’s not a stab at Harris at all. I don’t know that he’d react badly to this, and I doubt he’ll actually read it. Still, I have learned all too well from doing this blog that the people you critique often pay less attention to the substance of your criticisms than to the mere fact that you criticized some position they hold at all. Eh…oh well. To Sam, and to everyone else who reads this: I like your work a lot, and this post should not be seen as reaching beyond its stated subject matter.
Sam Harris recently gave a TED talk; it’s posted to the right. The talk is titled “Science can answer moral questions.” He said lots of things in it, but the main point, as the title suggests, was that there is nothing to prevent science from addressing moral questions, that morality is not something different in kind from the other things science examines. He also posted a response to some of the early comments on the talk, which can be found here. This post is a critique of both the talk and that commentary.
At the beginning of his talk, Harris says something quite strange. He is concerned, at this point, with why we feel the tug of moral obligation toward some things, like other people, but not toward other things, like rocks. He says that we don’t feel obligations toward rocks because rocks can’t suffer. While that might be how someone would justify the distinction after the fact, I don’t think it’s at all the right explanation for why we feel no sympathy for rocks but do feel it for other animals. The better explanation comes from science, particularly evolutionary psychology: we evolved to cooperate with things like us, and sympathy fosters that kind of cooperation. For that reason we feel sympathy toward things we deem like ourselves. In that light, we feel the greatest sympathy toward our family, then our tribe, then other humans, then animals most like us, then animals less like us, and on down to rocks. One easy way to demonstrate this is to trick ourselves into feeling sympathy for something that cannot possibly suffer by playing on the evolved intuitions that guide those sympathies. The easiest way to do that is to anthropomorphize something, anything. I’ll point to the movie AI as an intuition pump here. Put aside the main robot child, since the whole movie is concerned with whether or not that entity is a person, and think of the other robots in the film. I’d suggest that few who watched it failed to feel deep sympathy for the various robots destroyed throughout. There is a particular scene at the “Flesh Fair” in which a robot that looks little like a human beyond a holographic female face is destroyed, and the fact that it is destroyed with a smile on its face only heightens the audience’s sense that it is a person. Most people feel sympathy for that robot and find the entire scene dark and slightly disturbing. But, of course, the scene could only be disturbing if the robots were, in fact, the kind of things capable of suffering, and there is little to nothing in the film to indicate anything of the kind. None of those objects were persons or could be said to suffer. If you disagree, you’re only proving my point. Put a smiley face on a toaster and have it talk to you, and soon you’ll worry about whether or not it’s in pain when it breaks. And that’s exactly because our sympathies here are the result of “gut feelings” rather than rational deliberation. We have evolved to feel sympathy toward things that look like us, and that is the source of our sympathy, not some rational deliberation about whether or not things have the ability to suffer in some recognizable way.
The fact is that suffering has long been a problem in the philosophy of mind generally. How do we know anything suffers? We have to rely on behaviors, but the suffering is not to be found in the behaviors themselves. We can and have created machines that exhibit some of the same behaviors that generally indicate suffering, yet no one genuinely thinks any suffering is occurring in them. Mental states just can’t be cashed out in terms of behaviors. So what are they? Well, they’re brain states of some sort. But that does not really answer the question either, for if brain states were identical to mental states, then anything lacking a brain state identical to the ones people have when in pain would not really be in pain at all. To highlight this, think of meeting an alien. As it slides down the walkway from its ship, it cuts its tentacle-like appendage. It quickly withdraws the appendage, makes some loud exclamation that we interpret as a curse, and, in English, says, “Wow, that hurts.” I think most people would think that creature was really in pain, and yet it surely would have nothing like the brain states we would recognize. There are ways around this problem, but the result is that, when we push on it, it is just really hard to say what is and is not in pain. Once we move to other species, if we are honest, it becomes harder still to feel justified in such claims. Do chimps feel pain? Almost certainly. Do dogs feel pain? Pretty sure. Do frogs feel pain? Uuhhh… Do flies feel pain? No idea whatsoever.
The point of all that is just that we don’t really have a good way to talk about suffering in other animals. As such, the suggestion that we can get around the obvious problem of relying on our guts as guides to sympathy by grounding sympathy instead in a rational examination of suffering is deeply problematic. For these reasons, I think Harris simply gets this wrong.
Another big issue I have to raise with Harris is his assertion in the video that "there is no notion, no version, of human morality and human values that I’ve ever come across that is not, at some point, reducible to a concern about conscious experience and its possible changes." Man, that is just wrong. Really, only utilitarianism is concerned, at bottom, with that kind of thing. He is certainly wrong, contrary to his explicit assertion otherwise, about the morality that comes out of the big three monotheisms, and I don’t know how he could think otherwise; the claim is prima facie not the case, and he does nothing to explain how things could look different from that sort of perspective. In those religions, what makes something good is that it is the result of a will that is in line with God’s. That is, for an action to be moral, it must be the result of a person intending to do what God has commanded. How that gets carried out in specific circumstances might be very complicated, but that is the core idea, and it is a radically different standard for what makes something moral than the description Harris offers. He seems to miss that entirely.
So, what kinds of changes in conscious experience are relevant to morality? According to Harris, they are the ones that increase or decrease well-being and suffering. That is, it is good to increase well-being and decrease suffering, and bad to decrease well-being and increase suffering. We can think of these two things as aspects of something like happiness. In that way, for Harris, being moral increases happiness, and being immoral decreases happiness. But, again, I need to point out that this is just utilitarianism. While that position might be popular, it is hardly the only ethical system out there, and, as many of its big proponents will admit, it is not clear that such a standard is objective in the way Harris claims. Even people like Peter Singer have admitted that it is hard to ground utilitarianism in anything beyond tastes and preferences. (Russell Blackford, funnily enough, discusses this in relation to Harris’ talk over at his blog, which you should be reading anyway.) That is, we can say what makes something moral once we accept that acting morally is something like increasing happiness. But as to why one should actually act that way, it is hard to say much beyond the suggestion that doing so is likely to increase happiness, which means that one who acts morally is more likely to be happy. So, if you value happiness, promote happiness. But if you do not already value happiness, it is tough to say why you should promote it.
Harris suggests that what makes something good for a Christian is that it brings you happiness after death, but that’s not true at all. I know of no Christian thinker who holds anything like that view, nor any Muslim, and it does not even look like most Jews believe in an afterlife at all, so the claim clearly cannot work there. It might be that getting one’s will in line with God’s results in getting into Heaven, but that is certainly not the goal, and it is not even the kind of thing that can be earned; it is strictly a matter of Grace. More than that, it does not look like moral systems other than utilitarianism worry about happiness either. Look at Kantianism. It is going to be difficult to find a notion of human happiness in the Categorical Imperative. In fact, Kant is clear that we can never be sure an action that results in happiness is moral, even if it coincides with the Categorical Imperative! That is because what makes an action moral is that it is done out of respect for the moral law, and if something makes you happy, you might be doing it for the wrong reason. So, there, not only is happiness not the point, it actually gets in the way of moral reasoning.
He makes the same sort of mistake in his further comments regarding the responses to his talk. He writes,
Imagine some genius comes forward and says, “I have found a source of value/morality that has absolutely nothing to do with the (actual or potential) experience of conscious beings.” Take a moment to think about what this claim actually means. Here’s the problem: whatever this person has found cannot, by definition, be of interest to anyone (in this life or in any other). Put this thing in a box, and what you have in that box is—again, by definition—the least interesting thing in the universe.
Kantian morality “has absolutely nothing to do with the (actual or potential) experience of conscious beings.” Kant’s morality is about adherence to the moral law for the sake of the law; outcomes and experiences never factor into what makes something moral or immoral. And yet, contrary to Harris’ assertion, people have been intensely interested in Kant for over 200 years. Again, it just turns out that his intuitions here do not represent everyone else’s. While such a system might be uninteresting to Harris, it clearly holds interest for lots and lots of other people.
Sean Carroll wrote one of the early responses to Harris’ talk. Harris takes issue with this quote:
But what if I believe that the highest moral good is to be found in the autonomy of the individual, while you believe that the highest good is to maximize the utility of some societal group? What are the data we can point to in order to adjudicate this disagreement? We might use empirical means to measure whether one preference or the other leads to systems that give people more successful lives on some particular scale — but that’s presuming the answer, not deriving it. Who decides what is a successful life? It’s ultimately a personal choice, not an objective truth to be found simply by looking closely at the world. How are we to balance individual rights against the collective good? You can do all the experiments you like and never find an answer to that question.
How does Harris respond to this?
Again, we see the confusion between no answers in practice and no answers in principle. The fact that it could be difficult or impossible to know exactly how to maximize human wellbeing, does not mean that there are no right or wrong ways to do this—nor does it mean that we cannot exclude certain answers as obviously bad. The fact that it might be difficult to decide exactly how to balance individual rights against collective good, or that there might be a thousand equivalent ways of doing this, does not mean that we must hesitate to condemn the morality of the Taliban, or the Nazis, or the Ku Klux Klan—not just personally, but from the point of view of science.
But this misses Carroll’s point. Carroll was not talking about there being no answer in practice; he was saying that there is no answer in principle to this kind of question that is not arbitrary and based on tastes. And no number of people sharing the same tastes will ever make those tastes anything more than tastes.
The point here is that if you do not already accept happiness as a value, there seems to be no argument for why you should promote happiness. The kind of empirical data Harris can offer assumes the value of happiness; it cannot demonstrate that a person who is made happy by strange things, or who is motivated by some value other than happiness, has made an observational or cognitive error. This is a problem for Harris because he needs to assume objective value (i.e., the objective value of happiness) in order to claim that science is the foundation for objective morality, and he can’t make that assumption! In the end, he wants to say that science can tell us what is and is not moral because it can tell us what makes people more or less happy. Even granting the latter, and I think one could reasonably be skeptical of it, science still cannot tell us what we should do unless we already value happiness. As such, Harris’ claim that science can tell us what is and is not objectively moral just does not hold up under close scrutiny.
I will write a part two to this post in the next couple of days, in which I will further examine the notion of values as tastes, moral relativism, and how I think Harris has gone wrong in mistaking prudential imperatives for moral imperatives.