Sam Harris, Science, and Morality

I suppose I’m going to preface this post with praise before I get to the criticism of Sam Harris that will follow, even though it seems odd that I should do that.  I guess spending all your time in academia puts you in a certain kind of headspace, that being that you don’t take it as a personal attack when someone criticizes you.  Moreover, you respond to the actual substance of the criticisms and not odd strawmen when you formulate your replies.  That’s not a stab at Harris at all.  I don’t know that he’d react that way to this, and I doubt he’ll actually read it.  Still, I have learned all too well from doing this blog that, often, the people you critique pay less attention to the substance of your criticisms than the mere fact that you criticize some position they hold at all.  Eh…Oh well.  To Sam, and everyone else who reads this, I like your work a lot, and this post should not be seen as reaching beyond its stated subject matter.

Sam Harris recently did a TED talk.  That’s it posted to the right.  The title of the talk is “Science can answer moral questions.”  In that talk he said lots of things, but the main point was, as one can see from the title, that there is nothing to prevent science from addressing moral questions, that morality is not something different in kind from the other things that science examines.  He also posted a response to some of the early comments on that talk, which can be found here.  This post is a critique of that talk and commentary.

At the beginning of his talk, Harris says something quite strange.  He is concerned, at this point, with why we feel the tug of moral obligations with regard to some things, like other people, but not to other things, like rocks.  He says that we don’t feel obligations toward rocks because they can’t suffer. While that might be how someone would justify the distinction, I don’t think it’s at all the right explanation for our not feeling sympathy for rocks but feeling it for other animals, and I think the better explanation we have comes from science, particularly evolutionary psychology. It just turns out that we evolved to cooperate with things like us. Sympathy fosters that kind of cooperation. For that reason we feel sympathy toward things we deem like ourselves.  In that light, we feel the greatest sympathy toward our family, then our tribe, then other humans, then animals most like us, then animals less like us, and on down to rocks. One easy way to demonstrate this is to trick ourselves into feeling sympathy for something that cannot possibly suffer by playing on those evolved intuitions that guide said sympathies. The easiest way to do this is to anthropomorphize something, anything. I’ll point to the movie AI as an intuition pump here. Put aside the main robot child, as the whole movie is concerned with whether or not that entity is a person, and think of the other robots in the movie. I’d suggest that there were not many who watched the film who did not feel deep sympathy for the various robots destroyed throughout the movie. There is a particular scene where a robot that looks little like a human at all beyond a holographic female face is destroyed at the “Flesh Fair.”  The fact that the robot is destroyed with a smile on its face only heightens the feeling the audience has that the robot is a person.  While there is little to indicate anything like suffering going on there, I think most people feel sympathy for that robot and, indeed, find the entire scene dark and slightly disturbing.  But, of course, it should only be disturbing if the robots were, in fact, the kind of things capable of suffering, and there is little to nothing to indicate anything of the kind.  The point there, of course, is that none of those objects were persons, could be said to suffer, etc. If you disagree, then you’re only proving my point. Put a smiley face on a toaster and have it talk to you, and soon you’ll worry about whether or not it’s in pain when it breaks. And that’s exactly because our sympathies here are the result of “gut feelings” rather than rational deliberation.  We have evolved to feel sympathy toward things that look like us, and this is the source of that sympathy, not some rational deliberation about whether or not things have the ability to suffer in some recognizable way.

The fact is that suffering has long been a problem in philosophy of mind in general.  How do we know anything suffers?  We need to rely on behaviors.  But, of course, the suffering is not to be found in the behaviors themselves.  We can and have created machines that exhibit some of the same behaviors that generally indicate suffering, but no one genuinely thinks that any such thing is occurring.  Mental states just can’t be cashed out in terms of behaviors.  So what are they?  Well, they’re brain states of some sort.  But that does not really answer the question, either, as, were brain states identical to mental states, anything that lacked a brain state identical to the ones people have when in pain could not really be in pain at all.  To highlight this, think of meeting an alien.  As it slides down the walkway from its ship, it cuts its tentacle-like appendage.  As it does so, it quickly withdraws that appendage, makes some loud exclamation that is interpreted as a curse, and, in English, says, “Wow, that hurts.”  I think most people would think that creature was really in pain.  And, yet, it surely would have nothing like the brain states we would recognize.  There are ways around this, but the result is that, when we push on it, it is just really hard to say what is and is not in pain.  Once we move to other species, if we are honest, it’s just harder to feel justified in such a claim.  Do chimps feel pain?  Almost certainly.  Do dogs feel pain?  Pretty sure.  Do frogs feel pain?  Uuhhh…  Do flies feel pain?  No idea whatsoever.

The point of all that is just that we don’t really have a good way to talk about suffering in other animals.  As such, suggesting we can get around the obvious problem of relying on our guts as guides to sympathy by attempting to ground sympathy in a rational examination of suffering is incredibly problematic.  For these reasons, I just think Harris gets this wrong.

Another big issue I have to raise with Harris is his assertion in the video that “there is no notion, no version, of human morality and human values that I’ve ever come across that is not, at some point, reducible to a concern about conscious experience and its possible changes.”  Man, that is just wrong. Really, only utilitarianism is concerned, at bottom, with that kind of thing. He is certainly wrong, contrary to his explicit assertion otherwise, about the morality that comes out of the big three monotheisms. Worse, I don’t know how he could think that. I really don’t know where he’s coming from on this, as it is prima facie not the case, and he does nothing to explain how things could be different from that sort of perspective. In those religions, what makes something good is that it is the result of a will that is in line with God’s.  That is, in order for some action to be moral, it must be the result of a person intending to do what God has commanded.  Of course, how that gets carried out in specific circumstances might be very complicated, but that is the core idea.  Harris seems to miss that entirely, as this is a radically different standard for what makes something moral than the one he offers.

So, what kind of changes in conscious experiences are relevant to morality?  According to Harris, they are the ones that lead to increases or decreases in well-being and suffering.  That is, it is good to increase well-being and decrease suffering, and it is bad to decrease well-being and increase suffering.  We can think of these two things as aspects of something like happiness.  In that way, for Harris, being moral increases happiness, and being immoral decreases happiness.  But, again, I need to point out that this is just utilitarianism.  While such a position might be popular, it is hardly the only ethical system out there, and, as many of its big proponents will admit, it is not clear that such a standard is objective in the way that Harris claims.  Even people like Peter Singer have admitted that it’s hard to ground utilitarianism in something beyond tastes and preferences.  (Russell Blackford, funny enough, discusses this in relation to Harris’ talk over at his blog, which you should be reading anyway.)  That is, we can say what makes something moral once we accept acting morally as being something like increasing happiness.  But as to why one should actually act that way, well, it is hard to say beyond the suggestion that it is likely to increase happiness, and that means the one acting morally is more likely to be happy.  So, if you value happiness, promote happiness.  But if you do not already value happiness, then it is tough to say why one should promote it.

Harris suggests what makes something good for a Christian is that it brings you happiness after death, but that’s not true at all. I know of no Christian thinker that thinks anything like that, nor any Muslim, and it does not even look like most Jews believe in any sort of afterlife at all, so it clearly cannot work there.  It might be that getting one’s will in line with God’s results in getting into Heaven, but that is certainly not the goal, and it is not even the kind of thing that can be earned.  That is strictly a matter of Grace.  But, more than that, it does not look like moral systems other than utilitarianism worry about that, either. Look at Kantianism. I think it’s going to be difficult to find a notion of human happiness in the Categorical Imperative. In fact, Kant is clear that we can never be sure if an action that results in happiness is moral, even if it coincides with the Categorical Imperative! That’s just because what makes an action moral is that it is done out of respect for the moral law, and if something makes you happy, you might be doing it for the wrong reason. So, there, not only is happiness not the point, it actually gets in the way of moral reasoning.

He makes the same sort of mistake in his further comments regarding the responses to his talk. He writes,

Imagine some genius comes forward and says, “I have found a source of value/morality that has absolutely nothing to do with the (actual or potential) experience of conscious beings.” Take a moment to think about what this claim actually means. Here’s the problem: whatever this person has found cannot, by definition, be of interest to anyone (in this life or in any other). Put this thing in a box, and what you have in that box is—again, by definition—the least interesting thing in the universe.

Kantian morality “has absolutely nothing to do with the (actual or potential) experience of conscious beings.”  Kant’s morality is about adherence to a moral law for the sake of the law.  Outcomes and experiences have nothing to do with it at all.  They never factor into what makes something moral or immoral.  And yet, contrary to Harris’ assertion, people are and have been intensely interested in Kant for 200 years. Again, it just turns out that his intuitions here don’t represent everyone else’s.  While such a system might be uninteresting to Harris, such systems can clearly hold interest for lots and lots of other people.

Sean Carroll wrote one of the early responses to Harris’ talk.  Harris takes issue with this quote:

But what if I believe that the highest moral good is to be found in the autonomy of the individual, while you believe that the highest good is to maximize the utility of some societal group? What are the data we can point to in order to adjudicate this disagreement? We might use empirical means to measure whether one preference or the other leads to systems that give people more successful lives on some particular scale — but that’s presuming the answer, not deriving it. Who decides what is a successful life? It’s ultimately a personal choice, not an objective truth to be found simply by looking closely at the world. How are we to balance individual rights against the collective good? You can do all the experiments you like and never find an answer to that question.

How does Harris respond to this?

Again, we see the confusion between no answers in practice and no answers in principle. The fact that it could be difficult or impossible to know exactly how to maximize human wellbeing, does not mean that there are no right or wrong ways to do this—nor does it mean that we cannot exclude certain answers as obviously bad. The fact that it might be difficult to decide exactly how to balance individual rights against collective good, or that there might be a thousand equivalent ways of doing this, does not mean that we must hesitate to condemn the morality of the Taliban, or the Nazis, or the Ku Klux Klan—not just personally, but from the point of view of science.

But this seems to miss Carroll’s point. He was not talking about there being no answer in practice. He was saying that there is no answer in principle to this kind of question that is not arbitrary and based on tastes. And no number of people with similar tastes will ever make it not merely tastes.

The point here is that if you do not already accept happiness as a value, then there seems to be no argument for why one should promote happiness.  The kind of empirical data that Harris can offer assumes the value of happiness, but it cannot demonstrate that a person who is made happy by strange things or motivated by some value other than happiness has made some observational or cognitive error.   This is a problem for Harris because he needs to assume objective value (i.e. the objective value of happiness) in order to make the claim that science is the foundation for objective morality.  But, he can’t make that assumption!  In the end, he wants to say that science can tell us what is and is not moral because it can tell us what makes people more or less happy.  Even granting the latter is true, and I think one could argue that it is reasonable to be skeptical of such a claim, it still cannot tell us what we should do unless we already value happiness.  As such, it just does not look like Harris’ claim that science can tell us what is and is not objectively moral can hold up under any close scrutiny.

I will write a part two to this in the next couple of days where I will examine further the notion of values as tastes, moral relativism, and how I think Harris has gone wrong in mistaking prudential imperatives for moral imperatives.

14 Responses to “Sam Harris, Science, and Morality”

  1. James Gray Says:

    I gather that Sam Harris says lots of little things he shouldn’t say, but the main idea that science can have moral implications seems reasonable. Kant said that right and wrong are determined by what rational people can will universally. That sounds like an empirical fact.

    Almost everyone has some utilitarian intuitions. Insofar as we all want what is best for us, we can use science to help us out. Scientists do have quite a bit to say about suffering, so it seems plausible that science can help us solve such questions in principle.

    It is common to define morality as a concern for human well being. That might not be entirely correct, but I think we would have a good reason to reject Kantianism if it did horrible things to people.

    Harris might agree that morality has to do with our preferences, but that’s a meta-ethical question that he does not currently have to argue for. He wants to change how we talk about morality to make it more interesting than merely what each person feels to be good.

    What if morality is just based on our preferences? Then people who talk about morality might just be talking about how to help people and it would be up to each of us to decide to actually do what is necessary to help others.

    • Jim Says:

      Did you watch the video? Did you read his commentary of that video that I linked? It seems you have not, and you really should. I do not think you would ask the questions you do if you actually watched and read Harris.
      I never said science can’t have moral implications, but that’s not Harris’ point, anyway. His point is that science can tell us what is and is not moral.
      It is not an empirical fact “that right and wrong are determined by what rational people can will universally.” What data points could you use to demonstrate that? Further, were this the case, it would contradict Harris explicitly. I get the feeling that you mean something else here, but I am uncertain of exactly what that is. Perhaps you mean that it is an empirical fact what people can will universally. That’s not what Kant thought, however, and, further, such is explicitly contrary to what Harris thinks, so it is irrelevant to this discussion.
      I never said that science could not, in principle, have anything to say about human suffering.
      It is not “common to define morality as a concern for human well being.” That is a utilitarian view, and utilitarianism is both fairly new and not the mainstream position as articulated by theists. Lots of people claim to use God’s Will (Christian, Jew, and Muslim) as their moral law, claiming morality is determined by such. The holy books of all these religions are full of that having horrible consequences for people, yet lots of people still hold that view, even going so far as to say that no one can be moral unless they follow God’s Law.
      Harris is absolutely arguing for a meta-ethical position. I strongly urge you to watch his video and read his commentary.
      As to your last question, I don’t get its relevance. I mean, ok, now you’re disagreeing with Harris’ explicitly stated position. You’re on my side.

  2. James Gray Says:

    Did you watch the video? Did you read his commentary of that video that I linked? It seems you have not, and you really should. I do not think you would ask the questions you do if you actually watched and read Harris.

    Yes I did. I admitted he said many things he shouldn’t. I took his main point to be to change how we talk about ethics. How would I know that if I didn’t watch it?

    I never said science can’t have moral implications, but that’s not Harris’ point, anyway. His point is that science can tell us what is and is not moral.

    Yes, it can tell us what is and is not moral insofar as what is moral is to benefit people. He doesn’t say that “utilitarianism is true because of scientific facts” does he? In other words, he doesn’t seem to talk about how to decide if Kantianism or utilitarianism is right.

    It is not an empirical fact “that right and wrong are determined by what rational people can will universally.”

    You read what I said differently than I intended. If right and wrong are determined by Kantian theory, then we can use empirical facts about what rational people can will universally to decide what is right or wrong.

    You seem to be reading everything in meta-ethical terms, but meta-ethics is a very strange thing for people to talk about. If Harris is really saying something meta-ethical, then 99% of people will be confused by what he is saying because pretty much no one understands meta-ethics. I highly doubt Harris has even spent much time studying meta-ethics.

    Further, were this the case, it would contradict Harris explicitly. I get the feeling that you mean something else here, but I am uncertain of exactly what that is. Perhaps you mean that it is an empirical fact what people can will universally. That’s not what Kant thought, however, and, further, such is explicitly contrary to what Harris thinks, so it is irrelevant to this discussion.

    I don’t think we have to agree to 100% of what Harris believes to agree to the simple fact that I do see as relevant to the conversation: Do empirical facts have moral relevance?

    I never said that science could not, in principle, have anything to say about human suffering.

    I was making a simple point that I think this is his main point and his main point could be true even if he says many other false things.

    It is not “common to define morality as a concern for human well being.”

    This is an empirical claim. You disagree, but we could conduct a study to find out.

    That is a utilitarian view, and utilitarianism is both fairly new and not the mainstream position as articulated by theists.

    Even theists tend to think we should do things to help people. What moral system really disagrees? Utilitarianism is controversial because it is entirely consequentialist, but almost everyone agrees that (all things equal) benefiting people is good and harming people is bad.

    Lots of people claim to use God’s Will (Christian, Jew, and Muslim) as their moral law, claiming morality is determined by such. The holy books of all these religions are full of that having horrible consequences for people, yet lots of people still hold that view, even going so far as to say that no one can be moral unless they follow God’s Law.

    Do they agree that their system has horrible consequences?

    Harris is absolutely arguing for a meta-ethical position. I strongly urge you to watch his video and read his commentary.

    I said that he doesn’t have to argue a meta-ethical position to get his main point across. You disagree about what you think his main point is.

    However, there are anti-realist views that still seem consistent with his view that people can be right or wrong about “right and wrong.” (For example, right and wrong can depend on what ideal agents would prefer.)

    Additionally, he said the following:

    I believe that we can know, through reason alone, that consciousness is the only intelligible domain of value. What’s the alternative? Imagine some genius comes forward and says, “I have found a source of value/morality that has absolutely nothing to do with the (actual or potential) experience of conscious beings.” Take a moment to think about what this claim actually means. Here’s the problem: whatever this person has found cannot, by definition, be of interest to anyone (in this life or in any other). Put this thing in a box, and what you have in that box is—again, by definition—the least interesting thing in the universe.

    The view that what has value must be of interest to us is a position that many anti-realists agree with.

    • Jim Says:

      Harris is explicitly a moral realist, and I think you’ve wholly misunderstood him.
      He wrote, “My claim is that there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics…” He explicitly thinks moral questions have real, empirical answers.
      He said, “there is no notion, no version, of human morality and human values that I’ve ever come across that is not, at some point, reducible to a concern about conscious experience and its possible changes.” That’s simply wrong, as I noted in my post.

      Yes, it can tell us what is and is not moral insofar as what is moral is to benefit people. He doesn’t say that “utilitarianism is true because of scientific facts” does he? In other words, he doesn’t seem to talk about how to decide if Kantianism or utilitarianism is right.

      Yes, he does, because he says that all moral systems are concerned with happiness, ruling out Kantianism along with any other system that is not grounded in happiness. Further, your quote at the end also shows that he dismisses Kantianism, as I pointed out in my post.

      If right and wrong are determined by Kantian theory, then we can use empirical facts about what rational people can will universally to decide what is right or wrong.

      That’s not Kant’s view at all. His view is that we know what is moral by reason, and our reason is what determines the empirical world, not the other way around. The empirical cannot determine the rational, and morality is rational, so no empirical data can determine what is moral. For Kant, only reason can tell us that.
      I think Harris is clear that he is making meta-ethical claims, and so are many other people. Harris says that the is/ought distinction is incorrect, and that the is can tell us the ought. The empirical world, then, is the meta-ethical justification for a morality based on happiness. I just don’t see how you can get around that.

      This is an empirical claim. You disagree, but we could conduct a study to find out.

      Or we could just read the vast amount of theology available to us and look at the articles of faith proclaimed by various religious organizations to which members must agree in order to attain membership. It’s all right there.
      Even theists tend to think we should do things to help people. What moral system really disagrees? Utilitarianism is controversial because it is entirely consequentialist, but almost everyone agrees that (all things equal) benefiting people is good and harming people is bad.
      Most theists think they should help people because God commanded it. That’s the justification, not the fact that it is helpful. And, again, there is nothing in any deontological position that says anything remotely close to the assertion “benefiting people is good and harming people is bad.” Those systems hold that being moral is following the law for the sake of the law. No one’s happiness is relevant in any way as a justification for what is and is not moral.

      Do they agree that their system has horrible consequences?

      I think they would agree that the systems increased suffering dramatically. It’s pretty hard to say that bashing in babies’ skulls is good for their well-being. And Harris agrees. That’s his whole point in saying that the laws in Islamic countries are bad, because they don’t increase well-being, and this is obvious.
      Again, Harris is explicitly a moral realist. He says so, and he says that science can tell us what is moral in the same way it can tell us the number of electrons of some element. I think you’ve misunderstood his position.

      • James Gray Says:

        Harris is explicitly a moral realist, and I think you’ve wholly misunderstood him.

        I agreed he is probably a moral realist.

        He wrote, “My claim is that there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics…” He explicitly thinks moral questions have real, empirical answers.

        That could be taken to mean “I am arguing for moral realism” but it could also mean “We can find out what benefit and harms people through science,” which is what I think he is saying.

        He said, “there is no notion, no version, of human morality and human values that I’ve ever come across that is not, at some point, reducible to a concern about conscious experience and its possible changes.” That’s simply wrong, as I noted in my post.

        I agree.

        Yes, he does, because he says that all moral systems are concerned with happiness, ruling out Kantianism along with any other system that is not grounded in happiness.

        Being concerned with happiness is not the same thing as being grounded on happiness. All he needs to prove that science has relevance is that happiness is a component of morality, i.e., that we believe that benefits (such as happiness) are good.

        Further, your quote at the end also shows that he dismisses Kantianism, as I pointed out in my post.

        Dismissing Kantianism doesn’t mean that science enables us to dismiss Kantianism. I take that to be part of his personal viewpoint rather than the substance of his argument.

        If right and wrong are determined by Kantian theory, then we can use empirical facts about what rational people can will universally to decide what is right or wrong.
        That’s not Kant’s view at all. His view is that we know what is moral by reason, and our reason is what determines the empirical world, not the other way around.

        I don’t know if that is quite right. Kant did not see himself as an idealist.

        The empirical cannot determine the rational, and morality is rational, so no empirical data can determine what is moral. For Kant, only reason can tell us that.

        I’m not convinced that a Kantian must be totally divorced from the empirical world. I think that empirical facts are relevant to what rational people can universalize. For example, to universalize murder would be to universalize an empirical situation.

        Harris does not talk about Kantianism, so I don’t know how he views it.

        I think Harris is clear that he is making meta-ethical claims, and so are many other people. Harris says that the is/ought distinction is incorrect, and that the is can tell us the ought. The empirical world, then, is the meta-ethical justification for a morality based on happiness. I just don’t see how you can get around that.

        By admitting that happiness can have relevance to non-consequentialist theories. It certainly has implications for Aristotle, the Stoics, Epicurus, etc. Happiness can have intrinsic value worth consideration even for non-consequentialists.

        Or we could just read the vast amount of theology available to us and look at the articles of faith proclaimed by various religious organizations to which members must agree in order to attain membership. It’s all right there.

        What an organization defines as morality is not necessarily how the members of the organization define morality.

        Most theists think they should help people because God commanded it. That’s the justification, not the fact that it is helpful. And, again, there is nothing in any deontological position that says anything remotely close to the assertion “benefiting people is good and harming people is bad.”

        Yes, deontology can admit happiness is good. Even Kant said the greatest good is for everyone to be virtuous and happy. And part of “willing” happiness universally is to value happiness.

        Our choices aren’t consequentialism or “consequences don’t matter at all.” Almost everyone agrees that consequences do matter. Some people just think that’s all that does matter.

        Not all theists are divine command theorists either.

        I think they would agree that the systems increased suffering dramatically. It’s pretty hard to say that bashing in babies’ skulls is good for their well-being. And Harris agrees. That’s his whole point in saying that the laws in Islamic countries are bad, because they don’t increase well-being, and this is obvious.

        I think the Islamic people who agree to such things think that something bad must happen for the greater good. They think their religion is beneficial to people. Some theists admit that we might have to suffer in this world because the next world is the one that really matters. It might be necessary to harm people if it can “save souls.”

        Nietzsche said the Spanish Inquisition was perfectly coherent with this view.

        Again, Harris is explicitly a moral realist. He says so, and he says that science can tell us what is moral in the same way it can tell us the number of electrons of some element. I think you’ve misunderstood his position.

        I already admitted that he might be a realist, but that isn’t what I take to be his main point. He talked mostly about how benefits and harms could be discovered empirically. He didn’t say we could find out whether utilitarianism or Kantianism is true through science. Where does he tell us how to find out which moral theories are true through science? Where does he even say we can find out what has intrinsic value through science?

        There is a naturalistic position in meta-ethics that claims that all moral and meta-ethical beliefs should be justified in naturalistic terms, which could be his ultimate belief, but there’s no way he could explain this position in a 20 minute talk. And there’s even less reason to think he could argue for such a position in that amount of time.

        • Jim Says:

          I don’t know what to say to you about the deontology stuff. I have little desire to get into some debate about whether or not following the law for its own sake can allow for doing something because it leads to happiness as this seems clearly mistaken. I’ve made that point; I feel no need to repeat myself. That applies to Kant as well as anyone who claims to get their morals from a divine source.

          That could be taken to mean “I am arguing for moral realism” but it could also mean “We can find out what benefit and harms people through science,” which is what I think he is saying.

          That is the kind of “banal” claim that Harris says he is not making. He wrote, “Rather I was suggesting that science can, in principle, help us understand what we should do and should want…” That’s what he said, and it’s clear from everything else he wrote that that is what he meant. That science can tell us what increases well-being is so trite as to be pointless to debate. I have yet to see anyone disagree with that. But such could, at best, give us prudential imperatives, not moral ones. I plan to address this in a follow-up to this post as I mentioned at the end. Regardless, the mere fact that science can tell us what increases well-being in no way suggests what Harris says, that science can tell us what we should do. Harris is just wrong, there.

  3. James Gray Says:

    On second thought he could say quite a bit about meta-ethics and people would understand, but I don’t think he is saying much that is very “deep” or highly significant about meta-ethics.

    For example, the fact that there are moral truths is a meta-ethical belief, but even anti-realists can agree that there are moral truths.

    I suspect that Harris is a moral realist, but I don’t think what I take to be his main point requires that we be one.

  4. James Gray Says:

    I don’t know what to say to you about the deontology stuff. I have little desire to get into some debate about whether or not following the law for its own sake can allow for doing something because it leads to happiness as this seems clearly mistaken. I’ve made that point; I feel no need to repeat myself. That applies to Kant as well as anyone who claims to get their morals from a divine source.

    I never said Kant would say we should do something “because it leads to happiness.” Happiness has relevance despite not being the justification.

    The fact that understanding science “helps us” understand happiness is why he says science can “help us” understand right and wrong rather than “tells us exactly what right and wrong is based on some simple formula.”

    That is the kind of “banal” claim that Harris says he is not making. He wrote, “Rather I was suggesting that science can, in principle, help us understand what we should do and should want…”

    That’s what he says and I read it as a non-meta-ethical thesis. I should want to make people happy and science can tell me how.

    That science can tell us what increases well-being is so trite as to be pointless to debate.

    I think he wants this trite point to help moral debate. He has a 20 minute talk basically about the fact that moral right and wrong aren’t just a matter of taste. That’s all that should be expected in 20 minutes. Moral criticism is taken to be taboo: “everyone has a right to their own opinion” and “morality is just a matter of taste.”

    He wrote a book about how religion should be held to intellectual standards. Why? Because it’s taboo to criticize other people’s religion. I believe that he wants to say this is also true for morality. It isn’t “above criticism.”

    I have yet to see anyone disagree with that. But such could, at best, give us prudential imperatives, not moral ones.

    There are moral implications other than imperatives. I don’t think he is making a naturalistic fallacy saying, “this is healthy, therefore it’s right.” There is more going on than that.

    I plan to address this in a follow-up to this post as I mentioned at the end. Regardless, the mere fact that science can tell us what increases well-being in no way suggests what Harris says, that science can tell us what we should do. Harris is just wrong, there.

    It can “help us” know right from wrong by knowing the empirical implications to moral beliefs. Additionally, certain assumptions (Utilitarianism) would make science especially relevant.

  5. CW Says:

    I feel like there’s a lot going on here, that I can’t quite appreciate. I feel like I’m too busy reading the libretto and not watching the opera.

    I am probably missing the point, but when Sam Harris asserts that “science can answer moral questions,” my response is “really? can science test this empirically?” It seems to me that the arguments Harris makes are either confirmation bias, correlations, or retrofitting after the fact.

    I really like Sean Carroll’s line that states: “The project of moral philosophy is to make sense of our preferences, to try to make them logically consistent, to reconcile them with the preferences of others and the realities of our environments, and to discover how to fulfill them most efficiently. Science can be extremely helpful, even crucial, in that task.”

    I truly believe that morality is basically a set of preferences that a majority of a population holds at a given time.

    • Jim Says:

      I don’t think you’ve missed anything. Harris is just opposed to the sort of relativism your last sentence endorses. If that’s true, then he thinks we have no reason to say that people shouldn’t fly planes into buildings, eat babies, or whatever. For that reason, he wants an objective ground for morality, and he thinks science is it. Unfortunately, simply wishing something does not make it so.

  6. You Can’t Derive Ought from Is | Cosmic Variance | Discover Magazine Says:

    […] response, and his FAQ. On the other side, see Fionn’s comment at Project Reason, Jim at Apple Eaters, and Joshua […]

  7. Sam Harris is unscientific « Praj's Blog Says:

    […] Carrol (see here, with follow-up here and here) and Massimo Pigliucci.  Also check out this and this, and follow the endless links […]

  8. A Reaction to Sam Harris’ Scientific Morality | Harmonist Says:

    […] here, and here. And a further follow-up from Sean Carroll. Other criticisms of Harris’ talk: Apple Eaters, Josh Rosenau. AKPC_IDS += […]

