Jim’s last post on Sam Harris addresses a particular example of a more general problem that I see repeating in the skeptical/scientific community. There seems to be a trend among skeptics to endorse a very naive version of utilitarianism as though it is not merely a theory about moral value but an objective principle similar to empirical theories. This trend is worrisome because many of the people who are endorsing it do not seem to be aware that they are doing this, or worse, they don’t get why this is a problem. For this reason, I’m going to take a few minutes to explain why this is a problem, so none of our skeptical readers will make a similar mistake.
The basic assumption of every utilitarian ethical theory is that happiness (the definition varies, of course) is intrinsically valuable. Insofar as the definition of “intrinsic value” is understood in contrast to “instrumental value,” this observation is not controversial. We do not seek happiness as a means to some other end; we seek it as an end in itself. The value of happiness is also universal in the sense that nearly every person seems to value it. But there is a trick in moving from this accurate description of the intrinsic and universal value of happiness to the objective value of happiness that is necessary in order to make utilitarianism into an empirical moral principle.
Here’s the trick: It’s not really happiness qua happiness* that is intrinsically and universally valuable. It’s my happiness that I pursue as an end in itself, and it’s your happiness that you pursue as an end in itself. Utilitarians want to take the empirical fact that we each value our own happiness and derive a prescriptive imperative from it: “we ought to promote happiness universally.” Unfortunately, it just does not follow that simply because I value my own happiness I ought to promote the happiness of others. In order to make that step, the utilitarian must argue that I value happiness itself (not particular manifestations of it), so that my failure to promote universal happiness constitutes a mistake in my moral reasoning. And this argument fails because it is based upon a ludicrous premise: The overwhelming evidence is that we value human happiness selectively and with huge variation in intensity. I may strongly value the happiness of those I love, somewhat value the happiness of those I know, and slightly prefer the happiness of innocent strangers, but this does not mean that I value happiness independently of who manifests it. If I really valued happiness universally, I would easily relinquish the money I spend on personal comforts and comforts for the people I love, because that money could make so much more of a difference in the happiness of people I do not know who are starving and suffering somewhere else.
Inevitably, when I point out to a naive utilitarian that his theory does not seem to accurately describe his own moral values, let alone those of others, he will respond by saying something along the lines of, “Yes, but if I were a better person it would.” No doubt, utilitarianism is appealing as a moral theory because it discourages selfishness, clannishness, racism, and all other manner of discriminatory practices. But unfortunately, this is irrelevant to its meta-ethical foundation. If utilitarianism were a truly empirical moral principle, then we wouldn’t have to explain away discrepancies between what we actually value and what we ought to value. Since those discrepancies exist, utilitarianism either hasn’t described the world accurately, or it is a moral postulation no more grounded in empirical science than any other theory of ethics. (Or, both. I think it’s both.) Either way, the utilitarians have failed to bridge the gap between actual moral sentiments (“is”s) and prescriptions about the way we ought to feel/act (“ought”s).
Of course, there is another method of bridging the is/ought gap that many utilitarians favor as well. It has the advantage of meaningfully distinguishing between empirical descriptions and practical imperatives, but with one rather unfortunate caveat: It takes out morality altogether. The move is to say that prescriptive language only refers to prudential advice, not moral imperatives. In other words, the utilitarian would say “you ought to promote universal happiness because that will be likely to promote something you do value (a peaceful world, being seen as a good person, cooperation with others, personal fulfillment, etc.).” This move is problematic for two reasons: First, the premise that acting as a utilitarian is likely to promote personal value-satisfaction will frequently be false (there are lots of times in which selfishness, or even hurting others, is the best strategy for promoting personal values), and second, and more importantly, it entirely misses the point. As soon as we move from moral oughts to prudential oughts, utilitarianism goes from being an ostensibly defensible theory of moral foundations to a delusional program of self-help. There is no reason to take advice from utilitarians unless it is moral advice, so the move from morality to prudence is just silly.
All of that being said, I don’t want to give the impression that I have some sort of a personal vendetta against utilitarianism. I don’t think it’s absurd to postulate that happiness qua happiness is intrinsically valuable. It’s a perfectly defensible axiom, but it is not derived from empirical observation. This puts utilitarianism in exactly the same meta-ethical position as every other theory of ethics. You can’t bridge the is/ought gap, and the scientists and skeptics who don’t get this need a philosophy lesson.
*In the interest of clarity, the phrase “x qua x” is used to refer to any thing in the capacity or character of itself. So, “happiness qua happiness” means “happiness as itself,” in contrast to “happiness for some particular person” or “happiness as it is seen by some particular person.”