From Condorcet and Comte down to their latter-day disciples like Sam Harris and Michael Shermer, rationalists have dreamed of turning ethics into a science. If only ethics could be turned into a quantifiable, data-driven exercise, then knowing the right thing to do in any given circumstance would be a simple matter of plugging objective numerical values into a mathematical formula, a technique that could be mastered and used by anyone, with none of this primitive, inefficient, peasant superstition about “wisdom,” which can only be gradually acquired over time, through trial and error, and by listening to boring old elders and their interminable stories.
As it happens, though, ethics is more like an exclusive nightclub named Dunbar’s Number, guarded by glowering, musclebound bouncers. “The right thing to do” involves flesh-and-blood people in specific relationships based in particular contexts, not abstract people in an abstract world. There is no a priori answer to every moral dilemma, unless you’re a believer in predestination or absolute determinism.
Let’s stare in amazement as Adam Waytz attempts to square this circle:
In fact, there is a terrible irony in the assumption that we can ever transcend our parochial tendencies entirely. Social scientists have found that in-group love and out-group hate originate from the same neurobiological basis, are mutually reinforcing, and co-evolved—because loyalty to the in-group provided a survival advantage by helping our ancestors to combat a threatening out-group. That means that, in principle, if we eliminate out-group hate completely, we may also undermine in-group love. Empathy is a zero-sum game.

Absolute universalism, in which we feel compassion for every individual on Earth, is psychologically impossible. Ignoring this fact carries a heavy cost: We become paralyzed by the unachievable demands we place on ourselves. We can see this in our public discourse today. Discussions of empathy fluctuate between worrying that people don’t empathize enough and fretting that they empathize too much with the wrong people. These criticisms both come from the sense that we have an infinite capacity to empathize, and that it is our fault if we fail to use it.

People do care, newspaper editorialists and social-media commenters granted. But they care inconsistently: grieving for victims of Brussels’ recent attacks and ignoring Yemen’s recent bombing victims; expressing outrage over ISIS rather than the much deadlier Boko Haram; mourning the death of Cecil the Lion in Zimbabwe while overlooking countless human murder victims. There are far worthier tragedies, they wrote, than the ones that attract the most public empathy. Almost any attempt to draw attention to some terrible event in the world elicits these complaints, as though misallocated empathy were more consequential than the terrible event itself. If we recognized that we have a limited quantity of empathy to begin with, it would help to cure some of the acrimony and self-flagellation of these discussions. The truth is that, just as even the most determined athlete cannot overcome the limits of the human body, so too we cannot escape the limits of our moral capabilities.

We must begin with a realistic assessment of what those limits are, and then construct a scientific way of choosing which values matter most to us.
That means we need to abandon an idealized cultural sensitivity that gives all moral values equal importance. We must instead focus our limited moral resources on a few values, and make tough choices about which ones are more important than others. Collectively, we must decide that these actions affect human happiness more than those actions, and therefore the first set must be deemed more moral than the second set.
Once we abandon the idea of universal empathy, it becomes clear that we need to build a quantitative moral calculus to help us choose when to extend our empathy. Empathy, by its very nature, seems unquantifiable, but behavioral scientists have developed techniques to turn people’s vague instincts into hard numbers.

Basing our moral criteria on maximizing happiness is not simply a philosophical choice, but rather a scientifically motivated one: Empirical data confirm that happiness improves physical health, enhancing immune function and reducing stress, both of which contribute to longevity. Shouldn’t our moral choice be the one that maximizes our collective well-being? These data sets can give us moral “prostheses,” letting us evaluate different values side-by-side—and helping us to discard those lesser values that obstruct more meaningful ones. These approaches can help us create a universal moral code—something that can serve as a moral guide in all cases, even if we are not able to actually apply it to all people all the time.
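For concreteness, here is a toy sketch of what such a “quantitative moral calculus” amounts to in practice: a weighted sum over estimated happiness effects. Every group weight and happiness number below is invented purely for illustration; nothing like them appears in any actual data set.

```python
# A toy "felicific calculus" of the kind Waytz proposes: score each
# candidate action by a weighted sum of its estimated happiness effects.
# All weights and estimates below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    happiness_effects: dict[str, float]  # group -> estimated change in well-being

# Hypothetical weights for "collective well-being" per affected group.
# Who decided the out-group counts for half, and on what evidence?
GROUP_WEIGHTS = {"in_group": 1.0, "out_group": 0.5}

def moral_score(action: Action) -> float:
    """Collapse weighted happiness effects into a single 'moral' number."""
    return sum(GROUP_WEIGHTS.get(group, 0.0) * delta
               for group, delta in action.happiness_effects.items())

actions = [
    Action("policy_a", {"in_group": 2.0, "out_group": -1.0}),
    Action("policy_b", {"in_group": 0.5, "out_group": 1.5}),
]

# The calculus dutifully ranks whatever it is fed; the contested part is
# everything smuggled into GROUP_WEIGHTS and the happiness estimates.
for a in sorted(actions, key=moral_score, reverse=True):
    print(f"{a.name}: {moral_score(a):+.2f}")
```

The arithmetic is trivial and the ranking comes out crisp; all of the actual moral work hides in the weights and the estimates, which is rather the point.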
As Arthur said via email:
My take-away is that the solution to our moral problems is to be happy. The only way to solve moral problems in a realistic way is to apply a data-driven hedonistic calculus — which is what a lot of amoral people do, anyway. There is something of an antinomy between morality as an absolute — “It’s the right thing to do, come hell or high water” — and morality as relativistic, based on trade-offs between consequences of this or that course of action. The antinomy between these two concepts of morality is itself a moral one. But it is also a philosophical question, and the problem with so many social scientists is their technocratic hubris. They assume science has solved or soon will solve the problems that philosophy could only speculate about, given that Kant and Plato, e.g., were cluelessly embedded in a primitive stage in history, bereft of the only means of testing philosophical hypotheses: lab testing and data-gathering. But philosophical questions keep coming back to bite them in the ass.
Utilitarian ethics are ruthlessly fixated on practical results — whatever is best for the greatest number of people. The problem with this position is that it is not in itself necessarily moral: it is based on an unexamined assumption that everyone is a reasonable modern Liberal, and that what will make the greatest number of people happy could never be, for example, exterminating the Jews. Utilitarian and Marxist thinking converge here in consensus group-think, collectivist notions of happiness, and disregard or contempt for individual deviations from “the general good.” Both make claims to being scientific. Both are programmatically devoted to humane values such as social justice. And while it is Marxist “dialectical science” (along with Nazi “racial science”) that has produced totalitarian nightmares, there’s potential for a more laid-back dystopia in utilitarian thinking. Or perhaps we are going to end up with a dystopia that combines the best of 1984 with the best of Brave New World.
But who’s to say you can’t engineer efficient empathy-extension? And I’ll be interested to hear how that FBI-vs.-Apple dilemma is solved by neuroscientists and social psychologists. First, of course, they’ll need to poll the People using improved self-reporting techniques; image their brains to measure their anxiety-vs.-emotional security ratios; and use a software algorithm to produce a rigorous break-even analysis. The result will be a democratic (or at least demographic) moral decision, overseen by guess who? An elite cadre of scientists and social engineers. It’s not as if these disinterested people are motivated by any WILL TO POWER.
Where is Nietzsche when we need him?