Choose your preference utilitarianism carefully - part 2 (unpublished)

Part 1 - Part 2

Summary

Note: this part of the essay is mainly a reference for use against specific anti-valence-utilitarian arguments (recalling from Part 1 that 'valence utilitarianism' is a renaming of hedonistic utilitarianism). It doesn't actively attempt to establish valence utilitarianism, except inasmuch as undermining the positive case for its closest rival might do so. You can safely skip to Part 3 if you're looking for a positive argument in favour of VU.

In Part 2, I examine the open access essays I've found that include arguments for PU over VU with reference to the axes in Part 1. I show that none of these arguments attack the underlying logic of VU, and that though they might persuade people to react intuitively against it, the logic they use to do so typically contradicts the authors' claims elsewhere, often within the same essay. We can thus see that in practice, in their search for a less counterintuitive theory than VU, PU advocates have tended almost universally to produce inconsistent alternatives.

Introduction: The critical essays

While putting together Part 1 of this essay, I sought any reasonably well known open access arguments for preference utilitarianism over valence utilitarianism. I found relatively few.1 Here's the whole list:

'Morality and the Theory of Rational Behaviour', by John Harsanyi (section 9, pp56-60)

'Not for the Sake of Pleasure Alone', by Luke Muehlhauser

'Not for the Sake of Happiness Alone', by Eliezer Yudkowsky

The 'Robert Nozick: Political Philosophy' entry in the Internet Encyclopedia of Philosophy, by Dale Murray

'Choose Pain-free Utilitarianism', by Katja Grace

'Debate with Peter Hurford', Post 1, by Alonzo Fyfe

'The Consequentialism FAQ' by Scott Alexander

'Hedonic vs Preference Utilitarianism in the Context of Wireheading', by Jeff Kaufman

'Taking Life: Animals' - from Practical Ethics Third Edition

'Hedonistic vs Preference Utilitarianism', by Brian Tomasik2

One might also include Eliezer Yudkowsky's 'Coherent Extrapolated Volition' on this list, but since that's at least nominally more of a research programme than a defined ethical system, I'll omit it from this series and perhaps return to it in a separate essay.

Between them, these essays cover by far the most common arguments for preference utilitarianism.

My aim in Part 2 is to show that virtually all the popular arguments for PU over VU reduce to a small handful of basic themes, all of which themselves express similar underlying reasoning – and that such arguments typically require inconsistency on the axes I discussed in Part 1 if they're to have any teeth at all.

Rather than analysing each essay in depth, for each critical theme I'll show where I believe it entails inconsistency (or why it's toothless without it). I'll give an example from one essay to illustrate it, then highlight in turn the passages from essays giving the same criticism, minimising the commentary specific to each.

Theme 1.1: Happiness is not what people want

This is probably the most common objection, and in some sense we HUs can't complain we didn't have it coming. Jeremy Bentham's opening remark in the first ever explicitly utilitarian text, An Introduction to the Principles of Morals and Legislation, is

Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do.

At best this is an ambiguous remark, and could reasonably be interpreted as 'pleasure and pain are important because people respectively want/antiwant them'.

Several PU advocates have quite reasonably pounced on the claim in this interpretation. For example, in his above essay, Harsanyi states

[Hedonistic utilitarianism] presupposes a now completely outdated hedonistic psychology. It is by no means obvious that we do all we do in order to attain pleasure and avoid pain. It is at least arguable that in many cases we are more interested in achieving some objective state of affairs than we are interested in our own subjective feelings of pleasure that may result from achieving it.

Whether Bentham actually claimed this causal relationship is moot. It's not necessary to HU/VU, nor is it even sufficient. Any apparent valence utilitarian who predicated his ethical system on happiness because of the belief that happiness was what people wanted would no more be a valence utilitarian than someone who wanted everyone to eat Marmite because he thought they all liked it would be a Marmite utilitarian. He would be a preference utilitarian with bad data.

The suppressed axiom of this objection then, is that 'what people want' is the ultimate standard of ethics – but this is exactly the issue on which PUs and VUs differ!

Understood properly then, this criticism is only a restatement of the conflict in question in prejudicial language. We could easily invert the emotional framing: sometimes we suffer dearly for getting what we want.

Whence the inconsistency?

Let's disregard the above complaint for now, and imagine that we do accept 'what people want' as the ultimate standard of ethics. After all, we can at least conceive that there might be some independent argument that showed that we should do so.

So let's conditionally assume that such an argument is out there, waiting to be advanced – that authors like Harsanyi are right, and that 'what people want' is the ultimate criterion. Now let's revisit the PU axes from Part 1.

Giving people 'what they want' must be equivalent to choosing both F0 on the information axis and R0 on the rationality axis. Anywhere else on either, by definition, would not give people what they actually want.

Eschewing at least one of these options, that is, choosing ¬(F0 & R0)PU, is normally described as something like giving people 'their ideal preference' (see eg Harsanyi below), but this is essentially propaganda. A preference they would have in some different circumstances is not an actual preference. Instead, the PU needs to say this is a preference they should have in some slightly normative sense.

This opens the floodgates. If using such sleights were sufficient for consistency with actual preferences, an HU could as well say that their ideal preference was one which maximised happiness. Come to that, a proponent of any ethical system could define ideal preferences to adhere to their preferred norms and still claim to be a PU.

A PU might respond that there's something importantly different about an agent extrapolating desires from an existing person compared to reinterpreting his desires according to her own ethical values.

I'll argue in Part 3 that the distinction can't have any such significance (and, as phrased here, has again hidden the issue in PU-suited terminology), but for now I'll only observe that this is not the actual argument advanced in any PU essay I've seen. None of the essays in this section claim that 'extrapolation' is better than 'interpretation', only that respecting (actual) desires is better than not.

If readers aren't convinced by the argument I advance against this in Part 3, they might well try to argue for such a claim in response. Perhaps such an argument can be successful if I've made a mistake - but it doesn't seem to exist in the reasons people have given thus far for PU.

An extra ambiguity

So 'what they want' entails F0R0PU. That's not obviously all it entails – what about the creation axis? This is much less clear, since in theory this axis partially defines a preference, but someone might object to a PU's position either way.

For instance, an agent might say to a C1PU 'hey, just because you don't think I'm perfectly pursuing my stated goals doesn't mean I don't want them.' Conversely, we can imagine someone wishing they had the courage to throw themselves off a building being given a helping push by a C0PU, and feeling somewhat peeved as they fell. 'This isn't what I actually wanted!' they might understandably complain (assuming it was a long enough drop).

But let's disregard the creation axis issue, since it's one where inconsistency is harder to show. The clearer problem in the essays below is with F0R0PU.3

Not all of them entail contradictions, but all of them reject HU on the 'what people want' ground, so lock themselves into F0R0PU, whose conclusions I suspect none of the authors would always endorse. In most cases, they explicitly reject it elsewhere, if not in the same essay in which they criticise HU.

Let's look at the relevant pieces, starting with Harsanyi, ibid, across the page from the above quote.

John Harsanyi's 'Morality and the Theory of Rational Behaviour'

Any sensible ethical theory must make a distinction between rational wants and irrational wants, or between rational preferences and irrational preferences. It would be absurd to assert that we have the same moral obligation to help other people in satisfying their utterly unreasonable wants as we have to help them in satisfying their very reasonable desires. ... All we have to do is distinguish between a person's manifest preferences and his true preferences. His manifest preferences are his actual preferences as manifested by his observed behaviour, including preferences possibly based on erroneous factual beliefs, or on careless logical analysis, or on strong emotions that at the moment greatly hinder rational choice. In contrast, a person's true preferences are the preferences he would have if he had all the relevant factual information, always reasoned with the greatest possible care, and were in a state of mind most conducive to rational choice. Given this distinction, a person's rational wants are those consistent with his true preferences and, therefore, consistent with all the relevant factual information and with the best possible logical analysis of this information.

Ultimately then, he advocates F1R1PU here, and relies upon F0R0PU earlier, so we must dismiss either his criticism of HU, his form of PU, or both.

Scott Alexander's 'Consequentialism FAQ'

From Alexander's piece:

What's wrong with Jeremy Bentham's idea of utilitarianism?

It suggests that drugging people on opium against their will and having them spend the rest of their lives forcibly blissed out in a tiny room would be a great thing to do, and that in fact not doing this is immoral. After all, it maximizes pleasure very effectively.

Alexander's position isn't quite clear. He initially seems to support 'preference utilitarianism', which he gives no reason to suppose isn't F0R0PU, but then he describes 'different forms of utilitarianism that try to get it more exactly right' (emphasis his). To wit,

Coherent extrapolated volition utilitarianism is especially interesting; it says that instead of using actual preferences, we should use ideal preferences - what your preferences would be if you were smarter and had achieved more reflective equilibrium - and that instead of having to calculate each person's preference individually, we should abstract them into an ideal set of preferences for all human beings. This would be an optimal moral system if it were possible, but the philosophical and computational challenges are immense.

I would guess that by 'philosophical challenges', Alexander has in mind the sort of difficulties Muehlhauser discusses in his post 'Meta-ethics: Railton's Moral Reductionism (part 2)'. I'll look at what I think is the kernel of these sorts of challenges in part 3.

Meanwhile, such challenges are independent of what I'm arguing here: the basic inconsistency of coherent extrapolated volition utilitarianism with his above anchoring to F0R0PU.

Luke Muehlhauser's 'Not for the Sake of Pleasure Alone'

In the last decade, neuroscience has confirmed what intuition could only suggest: that we desire more than pleasure. We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.

As in the above examples, this presumes F0R0PU. In this essay Muehlhauser doesn't explicitly advance any claim that we should satisfy non-actual desires - though the final sentence (above) reads as though he hasn't recognised that he's assuming part of his conclusion.

Elsewhere though, he guardedly adopts a 'parliamentary model' of 'coherent extrapolated volition', according to which 'we determine the personal CEV of an agent by simulating multiple versions of them, extrapolated from various starting times and along different developmental paths.'

As in the examples above, inasmuch as this model involves doing unto people what they don't actually want done to them, it's inconsistent with his F0R0PU.

Yudkowsky's 'Not for the Sake of Happiness Alone'

For all value to be reducible to happiness, it's not enough to show that happiness is involved in most of our decisions ... it must be the only consequent.

This passage implicitly requires that 'our decisions' are the ultimate source of value. This seems equivalent to a C1PU view of 'what we want'.

If it is something different, just the same form of counter-criticism applies:

a) he's showing at best that whatever element of our decisions he's advocating as a source of value is different from HU, not that HU errs, and b) he's implicitly anchored this source of value to our actual decisions, ie F0R0(C1SX)PU (perhaps further specified on axes I haven't discussed).4

Like Muehlhauser, Yudkowsky doesn't offer anything to directly contradict himself here. But like Muehlhauser, he argues elsewhere for coherent extrapolated volition, whose precepts contradict his implicit premise above.

Theme 1.2: (An overly restrictive form of) 'happiness' is not what people want

Katja Grace's 'Choose Pain-free Utilitarianism'

This very short essay is the only permalinked example I know of a criticism I nonetheless frequently read and hear.

Pain can have no [normative] importance, as empirically it can be decomposed into a sensation and a desire to not have the sensation. This is demonstrated by the medical condition pain asymbolia and by the effects of morphine for example. In both cases people say that they can still feel the sensation of the pain they had, but they no longer care about it.

Unlike the critics above, Grace doesn't commit to a contradiction here or across any other writing that I know of.

The problem here is equivocation – the word 'pain', both in its dictionary definition and to utilitarians, has two distinct meanings: either i) 'the sort of sensation associated with physical damage' or ii) 'any aversive phenomenal experience' (sometimes called 'suffering' to distinguish from the first meaning, though this isn't perfectly clear either). Grace's remark relates to pain-i. Valence (and therefore hedonistic) utilitarianism is ultimately concerned with only pain-ii (and its complement, pleasure-ii) - ie, with valence.

As well as aversive physical pain, pain-ii by definition includes the whole set of possible negative experiences (many of which pain-i excludes): sleep deprivation, hunger, thirst, nausea, depression, anxiety, itching, and embarrassment, to name just a few that humans are familiar with, plus any analogously negative experiences that humans happen not to have evolved.

But ii) is not a strict superset of i). Non-aversive physical pain - ie the phenomenal 'pain' experience of someone with pain asymbolia, and, come to that, the phenomenal experience of sexual masochism - is not a form of pain-ii.

On the other hand, any experience that sexual masochists or people with pain asymbolia find aversive is - by definition - a form of pain-ii.

So Grace's remarks, properly parsed, are wholly consistent with VU. Whether you call pain-ii 'suffering' or 'thwarted preference' is largely a linguistic dispute, or at least one that's currently circular: we can't easily define suffering except as phenomenal experience we want to avoid, but we can't conceive of that kind of 'wanting' without imagining negative phenomenal experience.

Consequently you could describe forms of utilitarianism that concern themselves exclusively with pain-ii as either 'HU' or 'PU'. But, for the reasons I gave in Part 1, I think the former is better - and calling it 'valence utilitarianism' to avoid precisely this source of confusion is better still.

Theme 2.1: HU condemns people to dystopian wireheading or utilitronium futures from which PU would spare them

This claim comprises two subclaims:

a) Valencist authoritarianism - HU agents would force an unsuspecting or even resistant population to be happy,

and b) Preferencist libertarianism - PU agents would respect5 the population's explicit or tacit wishes, leading to very different outcomes.

As such scenarios are typically given, valencist authoritarianism is a dubious proposition for numerous reasons. In order for it to be true, we need to steelman the scenario in a few ways. I'll do so, and then look at why, when valencist authoritarianism is true, preferencist libertarianism will almost always be false.

A typical example is from Alexander's essay above, continuing the initial passage I quoted:

It suggests that drugging people on opium against their will and having them spend the rest of their lives forcibly blissed out in a tiny room would be a great thing to do, and that in fact not doing this is immoral. After all, it maximizes pleasure very effectively.

By extension, any society that truly believed in [valence utilitarianism] would end out developing a superdrug, and spending all of their time high while robots did the essential maintenance work of feeding, hydrating, and drugging the populace. This seems like an ignoble end for human society. And even if on further reflection I would find it pleasant, it seems wrong to inflict it on everyone else without their consent.

It's far from clear that valencist authoritarianism would apply here. The work Alexander's robots are doing is time- and resource-intensive, and quite delicate. If they slightly over- or underfed the blissed out population, they could kill everyone off. If the sentient blissheads could do the work themselves it would allow many more of them to exist, and in greater safety (or to look at it from another perspective, if the robots themselves were some form of sentient blisshead, there'd be little need to have a separate species of couch potato lying around). And at least until the drug - and its users - reached their optimal form, effectively converting the users into utilitronium, it would help if they were also competent to research improved versions, improved manufacturing processes etc.

They could do this with a relatively slight modification to the drug's purported properties. The drug could allow just enough variation in bliss to retain conscious motivation, as in David Pearce's future-manifesto. If so, though, taking it wouldn't thwart or necessarily change anyone's existing preferences, so it's hard to imagine any PU (or anyone other than die-hard religious ascetics) really objecting to this scenario. If they don't object, a) would be false, since no-one would need to be forced into anything.6

We also need to presume that the drug is sufficiently potent that, for better or worse, once it's taken a user will never voluntarily stop taking it. If they did, the drug couldn't be all that blissful, given they're opting for their pre-drug experiences over it. Moreover, they would have to be taking it because doing so genuinely felt better in the long term, not because it was addictive. In the latter case, just as with familiar modern drugs like heroin, no HU would advocate taking them, because the desire to give up a substance coupled with the inability to do so causes suffering, not happiness. And, in this particular scenario, some non-C0PUs - probably any true C1PU - would be committed to advocating the drug where HUs would strongly oppose it.

So to steelman against the above rejections of valencist authoritarianism, we need to be clear that in our scenario i) society has reached a state where the long-term economic optimum actually is non-conscious agents serving helpless drug-users and ii) the drug is reliably so much nicer than normal experience that everyone will choose to remain on it because of the overwhelming happiness from doing so, not a miserable inability to quit.7

Just before I examine the argument against preferencist libertarianism in more detail, I'll observe that proponents of this argument often overstate its significance. If valencist authoritarianism and preferencist libertarianism are both true in any or even all key cases, it establishes only that VU and PU may differ.

Thus the underlying logic of rejecting VU on such grounds must be similar to that in theme 1.1 and so my first response is analogous to my response there: it doesn't discredit VU to show that PU disagrees with it. If a reasonable number of intelligent people still advocated VU while accepting both valencist authoritarianism and preferencist libertarianism, we would need some deeper premise in order to reject it - one which didn't ultimately presume what it was trying to prove.

Whence the inconsistency?

In this case neither of the essays contradicts itself, even implicitly. Nonetheless, if we assume that their authors would give the answer I think any sane person would to a realistic(ish) scenario, we can establish a contradiction.

The scenario in question is from Part 1, but let's lavish expanded detail upon it:

A boy comes up to you outside a newsagent, and asks if you'll buy him 50 boxes of cigarettes (since he's under age). He'll give you the money up-front, and a bit extra for your trouble. You ask why he needs so many, and he says all the other kids in his class want them, but are too afraid to accost strangers, so he can make a decent profit selling them at marked up prices.

This is an unusually ambitious (and forthright) child, but we can reason here as we would in the real world.

If we buy him the cigarettes, a number of children will have access to them who wouldn't otherwise. As a result, they'll satisfy any preferences they have for trying/continuing to smoke cigarettes. Very likely some will get addicted, and later in life regret that they ever took up smoking, especially when they develop lung cancer and other associated maladies.

I assume that no sensible PU would accept the boy's request. That is, they would thwart his immediate preference and, by denying his classmates the chance to buy cigarettes, thwart any desires they had to do so.

In order to justify denying the boy his request, a PU can use one of two approaches that seem functionally equivalent here.8

The first, and surely the simplest, is the same argument that a VU would use: future negative utility. In the short term, we could give the boy and his classmates a small-medium utility boost, but as a foreseeable consequence, in the long term their expected negative utility from smoking would outweigh it (regardless of whether 'utility' is valence or preference-satisfaction).

On this account, we only need to assume a zero or low discount on the discounting axis (one of the axes common to all utilitarianisms), such that the immediate utility gained by buying the cigarettes doesn't have a sufficiently higher multiplier than the later negative utility to make the trade-off worthwhile.

The second is to adopt DPF1PU on the duration axis, or DPFxPU with a sufficiently low decay rate that again, the present utility doesn't outweigh the negative utility of thwarting future past-looking preferences.
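
To put some (entirely made-up) numbers on the first of these approaches, here's a minimal sketch in Python of the discounting comparison. The utility figures, the thirty-year delay and the candidate discount rates are all illustrative assumptions of mine, not anything the essays above commit to; the point is just that with zero or low discounting the later harm dominates, whichever kind of utility it denominates.

```python
# A minimal sketch of the discounting-axis comparison in the cigarette scenario.
# All utility values, the delay, and the discount rates are illustrative assumptions.

def discounted(value, years_ahead, annual_discount=0.0):
    """Weight a (dis)utility occurring `years_ahead` years from now.

    annual_discount = 0.0 means no pure time preference; higher values
    shrink the weight given to more distant (dis)utility.
    """
    return value * (1 - annual_discount) ** years_ahead


def net_utility_of_buying(annual_discount):
    immediate_boost = 10     # satisfied cravings now (assumed)
    later_harm = -200        # addiction, regret, illness decades later (assumed)
    years_until_harm = 30    # assumed delay before the harm bites
    return immediate_boost + discounted(later_harm, years_until_harm, annual_discount)


for rate in (0.0, 0.05, 0.15):
    print(f"annual discount {rate:.2f}: net utility {net_utility_of_buying(rate):+.1f}")

# Only a steep discount rate makes buying the cigarettes come out positive -
# and that holds whether the utilities stand for valence or for
# preference-satisfaction.
```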

The parallel I'm going to draw won't shock anyone, but let's examine the relevant essays:

Alexander's 'Consequentialism FAQ'

Alexander's drug, as he describes it, 'maximises pleasure', so in itself it is neutral on PU values. But we need only tweak the scenario slightly to conceive a drug that both stimulates and fulfils intense desire as well as, or instead of, pleasure.9 Indeed, modern addictive drugs do something like this. In modern drugs, fulfilment tends to decrease as tolerance increases, but in the steel-manned scenario a superdrug need have no such shortcoming.

So why might a PU reject such a drug?

Reason 1: People might have a specific preference against taking it.

Reason 2: People might have several general preferences that would be thwarted, if they take it, by their own sudden loss of interest in them.

Using reason 1 is clearly incompatible with the cigarette scenario above. The drug we're talking about will generate and subsequently fulfil far more and far stronger desires than their initial desire to not take it, so far more PUtility will come from forcing it upon them than not. So if we would override the child's short term desires for their longer term benefit, we would have to do the same here.

One might counter that a child has less claim to autonomy than an adult, or that denying someone something they desire violates fewer rights than inflicting something they don't desire on them, but these are nonutilitarian arguments outside the scope of this essay. The point here is that reason 1 fails to establish the (PU-entailing) preferencist libertarianism hypothesis.

Reason 2 is not relevant to D0PUs (or rather, gives them no further grounds for objecting than HUs have), since as soon as they take the (steel-manned) drug, any such preferences would change.

For PUs who sit elsewhere on the duration axis, reason 2 does offer some objection to the superdrug scenario. It just doesn't offer much of one.

If the drug is everything promised, the preferences it creates will dominate the preferences that preceded it. If they didn't, it would be of no interest to any form of utilitarianism, since a user could take it, gain all the benefits, and continue to pursue the dreams they'd had, and surely no utilitarian would object to that in principle. Or at worst, if the drug inhibited their previous desires and didn't replace them with stronger ones, they'd be able to wean themselves off it – and no well-informed utilitarian would have hoped to make them take it in the first place.

So in order to steel-man the scenario such that valencist authoritarianism is true, we have to assume that post-drug positive utility will overwhelm the negative utility of forcing it upon people. Since we can imagine a drug whose offered 'utility' is or includes preference satisfaction, consistent preference utilitarianism has to advocate exactly the same scenario - and preferencist libertarianism becomes evidently false.

Kaufman's 'Hedonic vs Preference Utilitarianism in the Context of Wireheading'

If what matters is happiness it's nearly unimportant whether someone wants to wirehead [for pleasurable sensation] ... Even if I accept the pleasure of wireheading as legitimate, I find the idea of forcing it upon people over their objections repellant. Maybe there's something essentially important about preferences? Instead of trying to maximize joy and minimize suffering, perhaps I should be trying to best satisfy people's preferences?

The argument here is almost perfectly analogous to the discussion above. Even if we can distinguish phenomenal happiness from phenomenal desire satisfaction, we could theoretically design technology to stimulate the latter.

Theme 2.2: HU condemns people to a false utopia that PU liberates them from

This is similar to the above argument, but takes a slightly different form:

Murray's 'Robert Nozick: Political Philosophy' and Fyfe's first post in his debate with Peter Hurford

These are identical arguments (both refer directly to Nozick's original in Anarchy, State and Utopia) and I include both only for greater comprehensiveness.

Murray:

(Nozick) has us imagine a machine developed by "super duper neuropsychologists" into which one could enter and have any sort of experience she desires. A person's brain could be stimulated so she would think and feel that she was reading a book, writing a great novel, or climbing Mt. Everest. But all of the time the person would simply be floating in a tank with electrodes attached to her head...

Nozick thought we would not enter, concluding that people would follow his intuition that such programmed experiences are not real...

Though this appears to strike a significant blow against classical utilitarianism, it doesn't seem to affect other forms of utilitarianism such as preference utilitarianism. Preference utilitarians could claim that people might not go into the machine not because they care about values other than happiness more, but because they prefer to experience happiness only via some means that involve actively pursuing happiness and not merely experiencing it.

Fyfe puts it more succinctly:

Brain-state theories fail Robert Nozick's experience machine test. Ask parents whether they would prefer to be put into an experience machine that will feed them sensations that their child is healthy and happy - while the child suffers great agony in fact - and the vast majority would refuse. They are not seeking a brain state, they are seeking a world state where their child is, in fact, healthy and happy.

I've left the experience machine until the end since it initially seems distinct from the above wireheading/superdrug examples. The machine is supposed to create a world recognisably similar to the real one, which (unbeknown to us) is a fake. It's populated by computer simulations of people, and is supposed to fulfil all our desires in intimate detail.

Whether (SX..1)PU opposes this much more than HU depends on a few assumptions, but in this scenario we can assume it does. Let's suppose people in the experience machine (EM) will be happy, while unknowingly having their preferences constantly thwarted.

It doesn't matter - because it's an irrelevant scenario. The entire EM scenario is a straw-man that cannot be steel-manned consistently with PU. All that the EM scenario ultimately entails is really bad wireheading.

Unlike a wire-stimulator, the machine has to work with your existing perceptions. Rather than constantly stimulating targeted areas of your brain, it just gives you the normal intensity of preference satisfaction, experienced somewhat more often than usual.

It would also cost far more to design and maintain a machine which had to simulate a massive and believable virtual world than it would to design and maintain one that just shot electricity through the key part of your brain.

So no clear-thinking utilitarian of any kind would ever advocate getting in one, because a far cheaper and higher-utility alternative exists - one which, per the section above, preference utilitarianism advocates almost as strongly as valence utilitarianism does.

Theme 3: Preference utilitarianism more strongly prohibits killing

This argument holds that at least some animals, by virtue of having enough awareness of themselves as an entity, can conceive of themselves existing into the future - and would prefer to do so rather than not. This allows us a stronger condemnation of killing.

This is again an argument I've seen and heard on numerous occasions, but I know of only one instance of it online.

Singer's Taking Life: Animals (from Practical Ethics Third Edition)

For a preference utilitarian, concerned with the satisfaction of preferences rather than experiences of suffering or happiness, rational and self-conscious beings are individuals, leading lives of their own, and cannot in any sense be regarded merely as receptacles for containing a certain quantity of happiness … (they) could have forward-looking desires that extend beyond periods of sleep or temporary unconsciousness, for example a desire to complete my studies, a desire to have children, or simply a desire to go on living.

(Singer doesn't explicitly say that this is to the advantage of preference utilitarianism, but it seems clear both from the fact that Singer, at the time of writing, explicitly identified as a preference utilitarian and from the tone and context of his piece. In a comment later on the same page, he says 'I acknowledge that [there] are differences of degree rather than a sharp cut-off, in the abilities of various beings to anticipate the future. Our judgments of the wrongness of killing should reflect this' - emphasis mine.)

Like the previous arguments, this one, even if sound, only establishes that VU and PU differ, not that PU is superior. Unlike those arguments, this one isn't strictly circular. Rather, it relies on Singer's (or his reader's) intuition that killing should be strongly prohibited (presumably more so than valence utilitarianism would prohibit it). I dislike reasoning backwards from intuited conclusions as a philosophical method, but it would be too much of a digression to argue against it here. For now I just want to highlight it, in case some readers also consider it dubious.

Whence the inconsistency?

To identify the subtypes of PU necessary for this argument, I'm going to make the simplifying assumption that few if any PUs would morally distinguish between painlessly killing a conscious person versus painlessly rendering them unconscious for ten seconds and then painlessly killing them. This helps avoid any confusion about the order of a) someone blinking out of existence and b) the resolution of preferences they'd formed while existent.10

Then for a form of PU to more strongly prohibit killing than VU, it needs to be a form of DFX..F1, or a future-and-past facing variant - since we're assuming at least some gap between the preference being formed and its being resolved.11

As far as I know, Singer, until he became a valence utilitarian, said nothing explicitly inconsistent on this theme, within this essay or elsewhere. However, I think we can show that these traits of PU entail some strongly counterintuitive results. These aren't important for their counterintuitiveness per se (per my comments above, I think that would be an invalid objection), but because I think Singer would have felt compelled to remark on them if he'd advocated them in other circumstances. Since he didn't, I think he (and other utilitarians who use this argument without remarking on them) very likely didn't actually believe them, or wouldn't have if he'd considered them. I would also expect few other preference utilitarians to believe them (though if they consistently did, I would consider that consistency at least a point in favour of their view).

The first of these results (and to me, the hardest to swallow) is that asserting any persistence of preferences over time decouples that form of PU from the physical universe. It turns preferences into 'spooky' metaphysical entities with no physical substrate, which float around outside of the brain (or behaviour) that created them, perhaps until they decay fully (assuming they aren't infinitely divisible, in which case they'd never quite become nonexistent); perhaps until some physically unremarkable confluence of events fulfils or thwarts them, which either switches their state or removes them from existence.12

The second result is that if preferences persist over time, we are beholden to some extent to the whims of the dead. If they have no decay rate (the preferences, not the dead), we need to take into account the persistent desires of everyone who's ever lived, including sapient nonhumans (and perhaps all sentient nonhumans, if we don't think sapience is necessary for preferences). It's hard to imagine what these would add up to, but undoubtedly a very different set of priorities than the modern sentience collective13 would have.

The last result is that we are also beholden to our past selves. If I had strongly wanted to be a boxer through much of my childhood and adolescence, but then discovered a passion for tree surgery, I would be doing something immoral by following my new, much shorter-held passion rather than respecting my previous self's wishes (even more so if I'd merely lost interest in boxing, and moved to tree surgery as the thing which I was now least unmotivated by).

A high decay rate would reduce the second and third results to modest imperatives - but any process for setting the decay rate just so would surely need ad hoc justification, and thus seem counterintuitive in itself.
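
To see roughly what such a decay rate buys, here's a small sketch in Python with an arbitrarily chosen ten-year half-life - a figure that nothing in PU itself supplies, which is precisely the ad hoc choice just objected to.

```python
# A sketch of an exponentially decaying moral weight on past preferences.
# The ten-year half-life is an arbitrary assumption; PU itself offers no
# principled way to pick it.

def preference_weight(years_since_formed, half_life_years=10.0):
    """Moral weight retained by a preference formed this many years ago."""
    return 0.5 ** (years_since_formed / half_life_years)


for years in (0, 10, 40, 200, 2000):
    print(f"{years:>4} years old: weight {preference_weight(years):.6f}")

# A decades-old ambition retains only a small fraction of its weight, and the
# desires of the long dead effectively vanish - but the choice of half-life is
# doing all the normative work, and the theory itself doesn't tell us what it
# should be.
```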

All of these are conclusions that a preference utilitarian could theoretically accept for the sake of a stronger prohibition of killing. But I don't think Singer, at least, shows evidence of having done so14; and if he would reject any of these results while claiming that PU more strongly prohibits killing than VU, then his position is necessarily inconsistent.

Conclusion

In summary, it turns out that many of the common arguments favouring PU over VU, via the very terms of their criticism, face a choice between inconsistency and an ethic far closer to VU than their authors intended.

If utilitarianism is to reject wireheading and the like, it can only ever be on instrumental grounds. If no such grounds exist, all non-negative utilitarianisms must advocate such outcomes.

In Part 3, I'll advance some broader arguments against the remaining types of PU - that is, those which seem both self-consistent and opposed to HU.



  1. If anyone contacts me to tell me of any I've missed, I'll mention them here and aim to include them in the discussion.

  2. I have removed discussion of Brian's essay from this post, since it's so much more extensive than the other pieces, and based so much more on reasoning from intuition, that dealing with it fairly would require multiple branching discussions and probably double the length of this essay. As such it's also not really in the category of 'typical PU arguments against HU' that I'm focusing on here. I'd like to cover it properly one day, but for now I'd summarise my core criticism as being that exactly the property that makes it hard to criticise - its reliance on Brian Tomasik's own intuitions - is its biggest weakness. It serves more as a catalogue of moral instincts that Brian holds (albeit boldly explored and clarified for people who might share them) than as a case for a coherent moral framework.

  3. Theoretically, in the following section the authors in question could escape the charge of inconsistency by claiming that by 'what people want' they were referring to the constitution axis - ie that (since VU entails C0PU) 'giving people what they want' specifically refers to respecting (to some degree) what their behaviour shows they want, rather than only what they think they want. I think this would be disingenuous though (and I don't mean to imply the authors of any of the pieces would say it): firstly because I don't think they did mean this; secondly because if that was what they meant, it would put them at odds with many other PUs, which would make little sense of the claim that this reasoning somehow makes PU simpliciter better than VU simpliciter; and thirdly because it would be odd to have treated such a controversial issue as settled without even remarking that they'd done so.

  4. The way he describes it here doesn’t sound like PU: ‘my decision system has a lot of terminal values, none of them strictly reducible to anything else. Art, science, love, lust, freedom, friendship…’

    But as discussed in the main text, he takes the source of these things’ ‘value’ from his own preferences (talking about ‘my decision system’, non-plural - so it can’t have inconsistent algorithms). He also describes elsewhere that he takes others' preferences (‘volitions’) – and ultimately only their preferences - to be a source of value. So I think we can treat his view as being functionally equivalent to some form of preference utilitarianism.

  5. As in Part 1, I use 'respect' to mean only 'seek to fulfil, all things being equal'.

  6. The PU might object at this stage that HUs would still favour forcing ascetics to take the drug where she wouldn’t. I don’t think this is a strong consideration, since asceticism is already very rare, and religion is declining, particularly the sort of extreme belief needed to choose the ascetic life.

    Also, in typical versions of the blisshead scenario, we assume that once someone takes the drug they’ll never come off it regardless of what their prior preferences were. But in this version, the religious ascetic will still be a religious ascetic, and so even if forced to take it, might then choose to stop. If so, the cost of continually forcing it onto them would be far higher, so it’s hard to make such a scenario sound at all plausible - the costs of policing them 24/7 would likely be better spent elsewhere.

    Brian Tomasik objects (in correspondence) that forcing them to take such a drug will change their preferences. For this objection to have any force, I think it commits us to some form of (F0 v R0)PU, since if either giving the ascetic more information or a change of perspective would so drastically change their desire, we surely wouldn’t consider it a fully informed and perfectly rational desire to begin with.

    Subject to that proviso, it seems fair to say that PU and VU make slightly different claims in this edge case - but the argument of this theme’s ‘Whence the inconsistency’ subsection would apply as much here as to the examples I give within it.

  7. Kaj Sotala raises a third possibility - that the ‘drug’ effectively turns its users into (or that we simply replace them in the thought experiment with) orgasmium (or, less ambiguously, hedonium) - a semisentient substance that undergoes constant bliss without having the cognitive capacity to experience ‘preference’.

    For this to be a coherent scenario, we need to make certain assumptions about the difference between preference and happiness - for eg, we surely have to assume ¬(C0)PU, and perhaps also ¬(S0)PU.

    But even having made these assumptions, it’s not hard to conceive of an analogous form of ¬(C0 v S0)PU utilitronium, call it desirium (because ‘preferencium’ is just too hideous a word). Desirium is less well defined than hedonium, qua its derivation from a less well-defined class of utilitarianisms, but for any specified subtype of PU we can conceive a corresponding desirium which optimally matches its goals by generating and optimally fulfilling desires with maximal efficiency.

    This might be a more complex substance than hedonium, but unless the PU advocating it specifies a value system of truly cosmic complexity and ad-hoccery, it will differ so wildly (and the more complex it is, the more unpredictably) from anything modern humans want from our daily lives that people would surely be as reluctant to take it as they would any pleasure drug.

    Rather than explore the huge space of possibilities this offers without any sense of whether PUs will be convinced, I’ll offer a challenge: if any preference utilitarians reading this feel unconvinced, then I ask them to submit a well-specified notion of preference-satisfaction, either in the comments or by any medium by which they can contact me, and I’ll write a short piece demonstrating what its associated form of desirium would look like.

    Assuming one accepts the claim that any form of PU has somesuch ‘undesirable’ desirium, then as in the previous note, the arguments in this section's ‘Whence the inconsistency’ subsection apply.

  8. We could also invoke the rationality axis, and claim that no rational agent would want to adopt an addiction he'd later regret. But if we do so, we're making the rationality axis equivalent to the discounting axis, and giving ourselves just the licence for paternalism that PUs typically decry.

  9. Arguably no tweaking is needed at all, since if the original thought experiment's drug makes them so much happier, they'd be unlikely to want to come off it (ie they would be likely to have a preference to continue taking it). For the scenario to keep preferencist libertarianism true alongside valencist authoritarianism, the drug would have to make its users blissfully happy while still leaving them with a desire to come off it; they would have to be continually forced to stay on it against their will (since valencists would have little to gain by forcing them to take it just once if they would then stop); and this combination of being physically forced and taking the happiness drug would have to leave them happier than any other outcome.

    It seems hard to imagine that the global optimum for valencists would look anything like this scenario, but since we can just as (probably far more) easily conceive a drug that triggers desire rather than (or as well as) happiness, it keeps things simpler to use that concept for the main argument.

  10. This simplifying assumption, as we'll see in a moment, ensures the stronger prohibition on killing necessitates ¬D0PU. But the assumption might be unnecessary, depending on whether you think an agent's 'existence' is fundamentally an all-or-nothing (binary), scalar, or degenerate state.

    If existence is all-or-nothing, as it seems to be in the way we normally talk about it, then there must be some marginal arrangement of matter and energy such that it constitutes a ‘living agent’ which, with some single Planck-level change to a single particle or energetic state, would no longer constitute one - ie it would be dead. Let’s call this sort of essential change to some conceptual state a Planck update on that state.

    This either updates the agent's preferences or it doesn't. Specifically, either: the Planck update also constituted a Planck update on both the formation and resolution of one of the agent's preferences - that is, it updated their behaviour/phenomenal experience such that it caused or continued a preference, and updated the external circumstances/phenomenal experience such that it resolved the same preference.

    Or

    Either the formation or resolution (or both) were not affected by the Planck update to the agent’s existence.

    In the second case we have a clear ordering that necessitates nonzero duration of a preference if it’s to be relevant - either it was formed some time before death, or it resolves some time after death, and the simplifying assumption is unnecessary.

    In the first case, since the relevant updates are simultaneous, there’s no requirement for the preference to persist over time. If it didn’t, Df0 might still more strongly prohibit killing than VU if it’s also some form of SX..1 that means death thwarts their preference (hence generating negative utility) rather than just preventing positive utility.

    I don’t want to dwell further on this possibility though, since it weds the prohibition of death to the idea that the agent was both conscious at the exact point of death and actively preferring something entailing their existence; and I don’t think PUs who support their theory on the ‘stronger prohibition’ grounds conceive it this way. There’s nothing in Singer’s essay to support this interpretation, for example.

    If existence is scalar rather than binary - ie if whether one exists or not is a question of degree - then we raise a bunch of new issues that I’d also rather not delve into here, since a) I don’t know of any PUs who’ve taken this possibility seriously in constructing their views, and b) I don’t think its issues are significant in the context of killing someone outright (ie making them 100% dead) as we’re discussing here.

    Lastly, if the existence of an individual agent is a degenerate or meaningless concept, as would be implied for eg by open or empty individualism (the latter of which I believe), then any preference relating to or conditional on the continued existence of the individual would be irresolvable - so a PU’s response to it would depend on where they fell on the irresolvability axis. If this response were distinct from VU in practice, it would necessarily involve a gap between the preference’s formation and resolution, therefore necessitating ¬D0PU, as in the example in the main text, so the discussion there would apply.

  11. It also needs to be a version of SX..1PU, where only external circumstance is necessary - since the preferrer will not exist to experience resolution of their preference. This raises some strange questions like 'do preference satisfactions travel faster than light?' (eg if I want the sun to flare and it subsequently does, is my preference satisfied when it flares, or when the light from that flare reaches me on Earth?) and 'how do we gauge whether or how much external circumstances fulfil a preference?' But it doesn't directly imply anything I would necessarily expect Singer or other preference utilitarians to reject.

  12. The concept of spookiness is inspired by Mackie, in Ethics: Inventing Right and Wrong. Some non-utilitarians (probably including Mackie) might object at this stage that any normative theory - eg any form of utilitarianism, preference or otherwise - involves such metaphysical spookiness.

    The claim that utilitarianism is necessarily spookier than some purportedly amoral alternative seems intuitive, but is hard to demonstrate. This is a very large discussion that I would like to explore elsewhere - and IMO, one in which the crux of naturalistic ethics resides - but I think it’s helpful to briefly consider the following purported (classes of) alternatives to utilitarian spookiness:

    • Egoism - that we should all stop telling each other what to do and just do whatever we want
    • Moral antirealism/nihilism (in some forms) - that everyone will do whatever they want, so normative theories are wrong and/or useless

    The first is itself clearly a normative theory, and so at least as susceptible to spookiness objections as utilitarianism (or any other ethos).

    The claim that ‘everyone will do whatever they want’ is either a tautology, in which case it tells us nothing at all, or it’s an empirical prediction, in which case it’s evidently false - a significant minority of the population actively base their life choices on their ethical views, and make different choices if they’re persuaded of different ethical views (many members of the effective altruism movement, or religious converts, for example).

    Thus this sort of dismissive moral antirealism seems to leave important naturalistic issues unexplained - eg what is the source of our behavioural algorithms? (They're clearly not as simple as 'following our evolved imperatives.' Less glibly, distinctly utilitarian behaviours strongly anticorrelate with prospects for both natural and sexual selection - cf these papers re sexual selection disadvantages, and the books Moral Tribes and The Point of View of the Universe for the argument that utilitarianism is uniquely disadvantageous in natural selection.) Why do any people at all seemingly change those algorithms in response to intellectual stimuli? What intellectual stimuli are most effective at changing them in what ways - and why?

    Giving an answer like ‘our behaviour is ultimately determined by deterministic/quasi-random physics, just like everything else’ seems (directly) akin to answering questions about the stock market with reference to particle physics - it’s probably accurate, but predictively useless. We might think that so-called normative ethics has a bad track record (having got hung up on normativity), but it still contains important questions that no other field seems to address.

    See my mini-essay on moral exclusivism for the beginnings of a suggested methodology for finding an alternative to both the incompleteness of antirealism and the inconceivability of moral realism.

  13. Trademarking this band name.

  14. I think this is a bullet Singer might bite, given that he explicitly argued (and still argues) for a fairly strong form of moral realism; but most people I've known who identify as preference utilitarians are proud of the non-'spookiness' of their moral theory, and so couldn't consistently advocate it on those grounds while claiming this anti-killing property.


Thanks (again!) to Brian Tomasik, Kaj Sotala, and Joseph Chu for all their feedback on the original draft of this.

Topic

Preference utilitarianism

Tags

theoretical ethics preference utilitarianism

