If you hang around utilitarians for very long, you will learn the following jargon.
·“Hedonium” is the hypothetical substance that is optimized for feeling good. Whatever your brain is on a beautiful day, when you’ve just won a Nobel Prize, and you’re on drugs, the substance of your brain is then a bit “closer to hedonium” than usual.
·“Dolorium” is likewise the hypothetical substance optimized for feeling bad.
·When you are in dreamless sleep, or having a boring day you’re not particularly enjoying or disliking, your brain is at “hedonic zero”.
·The best possible future, from a total hedonic utilitarian perspective, is presumably a universe filled with hedonium. This might be achieved by launching robot-rockets programmed to convert other planets and stars into hedonium—and, presumably, into more robot-rockets capable of doing the same. The grand vision is ultimately a sphere of hedonium, centered at Earth, expanding ceaselessly until the end of time. This process is sometimes called a “hedonium shockwave”. The sooner the better.
·One implication of total hedonic utilitarianism is that a universe with very many experiences just slightly better than hedonic zero is better than a universe with fewer experiences each of which is in bliss. Parfit (1984) called this the repugnant conclusion. In illustrating an experience just slightly better than hedonic zero, his memorable (1986) example is the experience of listening to muzak and eating potatoes.
Suppose creating a good experience comes with some fixed costs (e.g. the energy it takes to mine the substance to be turned into the experience), plus some variable costs which are convex in the hedonic intensity of the experience. This assumption seems very weak and reasonable, to me—at least as reasonable as the assumption that we will one day be able to launch self-replicating hedonium-producing robot-rockets. It is simply the production function we face with respect to experiences today. It costs some roughly fixed amount to create an additional human or animal, and it costs some amount to make that creature happy; and happiness is concave in consumption, meaning that the second dollar spent on making a creature happy doesn’t provide them with as much extra happiness as the first dollar does.
Under this arrangement, spreading half as quickly—i.e. spending twice as much time and energy on optimizing the experiences into which the robots are converting a given planet—will eventually fail to produce twice as much welfare from that planet as would have been produced by the rough-and-ready experience-creating job. And this limit will generally be reached before the planet has been fully optimized, just as having a second child often produces more welfare than spending twice as much on one’s first child, even though the first child is not in a state of perfect bliss. The robots will then maximize total good feeling by only somewhat optimizing the stuff of a given planet, or whatever, before moving on to the next. More precisely: if the fixed cost of creating an experience is F, and the variable cost of an experience of hedonic intensity h is some smooth, convex function V(h), then a total budget B buys B/(F + V(h)) experiences, for total welfare Bh/(F + V(h)). Maximizing this over h yields the first-order condition F + V(h) = hV′(h); the robots will create experiences not of maximal hedonic intensity but of the intensity h satisfying that condition. This h will be positive, but as far as I can tell there’s no strong reason to think it will be high.
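To make the first-order condition concrete, here is a minimal numerical sketch. The functional forms are hypothetical, chosen only for illustration: a fixed cost F = 1 per experience and a quadratic (hence convex) variable cost V(h) = h².

```python
# Sketch of the first-order condition F + V(h) = h*V'(h), under the
# hypothetical assumptions F = 1 and V(h) = h**2 (convex, as required).

F = 1.0

def V(h):
    # Variable cost, convex in hedonic intensity h.
    return h ** 2

def welfare_per_unit_cost(h):
    # With a total budget B, the robots can create B / (F + V(h))
    # experiences of intensity h, for total welfare B*h / (F + V(h)).
    # Maximizing welfare therefore means maximizing h / (F + V(h)).
    return h / (F + V(h))

# Grid search over hedonic intensities from 0.001 up to 5.0.
hs = [i / 1000 for i in range(1, 5001)]
best = max(hs, key=welfare_per_unit_cost)

# Analytically, F + V(h) = h*V'(h) gives 1 + h^2 = 2h^2, i.e. h = 1:
# an interior optimum, well short of the maximal intensity available.
print(round(best, 2))  # -> 1.0
```

The optimum sits at h = 1 even though intensities up to 5 are on the table: past that point, each extra unit of intensity costs more than it would cost to start a fresh experience from scratch, which is the whole argument in miniature.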
To paraphrase the great Jack Handey: I hope God likes potatium, because that’s what he’s getting.