One of the appeals of utilitarianism is how fundamental utility is to nearly everything we do. All of our beliefs and actions can, in theory, be measured in terms of welfare. Some may even argue that the measure of a country’s success shouldn’t be indicators of human development or gross domestic product, but a kind of global happiness report. Rather than measuring production or literacy, we might be better off measuring laughs and smiles. Isn’t happiness the ultimate purpose of wealth and every other measure of well-being anyway?
Any conception of the good can be translated into utils: wealth, justice, community, respect, family, traditions, the environment, and so on. We know this because we make daily trade-offs between these goods. No value is so sacred that it can never be exchanged for another. We can always construct extreme scenarios in which the world will end unless an innocent person pays a fine, or someone is touched without their consent, or a tree is cut down. What can justify these trade-offs of sacred values (justice, liberty, and respect for nature) except human well-being?
We all give different values different weights, and this hierarchy of priorities has to be based on something. Welfare seems to be the ultimate ranking principle. Utilitarianism serves as what Joshua Greene describes as a “meta-morality” that accommodates and underlies all of our preferences. It is as if the purpose of existence, and of the decisions we make, is to discover the combination of values and actions that best increases our welfare.
This is why I believe that a rule-utilitarian ethic should guide the development of the social contract. The parties in the original position may not seek to maximize community values like religious practice or civic engagement, but they would want to maximize goods like wealth and freedom that serve welfare generally (what John Rawls called primary goods). But it’s not the desire to maximize welfare that grounds the social contract, as Thomas Hobbes and David Gauthier have argued (critique coming). It’s freedom.
The goal of this post is to show that freedom is a foundational value independent of utility. I’ll also be arguing in a future post that utility’s value rests in its service to freedom, rather than the reverse as utilitarians argue.
The thought experiment below is clearly inspired by Robert Nozick’s “Experience Machine,” in which participants can choose to enter a simulated reality that maximizes their own welfare, a scenario Nozick offered as a critique of hedonism.
However, I don’t think the experience machine quite gets the point across. People don’t consider pleasure obtained from a virtual world to be a valid substitute for pleasure obtained from the real one. Where is the sense of self-affirmation and accomplishment? No one would want to consciously live a lie. In other words, the rejection of hedonism isn’t the only reason people choose not to spend all their waking hours playing World of Warcraft. But we can still make Nozick’s point by sticking to the real world.
Imagine that you are approached by the utility coach, who offers to sign you up for “the lifeplan.” It works like this: the coach has a machine that can examine your mind and discover your preferences, and from that he will create the perfect utility-maximizing lifeplan for your unique self.
Since you come equipped with certain desires, the first part of the plan is to use a “desire-finding” machine to discover them. The machine will find out everything about you that affects your welfare. This includes a complete understanding of your preferred activities, goals, social groups, career, music, food, and every other aspect of your life that requires a choice.
Based on this dataset of desires, the coach would create a utility-maximizing lifeplan.[1] Every decision you could ever make would be made for you in the plan, all aimed at maximizing your welfare.
So if the machine finds that your utility is best served by becoming a surgeon, having three kids, and fishing on the weekends, then it will create the best possible plan for satisfying those desires. Or, if it says that you are happiest pursuing a life of travel and adventure, even though you weren’t aware you had those desires yourself, then a lifeplan would be built around this newly discovered aspect of your personality.
For the sake of the hypothetical, imagine that the coach’s machine, and the lifeplan he develops from it, are 100% accurate. You are guaranteed to receive the greatest possible amount of utility.
The coach’s machine understands your desires better than you ever could, and the coach’s lifeplan can satisfy those desires far, far better than you could ever hope to. Every trade-off you could ever make in life between money, family, community, appreciating art, quiet contemplation, and so on will be made for you, keeping you at the very edge of your utility-possibility frontier.
Sounds perfect, right? The one catch is that the coach doesn’t tolerate half-assing. If you sign up, the coach will implant an electrical node in your brain that issues a small shock whenever you deviate even slightly from your lifeplan. The shock is just enough to keep you from performing the suboptimal task, but not enough to reduce your welfare substantially. On net, you still come out far ahead on the utility score with the coach’s lifeplan. By the time you draw your last breath, rest assured that you will have pushed your utility as far as it could go.
On its face, it sounds like the shocks defeat the whole point of the lifeplan. Part of your mind’s utility function includes a desire for freedom, particularly freedom from pain. But the lifeplan adjusts for this. If your desire to avoid being shocked, or simply to spend some time living freely, is strong enough, then that desire is factored into the plan and honored. Since freedom is accounted for in your utility function, you are only shocked when exercising your freedom doesn’t efficiently serve your welfare.
All of your values in life, including your very freedom itself, would be included in the welfare calculus that makes up the coach’s lifeplan. You would basically be playing life on cheat mode.
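To make that calculus concrete, here is a toy sketch. It is entirely my own illustration, not anything specified in the thought experiment, and the function name and numbers are made up. The only idea it encodes is the one above: the coach shocks you only when enforcing the plan, even after subtracting the shock’s cost, still leaves you with more welfare than acting freely would.

```python
# Toy illustration only: a made-up "welfare calculus" in which freedom is just
# one more term in your utility function. All names and numbers are invented.

def coach_decision(planned_value, free_value, shock_cost):
    """Decide whether the coach enforces the plan or lets you act freely.

    planned_value: welfare from following the lifeplan's choice
    free_value:    welfare from doing what you'd freely prefer, including
                   whatever value you place on freedom itself
    shock_cost:    welfare lost to the corrective shock
    """
    # The coach shocks you only when enforcing the plan, net of the shock,
    # still leaves you better off than your free choice would.
    if planned_value - shock_cost > free_value:
        return "shock and enforce the plan", planned_value - shock_cost
    return "let you act freely", free_value


# A weekend the plan has earmarked for fishing, when you'd rather stay home:
print(coach_decision(planned_value=10, free_value=6, shock_cost=1))
# ('shock and enforce the plan', 9)

# The same weekend, if your desire to choose for yourself is strong enough:
print(coach_decision(planned_value=10, free_value=9.5, shock_cost=1))
# ('let you act freely', 9.5)
```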
And what would be the rationale for not accepting the lifeplan? So you can pursue a life of freedom, in which you are sure to obtain only suboptimal utility? If the point of life is to maximize pleasure and minimize pain, then rejecting the utility coach’s offer is an objectively wrong decision. It’s so wrong that forcing people to accept the coach’s lifeplan may be right. For their own good, of course. Our legal and moral codes already have plenty of rules designed to protect people from their own bad decisions. Requiring people to agree to the utility coach’s lifeplan is just one more rule that is certain to increase people’s welfare.
My question isn’t whether you would choose to sign up for the utility coach’s lifeplan. That’s your decision. My question is whether you would force other people to sign up for the lifeplan. Or would you allow them to make that choice themselves? What is it that you value about other people: their welfare or their freedom?
You might believe that welfare is all that matters. If so, then you are not only permitted to force others to accept the utility coach’s lifeplan; you are morally obligated to do so.
However, if you believe that it’s wrong to take away someone’s free will, then freedom is valuable even beyond welfare. You may conclude that freedom is valuable for its own sake and deserving of intrinsic respect. This latter view is clearly the one I’m partial to.
The point of this hypothetical isn’t your choice between utility and freedom. It is your ability to make that choice yourself. The decision of whether to submit our freedom to the utility coach is ours alone. I hope this post has shown that we don’t have the right to make that choice for others, and others don’t have the right to make that choice for us.
As fundamental as utility is for our values, utility can’t account for freedom. Free will is a good in itself that doesn’t rest upon utility. Rather, it is utility that rests upon free will.
However, if you disagree and believe that accepting the lifeplan is a moral duty, I’d like to hear your rationale.
[1] For the sake of the hypothetical, the lifeplan would maximize welfare in a way that does not harm others.
"No one would want to consciously live a lie. "
With Nozick's experience machine, at least as I imagine it, once you go in you no longer know you are in the machine, have a perfect illusion of being out in the real world. You are living a lie but not consciously, since you believe the lie.