TL;DR: Morality is a set of principles that people would not reasonably reject behind a veil of ignorance, as legitimized by valuing reason over freedom. This procedure would ensure that any restrictions on one another’s freedom are grounded in reason, rather than power. Any issues that cannot be resolved with these principles are not moral questions.
What is morality? Some say that morality doesn’t exist. Instead, they say what we call moral statements are only preference expressions. Some say morality does exist, but only as a description of these preferences. Some say morality exists as certain natural facts about the world. Some say morality is something that not only exists objectively but can be understood and discovered through reason. Others, meanwhile, have no concrete thoughts on what morality is but still have a sense of right and wrong.
Surprisingly, after thousands of years of ethical philosophy, there is still no agreement on which of these conceptions of morality is correct. More than that, it is still disputed whether any significant progress has been made in ethics since Aristotle beyond “slavery and sexism are bad.”
And it seems hopeless to create moral progress without even first defining ethics. How can anyone create an ethical system without a meta-ethical foundation? What do you mean when you say something is “ethical?” Moral discourse has been plagued with an undefined key term. No one is quite sure what ethics means, but they still cluster around their favorite camps. We skipped the first step, and as a result, everyone has been talking past one another. It’s time to go back to step one.
Yet our bypassing this initial step has been understandable. We want to do the right thing, even without having a comprehensive theory of the right.
Amartya Sen provides an illustration of this fact. A man is locked in a hot sauna. He then calls out to a friend outside to lower the temperature. However, the friend responds that he can’t because he first needs to know the ideal temperature.
We know that the friend’s response is mistaken. He doesn’t need to know the ideal theory of the right to do something right. And this was the case for major ethical problems throughout history. We now know that murder, slavery, sexual abuse, and war are wrong. And we didn’t need to have a complete ethical theory to get to those conclusions. So meta-ethics wasn’t very necessary for human improvement.
However, humanity has picked the low-hanging fruit. Any small child can tell you the basic moral rules we all agree on. Unfortunately, this also means that we can all do morality only as well as small children.
Now, we are dealing with much harder cases. Population ethics, animal ethics, environmental ethics, and criminal justice are much more difficult issues that require more than “slavery and sexism are wrong.” So we need to start having concrete ethical answers. And the first step is to have a concrete definition of ethics.
I think it’s time for us to establish common ground on the basis of morality. This article will argue that morality consists of those principles that cannot be reasonably rejected among parties who value reason over freedom.
Morality Defined
If we want to have a dialogue, we need to define our terms. The SEP defines normative morality as "a code of conduct that, given specified conditions, would be put forward by all rational people." This is a pretty good definition.1 The fact that it's a system that is "put forward" by rational people means that those people can reasonably accept it.
However, in some sources, morality is defined only as a distinction between right and wrong, failing to note that it must also be “reasonably accepted.” Without the acceptance condition, the role “freedom” plays in morality is ignored. And as will be discussed below, we can’t get to morality without freedom. Still, it’s surprising that people have problems with the SEP definition.
They may argue that morality is what is right or wrong, or even what is good or bad, regardless of whether people would agree to it.
For example, utilitarians may define goodness as pleasure. Even if no one would agree to these duties behind a veil of ignorance, we are still bound by them. Pleasure simply is goodness, as they would say. If this idea of goodness somehow has unacceptable implications, like the repugnant conclusion, so be it.
Yet this view fails to properly value a moral agent’s acceptance of a moral view and therefore disregards their freedom. Moral views and dilemmas must be reasonably acceptable for them to be ethical duties that we can impose on others. This will be discussed in further detail, but let’s think about this intuitively.
What does it mean to truly resolve a moral dilemma? It means putting forward an answer that is reasonably justifiable to all parties through public reasons. If your answer to abortion or the “problem of dirty hands” is not reasonably justifiable to people, then it hasn’t been resolved. Your answer is wrong, and forcing others to accept it is trying to fit a square peg in a round hole.
And further, when we impose duties onto parties that would never agree to them, we act immorally. Forcing someone into an experience machine is morally wrong, regardless of what our religion or favorite moral theory says. Biting the bullet on a moral problem isn’t being consistent; it’s a failure to admit that your moral view is incorrect.
If that’s the case, you must move on to other potential answers. But if your answer can be justified to free people, then the dilemma is resolved, and we have a better sense of our actual ethical duties.
And being acceptable does matter. Not only in ethics but in any sort of objective field, like science, logic, and history. If you cannot reasonably justify to others that the moon landing was faked, that the earth is flat, or the world is ruled by lizard people, then the statements lack truth value.
No statement of scientific fact or mathematical truth can be justified if others cannot reasonably accept it. If you can’t show your work or defend your premises, people can reasonably reject your claim. What we call objective reality is just a reasonable agreement to public reasons. But more on that in a separate post. For ethics, the point is the same: its reality requires reasonable acceptance.
Yet given this condition, if your theory keeps requiring you to bite bullets and produce outcomes no one could reasonably accept (like the repugnant conclusion), then it’s probably best to move on to another theory, in ethics or otherwise.
Some might also ask why this hypothetical agreement should bind us. I’m surprised this argument is still given weight. I’ve addressed this complaint, which I call the “magic words theory of consent,” here, showing that actual consent is grounded in hypothetical consent.
With our initial SEP definition of morality presented, let’s figure out what ethics is to know if we really need the “reasonable acceptance” condition.
The Ontology and Epistemology of Ethics
How does ethics exist? And how can we know it?
In summary, morality exists as a consequence of holding certain values—it’s a “hypothetical imperative,” where a certain rule applies given a certain end. Moral commands exist as “should” statements that sensibly result from possessing certain moral values (what these moral values are will be discussed below). For example, “Because Abigail values X, she must do Y.”
This is the form of morality’s existence, its ontology.
And to know the truth of a moral command, we assess the connection that X (moral value) has to Y (moral command). If the command has no relationship with the necessary moral values, the moral command is false. And if there is a relationship, where having X value necessarily indicates that one should do Y, the moral command is likely true.
This is how we obtain knowledge of morality, its epistemology.
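The two-part test just described can be put schematically. This is my notation, not the author’s—a hedged formalization of the ontology and epistemology above:

```latex
% A hypothetical-imperative moral command, formalized (my notation):
% "Because A values X, A should do Y" is true just in case
%   (1) A in fact holds the value X, and
%   (2) doing Y is necessarily connected to realizing X.
\mathrm{Should}(A, Y) \;\iff\; \mathrm{Values}(A, X) \,\wedge\, \mathrm{Necessary}(Y, X)
```

On this schema, assessing a moral command is a matter of checking the second conjunct: whether the command really is necessitated by the value.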
Let’s illustrate through example. Take the statement: “Because I want to be healthy, I should exercise regularly.”
Some might argue that “I should exercise regularly” is not an objective command. There is no ancient authoritative text where “I should exercise” is written that we must obey.
Moral anti-realists (who argue that morality is not an objective concept) may say that the command “I should exercise” is not part of the furniture of the universe. Therefore, we cannot examine it, and the statement is nonsense that lacks any potential for truth value.
At best, it’s an expression of someone’s preferences. Moral subjectivists and nihilists might argue that the statement is still sensible but only as a statement of something subjective rather than objective. Yet that’s not how the statement should be understood.
Once you recognize the statement “I should exercise” derives from “I value health,” the statement can be objectively true, depending on the necessary connection between valuing health and exercising regularly. The command exists ontologically as a consequence of a particular value.
And what about epistemology? How can we know that the statement “If I want to be healthy, I should exercise” is true?
So long as “I should exercise” is a valid implication of “I value my health,” the statement is equivalent to 1+1=2.
Yet how do we know that it’s a valid implication? Couldn’t “I should exercise” also be the wrong implication of “I value my health,” given certain facts and circumstances? Yes, because whether the command is the valid derivation of the value is falsifiable. To make this determination, we can look at research showing the effect of exercise on health, one’s actual ability to exercise, and whether there are any alternative methods to safeguard one’s health that make exercise unnecessary.
“If I want to be healthy, I should eat more fast food” is very likely wrong for the same reason. It’s falsifiable and can be proven wrong. Given the available evidence, if I were to eat more junk food, I’d be less healthy. Assuming this, this “should” statement is equivalent to 1+1=3.
Ethics is a collection of “hypothetical imperatives,” or “should” statements, that result from certain values. And what moral values supposedly get us to these moral should statements? Freedom and reason. You can read why here, but I will summarize.
Given the values of freedom and reason, you must value other people’s freedom. Freedom is an objective state where there can be no reasonable basis for discrimination between parties possessing it. Therefore, if you have an X that you deem valuable, and other people have that same X, you must value that X of others to be consistent.
This is intuitive as well. We know we shouldn’t sit on people since they are free and wouldn’t want to be sat on. And we also know that we can sit on chairs since they lack freedom and have no potential to even care about being sat on. So we rightly treat free beings differently from non-free beings, and we would judge someone as acting wrongly for failing to respect this important moral distinction. We recognize that freedom is inherently valuable.
Valuing reason leads us to recognize and value the freedom of others. And reason also takes priority over our freedom to deny others freedom. So long as we value reason, we aren’t free to deny that other people have some sort of valuable freedom. Therefore, reason has authority over our freedom. And because we value other people’s freedom, we would create and obey a social contract that is the product of our collective freedom.
We would not only agree to the social contract, but we’d obey it, thus treating other free people as ends by respecting the product of their freedom. And because reason is valued, even more so than freedom, the social contract would be made of principles that could not be reasonably rejected.
This is the substance of morality—it is the social contract whose authority derives from our values of freedom and reason.
So if you value freedom and reason, you should be moral. And you become moral by treating free people the way they want to be treated, under principles they would reasonably agree to.
And know that there are principles that everyone can reasonably agree to. All societies that give due regard to their members have some form of rule against wrongs like theft and murder. Given these human universals of what morality entails, morality is objective.
Another note on freedom. What separates freedom from subjective states like happiness is its objectivity and moral implications.
First, freedom (or consciousness) is not a subjective agent-relative state like utility is. Valuing your own happiness doesn’t necessarily mean you value the happiness of others equally well. Reason can’t get you there.
Freedom, meanwhile, is an objective property. It is a scientific truth whether or not there is “something that it is like to be” a certain being. So valuing freedom necessarily implies valuing the freedom of others. There’s no source from which to reasonably discriminate against an objective property.
And second, freedom (or consciousness) is morally relevant. Possession of freedom gives one the ability to have desires that can be pursued and the ability to obey reasonable commands. We couldn’t make a normative social contract without freedom.
And it’s not only ethics, but every “should” statement depends on freedom. “Should” statements are reasonable justifications for restrictions on one’s freedom to do otherwise. Normativity exists as: “You should do X, as opposed to Zs, because Y.”
Here, X is the normative claim, Zs are the freely available alternative actions, and Y is the reason. Normativity asks what reasons should trump freedom. Morality is just the application of this question to the treatment of others.
Moral Dilemmas?
What is great about conditioning morality on reasonable acceptance is that there are no moral dilemmas. With reason as the foundation of morality, all potential dilemmas can be resolved through reason.
However, let’s say that there are sufficient reasons on both sides of a certain decision to the point that there can’t be an objective resolution since reasonable minds disagree. In that case, free people can reasonably reject the resolution, and it’s not a moral rule. Ethics then passes the buck to existentialism.
This is why the Trolley Problem is not a moral dilemma. No answer would be justifiable to all involved parties since they would all have a reasonable claim to not being killed.2
While in the hypothetical agreement, people would agree to a general duty to rescue, an exception to that duty would include when rescuing requires killing an innocent party. No one would agree to be used as a means—therefore, any moral duty to rescue wouldn’t apply to the Trolley Problem.
This doesn’t mean that pulling the trolley lever wouldn’t be excused, but it wouldn’t be a moral duty either. “You should pull the lever” doesn’t arise from valuing freedom and reason. The Trolley Problem is Sophie’s Choice, a problem for existentialism but not for ethics.
And just because the problem involves human life doesn’t make it an actual moral question. It’s the same way the question “Would you rather fight one horse-sized duck or 100 duck-sized horses?” isn’t a mathematical question because it has numbers. They’re both questions of free personal preference. See existentialism.
However, the drowning child scenario is a moral dilemma, but it doesn’t have the same implications as Peter Singer thinks it does. In the hypothetical agreement, we’d impose a general duty to rescue one another so long as the costs of doing so are sufficiently low.
Some responsibility for self-care and self-protection would be imposed on other free agents so that we don’t have a duty to be personal caretakers to people who habitually make bad decisions. And we accept that if it’s too dangerous to rescue someone or if the rescue requires killing others, the duty to rescue won’t apply. Yet we would still be morally liable for failing to save a life when the costs of doing so are marginal. So it would be a moral duty to rescue the drowning child.
However, if we were to change the facts of the drowning child case, we might not agree that there would be a duty to rescue. For example, say you replace the drowning child with a capable adult. And say you warned this adult beforehand that you wouldn’t rescue him if he were to swim across the pond because you’re wearing your nice suit, to which he responds that he doesn’t even need your help. And say he swims across the pond and finds that he’s about to drown.
Utilitarians might calculate that the risk of you drowning to save him (adults are far more dangerous to save than children, given their weight) isn't high enough to justify not saving him.3 Yet more reasonable people would say you are not morally responsible for rescuing someone from their own free, but very bad choice, where rescue presents a danger to yourself as well.
There will be grey area cases between the drowning child and the drowning adult. However, with the correct application of justifiable reasons to specific circumstances, contractualism can create rules that everyone can reasonably accept.
Yet why should we value freedom and reason? What room do other values even have under this moral system?
My next piece will address these questions as well as discuss what the definition of morality means regarding the repugnant conclusion, libertarianism, and utilitarianism. Now that we’ve discussed what morality is, we need to discuss what morality isn’t.
Although I would say “a code of conduct that, given specified conditions, would be accepted by all reasonable people.”
People would agree to a duty to rescue when it is Pareto Efficient or even Kaldor-Hicks efficient. In the Trolley Problem, any action would not only lack Pareto Efficiency but Kaldor-Hicks efficiency as well. You cannot compensate the dead. Therefore, there will be no agreement, and no duty to rescue will be imposed.
They might say that there is a 10% chance you drown from trying to rescue him, in which case both of you die (0 survivors), and a 90% chance you successfully save him, in which case both of you live (2 survivors). If you don’t try, only 1 person lives. So the expected number of survivors from attempting the rescue (0.10 × 0 + 0.90 × 2 = 1.8) is higher than the 1 from not attempting it.
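The footnote’s arithmetic can be checked by consistently counting expected survivors under each option. A minimal sketch—the 10%/90% figures are the footnote’s, the bookkeeping is mine:

```python
def expected_survivors(p_fail, survivors_if_fail, survivors_if_success):
    """Expected number of survivors for a risky rescue attempt."""
    return p_fail * survivors_if_fail + (1 - p_fail) * survivors_if_success

# Attempting the rescue: 10% chance both drown (0 survive),
# 90% chance both make it (2 survive).
ev_attempt = expected_survivors(0.10, 0, 2)   # ~1.8

# Not attempting: the would-be rescuer survives, the swimmer drowns.
ev_decline = 1.0

print(f"attempt: {ev_attempt:.1f}, decline: {ev_decline:.1f}")
```

Either consistent bookkeeping—expected survivors, or survivors minus deaths—yields the same conclusion: on a pure expected-value count, attempting the rescue comes out ahead.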
I think this is completely off the mark. There has never been low-hanging fruit in morality; you are judging moral issues of the past with a presentist bias. They were contentious issues in their time, just as present moral issues will be viewed as obvious in the future.
You say any moral theory has to be accepted by rational agents, but what if the people rejecting the theory are not being rational? The theory is still valid regardless of what other people say. Many theories are rejected by their contemporary judges, only to be accepted in the future (when the person proposing them has already died).
I also don't think freedom and reason are the only values of importance. Freedom and reason are valued because of something much deeper: they improve the well-being of societies that embrace them. But it's the well-being of agents that morality should attempt to maximize.
Because of all this, Sam Harris's theory, which he explains in The Moral Landscape, seems accurate to me.
To me, morality is simply the discipline that explores what is good. It would be a net positive if we could agree on an objective morality based on first principles, but your proposal needs more work.