In which repugnancy functions as the catalyst of an agenda for further ethical investigation
Total, aggregate utilitarianism has an unfortunate implication that I'd like to avoid: the repugnant conclusion. Somewhat gloomily, it also seems like a tough implication to escape.
A short statement of how to arrive at the repugnant conclusion:
- Definition: A "moral agent" (shortened to "agent" to save words) is someone or something worthy of ethical consideration.
- Definition: "Total utility" is a measure of the compiled happiness of all moral agents.
- Premise 1 (P1): Maximizing total utility should be the goal of all moral agents.[1]
- Premise 2 (P2): Total utility can be determined by aggregating ("adding up") the utilities of all moral agents.
- Assign each moral agent a default utility of 100 utils, and consider an agent with 100 utils to be happy.[2] From P1 and P2, we can conclude that two agents produce a higher total utility than one agent (200 utils is greater than 100 utils). As we are maximizing total utility, we should thus prefer two agents to one.
- If an agent is unhappy, let's assign zem a utility of 70 utils; if an agent is extremely unhappy, assign zem 30 utils. Now, following P1 and P2, note that for any number of default agents, there is a number of unhappy agents which produces a higher total utility, and thus ought to be pursued to maximize utility (e.g. instead of one default agent, we would rather have two unhappy agents [100 utils compared to 140 utils]). Similarly, for any number of default agents, there is a number of extremely unhappy agents which produces a higher total utility, and thus ought to be preferred in the pursuit of maximum utility (e.g. instead of one default agent, we would rather have four extremely unhappy agents [100 utils compared to 120 utils]).
And that seems very wrong. When presented with a choice between a world of one happy person and a world of four extremely unhappy people, it seems clearly better to pick the one-happy-person world. (For a slightly more realistic case, consider choosing between one world of a billion happy people and another world of four billion extremely unhappy people.)
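To make the arithmetic explicit, here is a minimal sketch in Python. The util values are just the illustrative ones assigned above, and the helper names are my own; nothing here is meant as a serious model of well-being.

```python
# Illustrative util values from the argument above.
HAPPY, UNHAPPY, EXTREMELY_UNHAPPY = 100, 70, 30

def total_utility(count, utils_per_agent):
    """Premise 2: total utility is just the sum across agents."""
    return count * utils_per_agent

def agents_needed_to_beat(n_happy, utils_per_agent):
    """Smallest population at the given utility level whose total
    strictly exceeds that of n_happy default (happy) agents."""
    return total_utility(n_happy, HAPPY) // utils_per_agent + 1

print(total_utility(1, HAPPY))                          # 100
print(total_utility(2, UNHAPPY))                        # 140 -> preferred under P1 and P2
print(total_utility(4, EXTREMELY_UNHAPPY))              # 120 -> also preferred
print(agents_needed_to_beat(10**9, EXTREMELY_UNHAPPY))  # 3333333334, i.e. ~3.4 billion
```

The last line is the crossover for the billion-person case: past roughly 3.34 billion extremely unhappy people, the total out-sums a billion happy people, so the four-billion world "wins" under P1 and P2.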
I'm not sure what to do with this. The repugnancy could speak against utilitarianism in general, or against just the total, aggregate flavor of it, or only against the rigid, straw-man util framework I set up to represent that flavor, which might be formulated much more robustly. All of these are plausible, and I haven't separated out which I actually believe.
Unfortunately, many ethical theories have problems like this – odd implications in extreme cases that I don't know what to do with.[3] Other ethical theories don't have crazy implications at the extrema, but instead are less well-defined, or less coherent, or provide less actual guidance on how to act. Clarity, consistency, and rigor seem to be tied to disturbing edge-case implications. Conversely, palatable "common-sense" morality becomes mushy and contradictory when pushed into a rigorous framework.
Happily, we don't often have to deal with the repugnant conclusion head-on. There aren't many opportunities to choose between hive worlds packed full of unhappy masses and desert worlds pocketed with enclaves of the enlightened few.
However, there are some topics that draw out the issue. If we are total utilitarians, a question like "what would be the best trajectory for humanity over the next 1,000 years?" leads to an analog of the repugnant conclusion. Perhaps it's right to pursue a course of action that leads to many planets being colonized, even if most colonists would be quite unhappy. Pushing further, perhaps it's right to create as many moral agents as possible, even agents that are miserable much of the time, provided that each agent contributes some amount of utility to the aggregate.
The total utilitarian can come back with a two-pronged reply. First, ze says, there is no reason to take these edge cases seriously. In practice, most moral agents consider their existences worth having ("lives worth living" is a favored term), and as long as this is the case, we should work to maximize the number of these existences (which is roughly what maximizing total utility does). In practice, the repugnant conclusion just isn't that bad.
Second, the total utilitarian notes that there is a dearth of competitive, consistent alternatives. "I understand the objection," ze says, "and I agree it's problematic. But what alternative framework do you propose for thinking about this question? How would you go about determining the best trajectory for humanity over the next 1,000 years?"
I don't have a strong alternative framework yet.[4] I do have a strong, intuitive reaction against total, aggregate utilitarianism, and some thoughts about how to approach future-facing, repugnant-conclusion-type problems.
I'm not going to dive into these thoughts in this post – I haven't read enough yet, and this thing is long enough as it stands. Instead, I will close by outlining a couple of topics I might explore further in future writing.
Question: Is aggregation the right way to think about morality across people?
Considerations:
- How do contractualists think about morality across people? Virtue ethicists? Kantians? Judeo-Christians? Buddhists?
- If not some form of adding-up, then what?
Question: How important is consistency in an ethical framework?
Considerations:
- Can an inconsistent framework be considered valid?
- It seems like most people live with some degree of moral inconsistency. If inconsistency can exist in a workable morality, should it be worked against? On what grounds?
- Is the earlier-stated relationship between consistency and extreme edge cases accurate? If this relationship is accurate, how should extreme edge cases be handled? Ignored? Accepted head-on?
Question: Should we consider future people in our moral calculus (i.e. are future people moral agents?)
Considerations:
- For an action to be judged bad, must it be bad towards someone?
- When thinking about morality, should temporal distance be thought about in the same way as spatial distance?
Question: If future people are moral agents, how should we consider them? How should we compare the well-being of future agents to the well-being of present moral agents?
Possible approaches:
- Consider each future person as having equal value to a present person.
- Apply a discount rate to future generations, such that the further removed from the present a person is, the less consideration ze receives.
- Apply a recursive rule that each generation follows, such that each generation's moral consideration only extends to a certain future point.
- Some combination of a discount rate and a recursive rule (a rough sketch of these weighting options follows this list).
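To make the last three approaches concrete, here is a toy Python sketch of how the weighting schemes differ. The 2% per-generation discount rate and the three-generation horizon are arbitrary numbers I picked for illustration, not proposals from anywhere in this post.

```python
# Toy weighting schemes for how much generation g counts, relative to
# the present generation (g = 0). Parameters are arbitrary illustrations.

def equal_weight(g):
    """Every future generation counts the same as the present one."""
    return 1.0

def discounted_weight(g, rate=0.02):
    """Pure discounting: each generation counts (1 - rate)**g as much."""
    return (1 - rate) ** g

def horizon_weight(g, horizon=3):
    """A recursive-style rule, seen from the present: consideration
    extends a fixed number of generations forward, then stops."""
    return 1.0 if g <= horizon else 0.0

def combined_weight(g, rate=0.02, horizon=3):
    """Discount within the horizon, ignore everything beyond it."""
    return discounted_weight(g, rate) * horizon_weight(g, horizon)

for g in (0, 1, 3, 10, 40):
    print(g, equal_weight(g), round(discounted_weight(g), 3),
          horizon_weight(g), round(combined_weight(g), 3))
```

The point of the sketch is only that these options diverge sharply a few generations out: equal weighting never decays, discounting decays smoothly, and a horizon rule drops to zero all at once.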
[1]: I feel obligated to note that I probably disagree with this premise, but it is necessary to hold for the purpose of the investigation.
[2]: I feel again obligated to note that I think utils are a silly unit that shouldn't be taken seriously, but they are useful for explaining this problem in a simple way.
[3]: Average, aggregate utilitarianism (in which the major consideration is total utility divided by the number of agents) has similar difficulties with its calculus: for example, it is better to have two unhappy people, one at 11 utils and one at 10 utils (average utility = 10.5), than to have just one unhappy person at 10 utils (average utility = 10). It seems odd for a theory to encourage the creation of unhappy agents, which can happen in both total and average flavors of aggregate utilitarianism.
[4]: Possibly no one follows a single framework consistently after being exposed to a mass of them.
[rereads: many, edits: reworked into a shorter version, cut out final section and replaced with a list of questions, changed title, fixed list formatting]