Dismembering the Mystique of Meta-Ethics
There are no solutions, only trade-offs. — Thomas Sowell
I. “Is Morality Objectively Real?”
I would argue no, since it’s not perspective-invariant. As far as I can tell, morality consists of game theory and preferences. The game theory keeps you in a social equilibrium, and the preferences keep you in a chemical disequilibrium. What I mean by “social equilibrium” should be apparent by now to those who’ve read Game Theory: The Force that Binds Us. As for “chemical disequilibrium”, I mean that preferences are what keep you alive. E.g. a preference for things that taste sweet is evolutionarily adaptive (at least to a first approximation).
The game theory component of morality covers things like “murder is bad”. And sure, if we can all agree that killing each other is bad, that’s a very nice social norm to have. But I wouldn’t call it “real” or “objectively true”. Because it’s also possible to imagine a set of norms like those in, say, The Purge, where society is a free-for-all. Social norms that have never been questioned often have an air of “this is the one true way”, due to the myopia of linear-inference. Historical inertia can be a hell of a gravity-well. So it might seem “objectively true”, in the sense that it’s Pareto optimal. But really, there’s a ladder of escalation, depending on what ground rules all parties are willing to settle on via negotiation.
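To make the “multiple equilibria” point concrete, here’s a minimal sketch (the payoffs are invented purely for illustration) that brute-forces the pure-strategy Nash equilibria of a tiny two-player norms game. Both “everyone restrains” and “everyone aggresses” turn out to be self-consistent; which one a society lands on depends on the ground rules the players coordinate on, not on one of them being objectively true.

```python
from itertools import product

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
# These numbers are made up; they just encode "mutual restraint is best,
# being the only restrained party is worst".
RESTRAIN, AGGRESS = "restrain", "aggress"
payoffs = {
    (RESTRAIN, RESTRAIN): (3, 3),  # the "murder is bad" norm holds
    (RESTRAIN, AGGRESS):  (0, 2),  # the restrained party gets exploited
    (AGGRESS,  RESTRAIN): (2, 0),
    (AGGRESS,  AGGRESS):  (1, 1),  # Purge-style free-for-all: worse, but stable
}

def pure_nash_equilibria(payoffs):
    """Return the action profiles where neither player gains by deviating alone."""
    actions = [RESTRAIN, AGGRESS]
    equilibria = []
    for row, col in product(actions, actions):
        row_ok = all(payoffs[(row, col)][0] >= payoffs[(alt, col)][0] for alt in actions)
        col_ok = all(payoffs[(row, col)][1] >= payoffs[(row, alt)][1] for alt in actions)
        if row_ok and col_ok:
            equilibria.append((row, col))
    return equilibria

print(pure_nash_equilibria(payoffs))
# [('restrain', 'restrain'), ('aggress', 'aggress')] -- two stable norm-sets, neither "the" true one
```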
The preferences component of morality covers everything else, such as “I prefer to drive the trolley over 1 individual rather than 5 individuals” or “I prefer to live in a world where society rewards merit, as opposed to distributing goods equally”.
I feel like viewing meta-ethics in this way helps resolve a lot of confusion.
II. The Meaning of Ethics
For example, consider the question: what does ethics mean? The orthodox theory closest to my understanding is probably Emotivism:
Emotivism is a meta-ethical view that claims that ethical sentences do not express propositions but emotional attitudes.
My main quibble[1] is that I would drop the emphasis on emotions entirely.
In my view, preferences are useful because they determine attitudes. When most people think of “attitudes”, they naturally think of them in an emotional context. But I think there’s an argument to be made that attitudes aren’t necessarily emotional. An attitude simply implies a bias in a direction of hypothetical motion. I.e. an orientation. E.g. consider aeronautics. Planes are said to have an attitude. The attitude is simply the direction toward which the nose is pointing. This doesn’t tell you where the plane is currently going, since the velocity vector often becomes unaligned with the attitude vector when executing a turn. But it does give you a rough sense of which direction the plane is going to accelerate toward if you hit the gas.
Likewise, I think it’s useful to attribute “an attitude” to something like an amoeba. An amoeba doesn’t have a nervous system, so I strongly doubt that it experiences emotions. But it does, presumably, have ways of navigating its environment via chemical signals, light, etc. It will have preferences about what types of environments are suitable for its continued existence (e.g. warm, moist, abundant food), and will naturally orient itself along the gradient of signals (almost as if it had its own Theater). Thus, I think it’s a useful construct to attribute the labels “good” and “bad” to properties or actions of the amoeba.
Emotions, on the other hand, are a privilege granted to organisms with nervous systems. They allow for more sophisticated attitudes. Although the general purpose is probably the same, which is to orient behavior toward good decisions. Where “good” is just an abstraction over a multitude of preferences, and usually gets reified to “whatever helps an organism live long and prosper”. This is deliberately vague, since the specific preferences depend on the organism, and since there’s a multitude of ways of going about achieving this.
III. The Big Three
Ethical strategies are often classified into Deontology, Virtue Ethics, and Consequentialism. Deontology is based on concrete rules (e.g. do not kill), Virtue Ethics is based on abstract attributes (e.g. act courageously), and Consequentialism is based on consequences. Philosophers sometimes try to argue that one particular way is the One True Way. But I don’t think of any of these as being “objectively true”. They’re simply different strategies. Alternatively, you could call them policies.
As an analogy, let’s consider tic-tac-toe. The game is considered “solved”, because the number of possible moves is small enough that players can map out the entire decision tree. Chess, on the other hand, is solvable in theory. In practice, the space of moves is large enough that nobody has fully mapped out the decision tree yet. Instead, players often rely on heuristics/tactics/strategies to guide their decisions. These are often imperfect.
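If you want to see what “solved” cashes out to, here’s a minimal minimax sketch (the structure and names are my own, not anything canonical) that enumerates the entire tic-tac-toe game tree. The same algorithm exists in principle for chess; it’s just hopeless in practice because chess’s tree is astronomically larger, which is why chess players lean on heuristics instead.

```python
from functools import lru_cache

# Board is a tuple of 9 cells: 'X', 'O', or None.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Best outcome for X (+1 win, 0 draw, -1 loss), assuming both sides play perfectly."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(cell is not None for cell in board):
        return 0  # draw
    moves = [i for i, cell in enumerate(board) if cell is None]
    results = [value(board[:i] + (player,) + board[i+1:], 'O' if player == 'X' else 'X')
               for i in moves]
    return max(results) if player == 'X' else min(results)

print(value((None,) * 9, 'X'))  # 0 -- perfect play from both sides is a draw
```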
Bobby Fischer loved to open with 1. e4. Opening with 1. e4 was not a guaranteed win. There were also plenty of other openings he could have played that were just as viable. But in Fischer’s mind, 1. e4 was the best opening for white, presumably because he thought it led to the greatest chance of victory.
Sometimes, players make a distinction between a results-oriented approach vs a process-oriented approach. “Results-oriented” means you base your decisions on the last result you saw. “Process-oriented” means you base your decisions on a large sample of results. In a game which is unsolved, it’s important to be “process-oriented”. Because losing occasionally is inevitable. But that doesn’t mean you should ditch your entire strategy after any one, individual loss. I.e. when evaluating the strength of a particular decision, it’s necessary to examine whether the decision made sense in the context of the information available to you, and whether it was statistically optimal despite losing in that specific scenario.
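Here’s a toy simulation of that distinction (the win probabilities are invented purely for illustration): option A wins 60% of the time, option B wins 45%. The process-oriented player just keeps picking A; the results-oriented player ditches whichever option lost last round. Over many rounds, chasing the last result costs you, even though A still loses 40% of individual games.

```python
import random

random.seed(0)
WIN_PROB = {"A": 0.60, "B": 0.45}  # made-up numbers: A is statistically better

def play(rounds=100_000):
    process_wins = 0
    results_wins = 0
    results_choice = "A"
    for _ in range(rounds):
        # process-oriented: stick with the statistically best option every time
        process_wins += random.random() < WIN_PROB["A"]
        # results-oriented: play the current choice, then switch if it just lost
        won = random.random() < WIN_PROB[results_choice]
        results_wins += won
        if not won:
            results_choice = "B" if results_choice == "A" else "A"
    return process_wins / rounds, results_wins / rounds

print(play())  # roughly (0.60, 0.54) -- the process-oriented player wins more often
```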
Likewise, the universe is a big, complicated, messy place. We all want more good, less bad. But good outcomes are not always guaranteed, no matter the type of strategy you follow. Any one strategy is likely to fail, and fail in ways different than other strategies. So instead of searching for “the one, true infallible strategy”, it makes more sense to look at each strategy’s track record, and evaluate whether the trade-offs are acceptable given your preferences.
IV. Utilitarianism
The Big Three may not be objectively right. But I’m pretty sure that Utilitarianism is objectively wrong. I don’t believe that preferences are commensurable across individuals. Not even in theory. Comparing preferences only makes sense in the context of some sort of utility function. An individual agent might contain a coherent utility function. However, the interstitial aether between people does not contain a utility function. It’s anarchy, no-man’s land, the DMZ.
Utilitarians might protest “yeah, but the brain is probably using some sort of currency-esque thing to make decisions, and this should correspond to something objective and measurable. So it should, in theory, be possible to crack open each person’s skull and just count the currency distributions for each individual.”
No. Because whatever objective currency-esque thing the brain is using to represent its metaphorical weighing scales, the Platonic Essence of the abstraction that the currency represents is not commensurable. E.g. consider the word “dog”. The word “dog” is bound to the context of the English language. “Dog” doesn’t mean anything in French. As soon as you remove “dog” from the context of English, it loses all meaning and relevance. It’s possible that the French language could adopt “dog” as a loanword. But that just means it’s now also bound to the context of French, not that “dog” has also become a universally recognized signifier in Spanish, Swahili, etc.
Similarly, preferences are bound to the decision-maker. As soon as that context is removed, they lose all meaning. E.g. Alice likes pancakes. Bob does not like pancakes. It’s possible to imagine Bob in Alice’s position of choosing whether or not to eat pancakes. It’s possible that, if Alice is a kind and caring person, her preferences will indirectly reflect the preferences of Bob. But it’s nonsensical to suggest that Bob’s preferences are directly relevant to Alice’s utility function, regarding Alice’s decision to eat pancakes. Because Bob is not the one making the decision. And preferences only meaningfully exist with respect to a particular decision-maker.
V. Politics, the Final Frontier
Utilitarianism’s flaw of incommensurability is perhaps best demonstrated by The Mere-Addition Paradox.
Consider two complex numbers, (5, 3i) and (4, 4i). Which is greater? I would say it’s a dumb question, because the complex numbers are not totally ordered. You could impose an order by picking a norm. But there’s nothing objective about this. Sans additional context, doing so would be a completely arbitrary decision.
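To put numbers on it, here’s a quick illustration (the norms here are my own arbitrary picks) of how the verdict flips depending on which norm you impose:

```python
z1, z2 = complex(5, 3), complex(4, 4)
# Python itself refuses `z1 > z2` outright (TypeError): there's no built-in ordering.

norms = {
    "real part only":      lambda z: z.real,
    "imaginary part only": lambda z: z.imag,
    "Euclidean |z|":       abs,
    "taxicab |re| + |im|": lambda z: abs(z.real) + abs(z.imag),
}

for name, norm in norms.items():
    verdict = ">" if norm(z1) > norm(z2) else "<" if norm(z1) < norm(z2) else "="
    print(f"{name:20}  5+3i {verdict} 4+4i")

# real part only        5+3i > 4+4i
# imaginary part only   5+3i < 4+4i
# Euclidean |z|         5+3i > 4+4i   (sqrt(34) vs sqrt(32))
# taxicab |re| + |im|   5+3i = 4+4i   (8 vs 8)
```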
But in the Mere-Addition Paradox, Parfit asserts the equivalent of “well, (5, 3i) > (4, 4i), since (5) is greater than (4). But also, (5, 3i) < (4, 4i), since (3i) is less than (4i). Therefore, (5, 3i) <> (4, 4i).” From this thought experiment he concludes that
“For any perfectly equal population with very high positive welfare, there is a population with very low positive welfare which is better, other things being equal.”
He calls this the repugnant conclusion, because it implies that a huge population of humans with barely tolerable Quality-of-Life (QoL) is preferable to a smaller number of people experiencing a decent QoL. But again, it’s not really paradoxical. It only seems that way because Parfit is trying to have his cake and eat it too.
Imagine living under Fully-Automated Luxury Gay Space Communism (FALGSC). I.e. a post-scarcity utopia. Why yes, it’d be just dandy if the universe were filled with infinitely many people, each of whom were infinitely happy. Unfortunately, FALGSC has not arrived yet. Which means there’s more or less a finite amount of resources to go around. You can go tall, or wide, but not both. This is simply the reality of the Pareto Frontier.
Parfit seemingly imagines that his interpretation of utility-maximization logically demands that happiness be spread as thinly and widely as possible. I would argue that, no, “hyper-wide” is simply another point on the Pareto frontier. And that where you land on the frontier is merely a matter of preferences. I, for one, prefer to err on the side of taller distributions of happiness. Maybe others would prefer to err on the side of wider distributions. But I recognize that it’s simply a preference, and not a moral dictum.
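As a toy model (the setup and numbers are entirely mine), suppose a fixed resource pool gets split evenly among N people, and welfare per person has diminishing returns in resources. Sweeping N traces out the frontier: the per-person metric favors going tall, the total-welfare metric favors going wide, and nothing in the arithmetic forces you to prefer one ranking over the other.

```python
import math

R = 1_000_000  # total resources, arbitrary units

def welfare_per_person(n):
    """Diminishing returns: welfare grows only with the log of resources per person."""
    return math.log(R / n)

for n in (10, 1_000, 100_000):
    total = n * welfare_per_person(n)
    print(f"N={n:>7}  welfare/person={welfare_per_person(n):5.2f}  total={total:9.1f}")

# N=     10  welfare/person=11.51  total=    115.1   <- "tall": per-person metric likes this
# N=   1000  welfare/person= 6.91  total=   6907.8
# N= 100000  welfare/person= 2.30  total= 230258.5   <- "wide": total-welfare metric likes this
```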
What Parfit is really doing, but perhaps doesn’t realize, is failure-mode analysis. “Failure-mode” is an engineering term for how a system breaks, when you put it under extreme stress. For example, civil engineers generally want concrete to fail slowly and visibly, as opposed to violently and silently. If some concrete element fails over a long period, and with visible cracks, then civilians can evacuate the area before it collapses entirely, and request repairs. If some concrete element fails suddenly and without warning, then civilians die. Obviously, engineers don’t want the concrete to fail. But outside of a certain envelope of conditions, failure is inevitable. And nothing in the universe lasts forever. So it’s prudent to bias the concrete to fail in specific ways.
Therefore, it’s more productive to see the paradox as choosing the least of several evils when the economy is under duress, as opposed to interpreting the repugnant conclusion as a moral dictum and therefore feeling compelled to push the economy toward failure deliberately. (Where “failure” arguably includes both the “ultra-wide” repugnant-conclusion scenario, as well as the “ultra-tall” utility-monster scenario.)
VI. Finance, the other Final Frontier
As the sagacious philosopher Zach Weinersmith points out, ethics becomes quite straightforward if you just take the Ethical Fourier Transform. And that’s what we’ll be doing here. Analogous to engineering’s “Pareto frontier” is finance’s “efficient frontier”. The efficient frontier is a particular type of Pareto frontier which specializes in modeling the relationship between risk and return-on-investment (ROI).
For anything in life, there’s downside-risk and upside-risk. Conservative investors prefer to err on the side of low-downside, low-upside. Aggressive investors prefer to err on the side of high-downside, high-upside. What nobody wants is high-downside, low-upside. As for low-downside, high-upside, this is a unicorn. A fairy tale. A $20 bill lying in the street. If you find one, jump on it quickly before the Invisible Hand scoops it up.
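For concreteness, here’s a minimal sketch of the trade-off the efficient frontier models (all the figures are invented): blending a low-risk asset with a high-risk one buys extra expected return only at the price of extra volatility.

```python
import math

# (expected annual return, annual volatility) -- made-up numbers, assumed uncorrelated
SAFE  = (0.03, 0.02)
RISKY = (0.10, 0.20)

def portfolio(weight_risky):
    """Expected return and volatility of a blend, under the zero-correlation assumption."""
    w = weight_risky
    ret = (1 - w) * SAFE[0] + w * RISKY[0]
    vol = math.sqrt(((1 - w) * SAFE[1]) ** 2 + (w * RISKY[1]) ** 2)
    return ret, vol

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    ret, vol = portfolio(w)
    print(f"{int(w * 100):3d}% risky: expected return {ret:.1%}, volatility {vol:.1%}")
```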
Some ventures are riskier than others. I liken Consequentialism to a high-risk strategy. It’s analogous to writing software in assembly code. When writing assembly code, there are few guardrails. The programmer is granted wide latitude to include as many goto’s as he wants. But with great power comes great responsibility. It also becomes easy for him to shoot himself in the foot with unhandled exceptions, memory leaks, etc. These sorts of things are less likely to happen in a dev environment which holds your hand.
Or perhaps you could liken it to surgery. When you invade someone’s innards with a scalpel, this is obviously quite risky. Which is why you only do this in scenarios where the situation absolutely demands it. Like when your heart is failing. Or your colon is knotted. Or your appendix is exploding. The specificity and complexity of the moral apparatus you’re operating under at any given moment should be context-dependent. Which is why some people argue for things like Two-Level Utilitarianism.
Fundamentally, there are two reasons why Consequentialism is risky compared to Deontology and virtue ethics.
Firstly, Deontology and virtue ethics are simpler and lower-resolution. As I detailed in Magic Runes and Sand Dunes, complexity is the price of specificity. The complexity is needed to distinguish particular elements from the rest. In the case of Consequentialism, the complexity is embodied in the accounting ledgers when weighing the pros and cons. Of course, the downside is that the additional complexity introduces new and exciting ways for things to go wrong. In comparison, Deontology and virtue ethics use simple heuristics such as “be honest” and “don’t kill”, which are robust and work well for many cases.
Secondly, there’s an element which compounds the risk: the fact that humans frequently engage in motivated reasoning. The complexity of consequentialist accounting gives a motivated reasoner much leeway to reason themselves into doing what they wanted to do anyway. And as Aleksandr Solzhenitsyn points out, does not the line between good and evil run through the heart of every man? So when Dostoevsky complains about utilitarianism, I suspect that what he’s really complaining about is that utilitarianism is not only high-risk, but also something of a moral hazard.
[1] Well, besides the fact that truth is not a property of propositions so much as a property of mental models.
(Originally, I was simply going to link to Magic Runes and Sand Dunes because I thought I’d covered this there. Alas… upon skimming the post, I’ve discovered that I’ve in fact elided this point. Perhaps I’ll revisit this in the future.)