
 _  _ _    _        _              _     _                      
| \| (_)__| |_  ___| |__ _ ___  _ | |___| |_  _ _  ___ ___ _ _  
| .` | / _| ' \/ _ \ / _` (_-< | || / _ \ ' \| ' \(_-</ _ \ ' \ 
|_|\_|_\__|_||_\___/_\__,_/__/  \__/\___/_||_|_||_/__/\___/_||_|


📆 February 23, 2023 | ⏱️ 6 minute read

Making Sense of Metaethics

Motivation

It's been over two years since I wrote my journal entry, metaethics¹. Looking back, that entry is much longer than it needed to be. I said way too much and didn't explain myself well. And I no longer agree with everything I wrote in it.

This entry supersedes that one. I aim to make this one shorter and more comprehensible.

Laying The Foundation

Let's start with facts everyone can agree on.

First, there are things that we value: having enough food to stay alive, achieving a high social status, being around people we like, etc. Some of our value structures can't be easily codified into statements, but that doesn't matter. By definition, a value is something we want fulfilled, so we navigate towards futures where our values are satisfied.

So far so obvious. Now let's move on to the most controversial section of this entry.

Interpreting Moral Language

Moral language like "should", "ought", "good", "evil", "right", and "wrong" is best interpreted as signalling either alignment or misalignment between values and actions. So if I say "You shouldn't play the lottery," what I mean is that you playing the lottery runs contrary to my values somehow. One plausible reason is that I care about your well being. I know your well being will probably be worse if you have less money, and that playing the lottery will probably cause you to lose money.
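To make that last claim concrete, here's a toy expected-value calculation in Python. The odds and prizes are hypothetical, not any real lottery's, but the shape of the argument is the same: when the expected value of a ticket is negative, buying it is misaligned with the goal of having more money.

```
# Hypothetical lottery: a $2 ticket, a huge jackpot, and a small prize.
# These numbers are made up for illustration.
ticket_price = 2.00
outcomes = [
    (1 / 300_000_000, 100_000_000),  # (probability, payout) for the jackpot
    (1 / 1_000, 500),                # a minor prize
]

expected_winnings = sum(p * payout for p, payout in outcomes)
expected_value = expected_winnings - ticket_price
print(f"Expected value per ticket: ${expected_value:.2f}")  # about -$1.17
```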

There are other ways to interpret moral language, to which I give some credence. Emotivism² suggests that moral statements express feelings. Prescriptivism³ proposes viewing moral statements as imperatives.

I agree that moral language can also express emotions and imperatives. My interpretation is fully compatible with that. But the problem with interpreting moral statements entirely as expressions of emotion or entirely as imperatives is that people use moral statements as statements of fact. My claim is that the facts moral statements refer to are facts about how certain actions affect one's values.
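Here's a minimal sketch of that reading as Python pseudocode. The function, names, and effect scores are mine, chosen purely for illustration: a moral claim comes out true when the action's predicted effects advance the speaker's values, and false when they set them back.

```
def should(predicted_effects, values):
    """On this reading, "you should do A" is true iff A's predicted
    effects advance at least one held value and harm none."""
    impacts = [predicted_effects.get(v, 0) for v in values]
    return any(i > 0 for i in impacts) and all(i >= 0 for i in impacts)

my_values = {"your well being"}
play_lottery = {"your well being": -1}  # expected loss of money
exercise = {"your well being": +1}      # improved health

print(should(play_lottery, my_values))  # False -> "You shouldn't play the lottery."
print(should(exercise, my_values))      # True  -> "You should exercise."
```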

Refuting Hume's Guillotine

At this point, I'd like to address some likely criticism, namely Hume's Guillotine⁴. For those who don't know, Hume's Guillotine is the idea that you can't derive an "ought" from an "is". In other words, no description of how the world is tells you how it should be. It's a strict separation of facts and values.

Under my interpretation of moral language, values are facts⁵. There's no distinction. As long as I value my health, I should exercise. That's a fact. If anyone disagrees, I'd be happy to reference the science showing that exercise improves health. To the objection "But science can't prove that it's Good to be healthy," my answer is simple: I value my health. No further justification is necessary.

Hume says "You can't get an ought from an is." I say "The is is the ought." Values are facts about what we care about and moral statements are facts about the effects of different actions on those values. How else could moral statements possibly be interpreted while also lining up with how they're actually used?

Intrinsic and Instrumental Values

Now that I've defended how I interpret moral language, I'd like to elaborate on values a bit. Values can be separated into two categories: intrinsic and instrumental⁶.

Intrinsic values are things we care about in and of themselves, like pleasure and happiness. Instrumental values are things we care about because they bring us closer to satisfying our intrinsic values. For example, I care about exercise because I care about my health. I care about my health because I want to feel good. And I want to feel good just because.

Let's assume that the lottery player in my earlier example values playing the lottery solely as an instrumental goal. Their real goal is having more money. Let's also assume that they value having money because they think it will make them happy. Playing the lottery and having money are both instrumental to happiness. But if I could prove that playing the lottery doesn't lead to having more money, or that having more money doesn't make one happy, then those instrumental values would disappear, assuming the lottery player is rational.
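That chain of dependencies can be modelled directly. Below is a toy Python sketch (the structure and names are mine, for illustration only): each instrumental value rests on a believed causal link toward an intrinsic value, and falsifying a link makes every value hanging below it disappear.

```
causal_links = {
    # instrumental value -> what it is believed to promote
    "play the lottery": "have more money",
    "have more money": "be happy",
    # "be happy" is intrinsic: it appears only as a target, never as a key.
}

def surviving_values(links, falsified):
    """Keep an instrumental value only if every believed causal link on
    its chain up to an intrinsic value remains unfalsified."""
    kept = []
    for value in links:
        step, ok = value, True
        while step in links:  # walk the chain toward the intrinsic value
            if (step, links[step]) in falsified:
                ok = False
                break
            step = links[step]
        if ok:
            kept.append(value)
    return kept

# Prove that playing the lottery doesn't lead to more money:
print(surviving_values(causal_links, {("play the lottery", "have more money")}))
# -> ['have more money']  (money still matters; the lottery no longer does)
```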

It's important to note that a rational agent cannot be reasoned out of their intrinsic values, but they can be reasoned into and out of instrumental values. This is where science comes in. Science can tell us what we should do given our values and what we should instrumentally value given our intrinsic values.

I won't go into more detail on that, since a very good book has already been written on how science can inform human values: The Moral Landscape⁷ by Sam Harris⁸.

Well Being

In The Moral Landscape, Sam starts with well being as his ethical foundation. So let's talk about how that works in my moral semantics.

First, I'll start with the observation that any level of intelligence is compatible with almost any intrinsic value/goal⁹. Humans, though, as products of evolution by natural selection, share a broadly common set of goals. Generally speaking, we want to promote well being for ourselves and others.

As a matter of convenience, we make the (usually correct) assumption that whoever we're dealing with has similar intrinsic values to us and thus we can attempt to reason with them. In the case of psychopaths, this may not be true. But they're rare enough that it doesn't matter. So I take no issue with Sam's starting with well being as the entry point for thinking about ethical questions.

However, as we continue to make technological progress, we must keep in mind that artificial intelligences do not necessarily share human values like well being. Unless we make sure that AI values are aligned with human values (a very difficult, unsolved problem), it won't be possible to reason morally with AI the same way we can with humans.

Being Wrong About Instrumental Values

In terms of human values, the problem isn't differences in intrinsic values. It's differences in instrumental values. Many people possess instrumental values which harm their own intrinsic values. They're just too unwise to see it.

If we agree that well being is what we're after, and that some actions, attitudes, cultures, political ideologies, and systems of government are better than others at promoting it, then moral statements are equivalent to facts about which states of affairs bring about well being.

The Importance of Moral Semantics

Even though people aren't perfectly rational, their intrinsic values might change over time, and they're often wrong about their instrumental values, we still need a way to at least try to reason with each other about morals. That's what my moral semantics is meant to do. Coming together on intrinsic and instrumental values is the foundation of cooperation. The more we agree on our instrumental values, the better things will be for us.

Getting clear on what we mean by moral language is more than just a philosophical exercise. If we can't agree on what moral statements mean or whether they have meaning at all, how can we reason with each other about morals? We'll be more likely to talk past each other and resort to violence. That's why moral semantics has practical importance.

References

🔗 [1]: metaethics

🔗 [2]: Emotivism

🔗 [3]: Prescriptivism

🔗 [4]: Hume's Guillotine

🔗 [5]: values are facts

🔗 [6]: intrinsic and instrumental

🔗 [7]: The Moral Landscape

🔗 [8]: Sam Harris

🔗 [9]: any level of intelligence is compatible with almost any intrinsic value/goal

Copyright 2020-2024 Nicholas Johnson. CC BY-SA 4.0.