
Midnight Pub

An Ethical Concept I Can't Grasp

~mellita

(This post was inspired by ~zampano's recent post, 'On the barstool'.)

My biggest source of confusion in contemplating ethics is that I don't understand, on a fundamental level, how states of well-being can be added up. What I mean is: if you asked me whether ten people or ten thousand people should be saved from horrible torture, and I had to choose one group or the other, I would almost certainly choose the ten thousand...but I don't know why. I can't explain why this actually makes sense. The individual experience of each of the ten or ten thousand will be a world of that pain unto itself. In what sense are these fundamentally subjective experiences additive? Is it a powerful moral intuition, or have I simply been trained in this viewpoint by our culture? Or is there a better explanation besides?

Two strange applications of this logic—that moral values add up—come to mind. The first is the concept of a person who does great amounts of harm as well as good, but ultimately more good, so that we might call them a net-good moral actor. But this doesn't seem right at all. Are we really supposed to think of right and wrong as inverted currencies which naturally eliminate one another? To do something categorically wrong surely remains impermissible, no matter what other good a person may do, in which case, what benefit even comes of tallying up the moral values of their deeds?

The second is the concept of one person experiencing the worst pain possible, compared with many, many people, each suffering the pain of stubbing their toe. To say that states of well-being add up would suggest that, at some massive quantity of people stubbing their toes, their suffering would equal or exceed that of the individual in tremendous pain. This also suggests that, as in the example of ten versus ten thousand people above, you could morally inflict this tremendous pain if it were to save the massive quantity of people from stubbing their toes. But this I can't take seriously at all. The pain of stubbing one's toe comes nowhere near that of this tortured individual. The fundamental experience of well-being is isolated to the individual consciousness. If you gave me the choice of making an infinite number of people stub their toes or subjecting a single individual to the greatest conceivable pain, I would select the former without even thinking. Is it a question of consent? Part of my intuition here is that I would more than happily endure many small pains if it meant that just one person didn't suffer terribly. (This strikes me as very much related to the debates surrounding mask-wearing over the past year and a half.)
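To put the worry in rough symbols (a sketch only, and it assumes the very premise I doubt, namely that each person's suffering can be scored as a non-negative number): additivity says the badness of an outcome is the sum over everyone affected,

$$S_{\text{total}} = \sum_{i=1}^{n} s_i,$$

so if a stubbed toe scores any $s_{\text{toe}} > 0$, then some crowd size $n$ satisfies $n \cdot s_{\text{toe}} > s_{\text{torture}}$, and the tally would license the torture. My intuition insists no such $n$ exists, which amounts to denying that the $s_i$ of separate minds can be summed at all.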

In consideration of such examples I return to the quandary of the first paragraph. It feels not a little obvious that a smaller, rather than larger, group of people should suffer the same ill, if the choice must be made. But why? Not to mention the issue of quantifying utility, suffering, well-being, etc., in the first place. Perhaps it's not because of well-being adding up, but because of the resources necessary to account for wrongs once they've been committed—i.e., if one person is made to suffer or die rather than five, as in the famous trolley problem, amending the wrong committed against one person is, sometimes, easier than amending that against five. (There are typically fewer people who will grieve, for instance, and thus fewer people who will require greater care.) But this idea seems to hit a brick wall if we assume we have infinite resources for comfort and care, and I wind up at my original position of blind groping through a popular moral quandary.

This also naturally relates to the 'Repugnant Conclusion' mentioned in ~zampano's post, but that appears to have more to do with what intrinsic value, if any, you ascribe to an individual human life, so I'll leave off from commenting upon it.

How does this sound to you all? Have I made the problem clear, or am I a crackpot? If anyone feels that states of well-being do add up, and further feels that they can explain how this is possible, please do so. I'm sure many utilitarians have tackled the problem; I'm simply unaware of their explanations. I would very much like to understand it.


Replies

~edisondotme wrote:

The first is the concept of a person who does great amounts of harm as well as good, but ultimately more good, so that we might call them a net-good moral actor.

AKA the "utility monster".

You pointed out a lot of classic problems with utilitarianism. I agree with ~marginalia on this: these questions have stumped ethicists for ages! I don't think states of well-being add up, and to think that they do introduces a calculus of morality for every moral action, which isn't helpful.

~tskaalgard wrote:

The needs of the many outweigh the needs of the few - I would accept this as an axiom.

The best way to tackle these questions is to examine both the number of people suffering (obviously two people suffering is twice as bad as one person suffering the same ill) and the amount of suffering (if I can stub my toe to keep you from being tortured, I should). That's the closest you'll get to an "answer" to this problem. There are a billion obscure citations I could make for one point or another, but this is the best way of looking at it if we're talking about real human suffering.
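Put as a rough decision rule (a sketch only, assuming headcount n and severity s can each be put on a common scale): between two bad outcomes, prefer the one with the smaller product,

$$\text{choose the outcome that minimizes } n \cdot s.$$

At equal severity, $10 \cdot s < 10{,}000 \cdot s$, so save the many; at equal headcount, $s_{\text{toe}} \ll s_{\text{torture}}$, so take the stubbed toe.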

~tiernan wrote:

I am not well read enough in utilitarianism to answer the main question, but at the beginning you stated, "Is it a powerful, moral intuition, or have I simply been trained in this viewpoint by our culture?" I would contend that it would have to be an intuition, since culture is a product of people, who, in order to instill their morality into the culture they created, would need to have gotten it from a previous culture, and so on ad infinitum.

Your paragraph discussing the possibility of infinite care and comfort is revealing. It seems like you might be torn on consequentialism here, because if the deaths of the five and of the one both cause no suffering in others, the consequences are negligible, and thus neither moral action is "better". You are right that it has to do with the intrinsic value of humans, which determines whether the act of killing them is intrinsically wrong, regardless of the outcome. If human value is merely subjective, and neither death causes any grief, then neither moral action is better.

I don't know if you think humans have intrinsic value, or that actions can be right/wrong regardless of outcome. (For full disclosure, I do). Settling these 'first principle' issues will probably give you more consistency tackling these problems.

I would also rebut ~zampano's first commenter on the other post, who said consistency is not important. It is. A philosophy is LIVED. Those 'insufferable nerds' are challenging what is essentially a lifestyle put into words. An inconsistent lifestyle (with confusing/arbitrary decisions) will lead to chaos.

~marginalia wrote:

Yes. I think this entire part of the field of ethics is plagued by a fundamental misconception about emotional states: they aren't numbers you can tally up. You would think, living in a world today with historically unprecedented degrees of material wealth, access to culture, social equality, safety, comfort, pretty much any external goods you can think of, that we would be happy all the time. After all, these are things that make us happy when we are given access to them. Our ancestors would think we were royalty, angels even. But we are not happy all the time. That's not how happiness works. Aside from temporary disturbances toward happiness or discontent, we're all mostly at some sort of baseline. You win the lottery and you are happy for a while, but then it's just the new normal; you still have problems. You lose a loved one, and you are sad for a while, but then it's normal again; you can still be happy. If you're depressed, that baseline is shifted toward the dour, but it's still a baseline.

The discussion of ethics in the western tradition originally started out as a discussion on which manner of living was the best, from a purely selfish point of view. Plato pointed out the inherent contradiction in doing things you think are bad--they are the things you *don't* think should be done. People who do bad things are essentially confused about what is good. (As an aside: "dolorem ipsum dolor sit amet" is Cicero basically repeating a similar notion a few hundred years later -- in paraphrase: [nobody] loves [or seeks] pain for the sake of pain itself.)

The discussion of ethics has gradually shifted toward a discussion of civic virtues, i.e. which manner of citizen is the best for society, and I think viewing happiness as a social problem is where we tripped up.

The only person who has the means to truly affect someone's well-being is that person themselves, through their judgements and actions. Anything else is the textbook definition of a codependent relationship. That's bad if it's just two people, but even worse if it's on a societal scale. You cannot take responsibility for other people's happiness; not only will you fail, but you will make yourself miserable in the process.

That isn't to say you shouldn't contribute to society or help other people; there are purely selfish reasons to do that. Contributing to a common goal feels really meaningful and good. It makes you feel like you belong, like you live in society rather than on it. Have you ever heard someone complain about how proud they feel? Have you ever heard someone celebrate feeling like their life lacks meaning? I haven't.

~ns wrote:

I've never studied ethics or morals or philosophy or anything like that, so maybe I'm missing something. But it seems easy to me. Just add up the long-term net damage.

Save 10 people or 10,000? Well that's easy, 10 is less than 10,000.

Have one person experience the greatest conceivable pain, or have everybody else stub their toe? Well, I certainly don't remember every time I've stubbed my toe, and it's definitely not traumatic. The "damage evaluation" of that is practically 0, and our sacrifice to the toe-stubbing gods is experiencing something greater than 0.
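As a sketch of that arithmetic (assuming, for the sake of the tally, that damage can be scored per person): if forgettable harms round down to zero, then

$$n \cdot d_{\text{toe}} \approx n \cdot 0 = 0 < d_{\text{torture}} \quad \text{for every } n,$$

so no headcount of stubbed toes ever outweighs the one great pain. The 10 versus 10,000 case is different: there the per-person damage d is equal and nonzero, and $10\,d < 10{,}000\,d$.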

Of course, there's a limit to how well we can calculate long-term damage. Would you rather give a dollar to a homeless person or save a baby from a burning building? Surprise: the baby grows up to lead the biggest genocide in history. _Those_ sorts of ethical quandaries never particularly interested me; we can't know the future that far ahead.

~zampano wrote:

I'm pleased I could serve as inspiration!

To answer the question, I think the only answer (and this is something I was trying to get at in my ramblings from the barstool) is that we can't think about ethics logically. It's intuitive or bust.