On population ethics, the development of Derek Parfit's thought, and the origin of Parfit's "repugnant conclusion"

https://sjbeard.weebly.com/parfit-bio.html

created by drinka40tonight on 13/01/2020 at 23:28 UTC

524 upvotes, 9 top-level comments (showing 9)

Comments

Comment by spacecadet84 at 14/01/2020 at 10:22 UTC

68 upvotes, 5 direct replies

Maybe I'm just not 'philosophical' enough, but I feel comfortable dismissing "the repugnant conclusion" as a conceptual plaything.

To me it seems obvious that an ecologically intact world of 500 million people living rich happy lives is immeasurably preferable to a ravaged dystopia of 50 billion miserable wretches, in the same way I prefer kids not to get leukemia. If anybody disagrees on philosophical grounds, all that demonstrates to me is that an intellectual obsession with abstraction causes some people to lose touch with reality.
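To spell out the arithmetic this dismissal is aimed at: on the total view, a world's value is just population times average well-being, so a large enough population of barely-worthwhile lives always outscores a smaller, much happier one. A minimal sketch, using invented well-being numbers alongside the 500 million vs. 50 billion figures above:

```python
# Toy illustration of how total utilitarianism generates the
# repugnant conclusion. The well-being values are invented;
# the argument only needs the large population to be large enough.

def total_utility(population, avg_wellbeing):
    """Total view: sum of well-being over every life lived."""
    return population * avg_wellbeing

# World A: 500 million people living rich, happy lives.
world_a = total_utility(500_000_000, avg_wellbeing=80.0)

# World Z: 50 billion people whose lives are barely worth living.
world_z = total_utility(50_000_000_000, avg_wellbeing=1.0)

print(f"World A: {world_a:.1e}")  # 4.0e+10
print(f"World Z: {world_z:.1e}")  # 5.0e+10 -- Z wins on total utility
```

On paper Z comes out "better", which is exactly the conclusion being called repugnant.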

Not denigrating philosophy here, just saying it has a few rabbit-holes that don't lead anywhere useful.

Comment by [deleted] at 14/01/2020 at 14:04 UTC*

4 upvotes, 1 direct reply

A fundamental problem I have with utilitarianism, in general, is the choice of metric. Often simplified to "the most good for the most people", and generally in the context of happiness or pleasure, the metric hinges on something particularly unattainable. The human desire for happiness or pleasure is insatiable, and so there's no way to optimize the utility function.

On the other hand, if the utility function were to reduce as much evil as possible, this creates a baseline at zero. While it may, in fact, be impossible to eliminate pain altogether, at least 0 is a tangible point on the number line, something we can at least measure against.

In a conversation with a colleague, he asserted that Mill's view was to do both: provide the most good for the most people WHILE eliminating as much evil (i.e., pain) as possible for the most people. I contend that's a misreading of Bentham, who asserted you can do one or the other, but not both. My colleague had a difficult time understanding that, when you attempt policies that strive to do both, you inevitably end up with problems of nonlinearity that are not readily solved.

In the case of Parfit, the "repugnant conclusion" stands so long as we attempt to attain "the most good" in absolute terms. However, if we reverse the metric to attain the least evil, absolute values remain a problem. Is attempting to eliminate evil for 100,000 people not better than attempting to eliminate evil for 100? The more people there are, the more evil you're preventing or eliminating, in absolute terms. This conclusion serves to reinforce continuous population growth as well.

What, then, is the solution? Well, it seems particularly obvious to step away from absolute values. Attaining lower ratios of people suffering evil/pain should be the metric by which we measure utility. Using the attainment of good/pleasure, even as a ratio, still falls prey to the problem of insatiability.
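To make the contrast concrete, here's a minimal sketch (the populations and suffering fractions are invented for illustration): an absolute count of sufferers scales with population size, while the ratio is invariant to it, which is what lets the ratio metric avoid rewarding sheer population growth.

```python
# Comparing the two candidate metrics from the argument above.
# All numbers are made up; only the rankings matter.

def absolute_suffering(population, fraction_suffering):
    """Absolute metric: raw count of people in pain."""
    return population * fraction_suffering

def suffering_ratio(population, fraction_suffering):
    """Proportional metric: share of the population in pain."""
    return fraction_suffering  # independent of population size

small_world = (1_000, 0.05)      # 1,000 people, 5% suffering
large_world = (1_000_000, 0.01)  # 1,000,000 people, 1% suffering

print(absolute_suffering(*small_world))  # 50.0
print(absolute_suffering(*large_world))  # 10000.0 -- "worse" in absolute terms
print(suffering_ratio(*small_world))     # 0.05
print(suffering_ratio(*large_world))     # 0.01 -- better as a ratio
```

The larger world looks worse on the absolute count despite doing far better per person; the ratio gets that ranking right, and it doesn't improve merely because the population grows.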

Comment by LocatedEagle232 at 14/01/2020 at 01:34 UTC

4 upvotes, 4 direct replies

Realistically, none of the main population concerns will apply to our own lives, only to those of generations of descendants. However, by the time they are born, we will be dead; we won't know them or their circumstances. They are essentially strangers to us, and we lose the emotional attachment we would have to our own children.

We try to find universal solutions and equations to everything, but this problem, to me, seems like asking the purpose of life. Everyone has a different answer. It depends on their background and experiences.

Implying that any of these philosophers' ideas is the ideal solution means morally leaning towards them. Morals are generally subjective, which means there is no morally correct way of dealing with population control. We simply have to do what we believe is the best option. It's humbling and almost humiliating to accept that, at our best, we are unique individuals who accomplish things through conflict and reaching for the best scenario, while, at our worst, we are silly animals trying to list out all the digits of pi.

Comment by BernardJOrtcutt at 13/01/2020 at 23:39 UTC

1 upvote, 0 direct replies

Please keep in mind our first commenting rule:

**Read the Post Before You Reply**
Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.

This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the subreddit rules[1] will result in a ban.

1: https://reddit.com/r/philosophy/wiki/rules

--------------------------------------------------------------------------------

This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.

Comment by [deleted] at 14/01/2020 at 16:07 UTC*

2 upvotes, 1 direct reply

[deleted]

Comment by Caconym8 at 14/01/2020 at 10:46 UTC

0 upvotes, 0 direct replies

I suppose the difficulty of defining persons that Parfit explored was not taken so far as to view animal happiness as worthwhile?

Has non-human life been considered in this problem?

Comment by ribnag at 14/01/2020 at 03:37 UTC

-6 upvotes, 3 direct replies

Am I the only one who will admit the *really obvious* solution to this version of utilitarianism?

It's not *really* about maximizing happiness; it's about maximizing **my** happiness. And if that means everyone else gets to benefit, hey, cool.

There's no "repugnant conclusion" required. An infinite number of people who are one red whisker shy of committing suicide don't improve *my* happiness, so it's not actually within the universe of desirable outcomes.

Comment by Demz_Boycott at 15/01/2020 at 01:14 UTC

0 upvotes, 1 direct reply

This was one of the most fascinating articles I have read in quite some time. While I appreciate and understand his stance, I am still #teamthanos

Comment by [deleted] at 14/01/2020 at 02:12 UTC

-13 upvotes, 0 direct replies

[removed]