12 upvotes, 1 direct replies (showing 1)
Realistically, none of the main population concerns will apply to our lives
The study of artificial intelligence is an example of a practical issue that is affected by this philosophy. There are theorems showing that any way we program an intelligent system, it will be equivalent to (or will eventually become, through self-modification) an approximately rational utilitarian agent.
So how do we build AIs that do not fall prey to some form of the repugnant conclusion? There are some suggestions in the literature, but no slam-dunk solution.
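To make that concrete, here is a toy sketch (the world labels and welfare numbers are made up for illustration, not taken from any real system): an agent that ranks worlds by total summed welfare, the simplest utilitarian objective, ends up preferring a vast population of lives barely worth living over a small, very happy one, which is exactly the repugnant conclusion.

```python
def total_welfare(population):
    """Sum of individual welfare levels: the quantity a total utilitarian maximizes."""
    return sum(population)

def preferred_world(worlds):
    """A rational total-utilitarian agent picks whichever world has the highest total welfare."""
    return max(worlds, key=lambda name: total_welfare(worlds[name]))

# Hypothetical worlds: A is small and very happy, Z is huge with lives barely worth living.
worlds = {
    "A": [90] * 10_000,      # 10,000 people at welfare 90 -> total 900,000
    "Z": [1] * 1_000_000,    # 1,000,000 people at welfare 1 -> total 1,000,000
}

print(preferred_world(worlds))  # prints "Z": the agent prefers the barely-happy mega-population
```

The point is not that any actual AI would be coded this literally, but that anything behaving like a maximizer of a simple aggregate welfare measure inherits this ranking by construction.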
We try to find universal solutions and equations for everything, but this problem, to me, seems like asking what the purpose of life is.
I agree in principle, but if we don't at least concretise some idea of our meta-ethics, or meta-meta-ethics (etc.), we could be in trouble.
Comment by LocatedEagle232 at 14/01/2020 at 03:11 UTC
2 upvotes, 1 direct replies
If it's being treated like thought analysis for a program, that is trickier. All in all, though, I think it would still boil down to the programmers' morals.
I remember hearing this debate when it came down to self-driving cars and whether the driver or the greater number of people should be saved. I think it's interesting that a big concern would be the ethics rather than the cars themselves, but it does bring up a new issue: if we had to decide between randomizing the robot's choices, more like a human would, or having it make a morally guided decision that some might not agree with, where would we go?
Trying to think of ethics and morals like a computer would is immensely complicated, so I see how we would need to make the general decision and let the program execute it. I really like this take because I've never considered the benefits of philosophy beyond personal interest.
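Just to spell out the two options being contrasted, here is a rough sketch (the action names and the moral scores are placeholders I made up, and the scoring function is precisely the part people would disagree about): one policy picks an action arbitrarily, the way a panicked human driver might, while the other always takes whatever the programmed ethic ranks highest.

```python
import random

def randomized_policy(actions):
    """'More like a human': no principled ranking, just an arbitrary choice."""
    return random.choice(actions)

def moral_policy(actions, moral_score):
    """'Morally guided': deterministically take the action the programmed ethic scores highest."""
    return max(actions, key=moral_score)

# Hypothetical crash-avoidance options and an entirely made-up moral scoring.
actions = ["swerve", "brake", "stay_course"]
moral_score = {"swerve": 0.2, "brake": 0.7, "stay_course": 0.1}.get

print(randomized_policy(actions))           # could be any of the three
print(moral_policy(actions, moral_score))   # always "brake" under this scoring
```

The second policy is only as defensible as the scoring baked into it, which is why the choice between them is an ethical question before it is an engineering one.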