3 upvotes, 1 direct replies (showing 1)
If it's being treated like thought analysis for a program, that's trickier. All in all, though, I think it would still boil down to the programmer's morals.
I remember hearing this debate about self-driving cars and whether the driver or the greatest number of people should be saved. I think it's interesting that a big concern is the ethics rather than the cars themselves, but it does raise a new issue: if we had to decide between randomizing the robot's choices, more like a human's, or choosing a morally guided decision that some might not agree with, where would we go?
Trying to think through ethics and morals the way a computer would is immensely complicated, so I see why we would need to make the general decision ourselves and let the program execute it. I really like this take because I've never considered the benefits of philosophy beyond personal interest.
Comment by [deleted] at 14/01/2020 at 04:09 UTC*
0 upvotes, 0 direct replies
`this post was mass deleted with www.Redact.dev`