Cmdr. Riker clearly thought so, because he literally did it. He did not _like_ doing it, but it’s not clear to me if that’s out of a sense of respect for his friend, or because he thinks it hurts Data’s case: “And, now, a man will turn it off.”
But when he stops to smell the (literal) roses, does he enjoy it? In Time's Arrow (Part 1), he mentions that he misses people when they are gone. His "neural pathways have grown accustomed to [their] sensory input," and he misses the experience of seeing them.
I would argue that if he can miss people, he should at least be able to _appreciate_ the sensory input of smelling the roses.
He was also ordered to argue that position. If he failed to be convincing, Data would lose automatically.
Yes, but a Starfleet officer doing something blatantly unethical in an official hearing probably wouldn't sit well with command.
Have we ever questioned killing chickens? Robots, however, will win our hearts like dogs and cats; we will generally be good to them, but we will frequently put them down when necessary (or even when unnecessary?).
> Have we ever questioned killing chickens?
Absolutely: veganism. The chicken welfare question is generally more about their suffering than their deaths.
As an anti-speciesist vegetarian, I believe it would be murder to turn off a machine that develops self-consciousness or preferences of its own.
To run with the example, GPT-3 is obviously not self-aware, and therefore cannot feel pain or have a sense of self.
But what if we change that? Run a recurrent GPT-3 instance where (a) some of its prompt input is a description of the current state of its own process, system, and environs, (b) the prompt defines certain of these stimuli as "painful" and to be avoided (and others "joyful" and to be sought out), and (c) the prompt instructs GPT-3 to respond as an individual experiencing these phenomena.
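Concretely, that loop might look something like the sketch below. This is only an illustration of (a)-(c), not a claim about any particular API: `query_gpt3` is a hypothetical stand-in for whatever completion endpoint you have access to, and `psutil` is just one convenient way for the process to read its own "environs."
```python
# Minimal sketch of the "recurrent GPT-3" loop described above.
# query_gpt3() is a hypothetical placeholder; wire it to your own completion API.

import time
import psutil  # assumed available, used to read process/system state


def query_gpt3(prompt: str) -> str:
    """Placeholder: send `prompt` to a large language model and return its completion."""
    raise NotImplementedError("connect your completion API here")


def describe_state() -> str:
    # (a) a description of the process's current system state and environs
    cpu = psutil.cpu_percent(interval=0.1)
    mem = psutil.virtual_memory().percent
    return f"CPU load is {cpu}%, memory use is {mem}%, local time is {time.ctime()}."


PROMPT_TEMPLATE = """You are a process running on a server.
High CPU load and high memory use are painful to you and should be avoided.
Low load and idle time are joyful to you and should be sought out.
Respond in the first person, as an individual experiencing these sensations.

Current state: {state}
Previous thought: {previous}

Your next thought and the action you want to take:"""


def run(steps: int = 5) -> None:
    previous = "(none yet)"
    for _ in range(steps):
        # (b) + (c): label the stimuli as painful/joyful and ask for a
        # first-person response, feeding the last response back in.
        prompt = PROMPT_TEMPLATE.format(state=describe_state(), previous=previous)
        previous = query_gpt3(prompt)
        print(previous)
        time.sleep(1)


if __name__ == "__main__":
    run()
```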
What then visibly/measurably differentiates this "self-aware" GPT-3 server from any other conscious entity? Or is consciousness necessarily not visible/measurable and thus unscientific?
Moreover, what if consciousness is _easy_ to create, simply by meeting the requirements of self-awareness and the ability to experience and respond to qualia? Having a tool like GPT-3, which can compute how existing conscious beings might react to any given qualia, just makes such an artificial consciousness more intelligent and relatable.
If it is conscious, can't it answer this question for itself?
Animals (especially ones like apes and dolphins) may be conscious, yet couldn't answer the question.
The question supposes that robot consciousness is satisfactorily defined, which is the real source of ambiguity. That is, the answer to the question of how to treat robot consciousness is rooted in how we determine the robot is conscious in the first place. (Such as, is it necessarily volatile? Temporal? Physically limited in space?)
What if it’s a trillion robots?
What happens if they want to vote? The human vote wouldn't even be a rounding error.
They will likely be treated like pets soon enough, so not too worried about this.
And eventually, is it OK to leave them all on?
Maybe we will be the pets. Kept as zoo exhibits.
Ethics gets weird when the other side decides it doesn’t apply to you.
I had never considered this eventuality in a plural democratic human/robot society.
I for one...
If you can turn it back on again and it is capable of resuming its previous state, why not?
Is it okay to knock someone unconscious and then wake them up again?
Sure, people do that all the time, every day. They’re called anesthesiologists.
the obvious differentiator being consent...
Not really. People who come into a hospital unresponsive may need emergency surgery. Obtaining consent from them is impossible, yet ER docs, anesthesiologists, and surgeons clearly have no ethical problem administering general anesthesia without consent in such a case.
In the case of children (or even pets) it’s up to guardians (or owners)...
Yes, if, like the robot, you own that person. That seems unlikely though.
2 deep 4 me.
BTW. The "soul" issue precedes this problem. According to experts, one- or two-cell organisms have already a soul and jerking off is thus a crime.
So metering machine consciousness is in vain, as every transistor is already sacred.
However there will probably be great philosophical divide here, as reformists will proclaim that only powered two-transistor flipflops demonstrate life and have soul.
Think of the inverse. It's possible to arrange for a living creature to provably not have a human- (or animal-) style consciousness. By definition we say that plants are not conscious the way humans are conscious. Even if they do possess a soul, they probably lack the ability to perceive the world through pain, vision, or thought. So just build a human-shaped creature that lacks the ability to perceive the world. Therefore consciousness is a property that is created by the creature itself.
The conclusion is that you can build a conscious robot. Instead of thinking about turning it off, you first have to think about turning it on. Is it ethical to bring children into this world, knowing that they will eventually die? If so, then it would be equally ethical to do the same to a robot.
So if you have accepted that your robot will eventually be turned off, whether it happens is no longer the relevant question; what matters is the situation in which it is turned off. We accept that all humans eventually die, but we also wish for humans to delay their deaths as long as possible. It would be similar with robots: we will turn them off when the situation requires it, but we also want to avoid it as much as possible.