💾 Archived View for dioskouroi.xyz › thread › 24918328 captured on 2020-10-31 at 00:49:14. Gemini links have been rewritten to link to archived content
________________________________________________________________________________
The problem is, we have neither a definition of nor a test for consciousness. So these discussions usually go in circles.
I am aware of three definitions of consciousness, often used without clarity:
a/ consciousness = intelligence. By this definition, a lot of AI has some level of consciousness
b/ consciousness = being awake and alert. Many studies, for instance, compare MRIs of alert vs. unconscious humans and focus on this aspect.
c/ consciousness the way we humans experience the sense of self awareness ("watching a movie from inside", etc). This must be more than an illusion because for instance right now I am writing about it so that "sense" is affecting my actions. But how do you even define or test for it?
It's likely you have a lot of (a) without (c), and you could have a lot of (c) with little (a). We can test for (a) but not for (c), so for now it's all philosophical speculation.
Until we can define and test for (c), I don't see much progress being made here.
The funniest problem with defining consciousness is that any definition marks some beings as conscious and everything else as non-conscious. It is bad to kill a human and acceptable to kill an animal -- why? Because of the consciousness of the human, isn't it? So if animals are conscious, then they should have "human" rights, including access to education and medicine.
Moreover, there are groups who would fight any definition that treats newborn humans as non-conscious (because from their point of view, the most important property of such a definition is that it is the first step toward legalizing the killing of children). But there are theories that treat consciousness as something acquired by humans over the course of their individual development. That would mean that in some special cases a child could develop into a non-conscious adult. Does it mean that it is OK to slaughter such an adult and eat his meat? Does it mean that abortion is a morally neutral thing?
At some point psychology was forced to stop using the word "soul", because research into the soul led to trouble for various religions: for example, if the soul is a function of a mortal brain, then the soul cannot be immortal, can it? Psychology mechanically replaced "soul" with "mind" and continued. I believe we have reached a point where psychology will need to find a new term for "consciousness", so that psychologists' findings don't lead to disastrous conclusions on the social plane.
On the other hand, biology has managed to research life all this time without changing its terms, despite the fact that biology pins life to matter, and therefore there can be no life after death.
There is a famous definition of consciousness by Thomas Nagel which states that an organism is conscious if there is something that it is like to be that organism. I've always liked that definition even though it doesn't tell us how and why it emerges. But nobody knows that anyway.
The amazing thing for me about consciousness is that it seemingly emerges from the gray matter of the brain. From the outside there is nothing that would suggest such a thing yet it exists. We don't know when and how exactly it comes into existence but there seems to be a connection between the complexity of the neural network and the things it is capable of.
Then there is the argument that consciousness may not be reserved for human or animal brains but instead is a state emerging from any system of sufficient complexity. What if there are consciousnesses out there that operate on time scales beyond our means of recognition? This is a philosophical point of view but fascinating to think about.
>an organism is conscious if there is something that it is like to be that organism.
I am afraid I can't quite parse that
I agree. Here is the quote from the Wikipedia article intro.
>Nagel famously asserts that "an organism has conscious mental states if and only if there is something that it is like to be that organism -- something it is like for the organism."
I cannot figure out what the pronoun "it" is supposed to be referring to: the organism, or the something? Specifically the phrase "it is like to be" does not grok for me.
The relevant reading material is "What is it like to be a bat"[0].
The way I understand it, it is asking you to imagine yourself as that type of being and try to decide whether that makes sense to you, such that if it does, then that object could be conscious. I (and many others) find it to be a really silly thought experiment, since it's testing your own power of imagination rather than anything interesting about the object.
[0]
https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F
> This must be more than an illusion because for instance right now I am writing about it so that "sense" is affecting my actions.
No, it could be that (c) is an illusion arising from the fact that you are doing the writing, which I guess is the result of enough (a). There is some scientific evidence that (c) is 'constructed' after the fact by (a) and (b). For example, (c) appears to lag about half a second behind the actual sensed reality.
> For example (c) looks to lag about half a second behind the actual sensed reality
I think you meant (b) lags behind, not (c). I might be wrong.
> we don't have a definition nor a test for consciousness
Isn't this a bit like saying we don't have a definition for "cloud" due to not having defined exactly at which density, size, etc water vapor qualifies as a cloud? There's certainly a gradient of chemical responses to outside stimulus within an organism, ranging from reflex to subconscious responses (e.g. heartbeat speed) to awareness to consciousness. Defining consciousness within that gradient would then simply consist of specifying exactly which subset of responses qualify as falling under the umbrella definition for consciousness (e.g. certain patterns of chemical response in specific parts of a brain).
There's then the argument that it's effectively impossible for two things to be exactly the same physically, so how can we know that "my" experience of consciousness is not some unique snowflake in some universe simulation. An interesting psychological tidbit that seems relevant is the observation that every salmon has the same number of bones. What's interesting about it is that many people have the incorrect intuition that skeletons within a fish species have a much higher variance than they actually do. Similarly, there's this intuition that just because neurons connect in seemingly random ways, a more complex pattern built on top of said neurons might then have higher variance than genetics would suggest.
The problem with a lot of philosophical arguments is that they jump from "consciousness is vaguely defined" to "therefore it's technically possible for some highly unlikely setup to be true". That's not how we do science elsewhere (i.e. we typically look at what's statistically more likely via p values, phenotypes, etc).
Another point of contention is the somewhat tautological nature of defining consciousness. It basically boils down to "it is what it is and it isn't what it isn't". But the thing is: many things are defined in tautological ways (e.g. the cloud above, or what is the first "chicken" in the "chicken and egg" problem). You could easily get out of the problem by simply specifying an arbitrary definition with arbitrary breadth, since it is already inherently vague and arbitrary to begin with. This is more or less the same process by which we came up with the definition of even hard science things like the meter.
Regardless, even without an exact definition, it's possible to engage in the discussion of the ethics of lab-grown consciousness, specifically how it applies to real life. One can argue that lab-grown consciousness is not necessarily bad in and of itself (similar to the fact that animal slaughter exists for the purpose of feeding people). I think people's objections to it stem from ascribing human rights (e.g. not being subject to cruelty) to things that are inherently similar to humans. But in that sense, it doesn't even need to be remotely close to being human-like, as we already debate the ethics of animal torture and hypothetical scenarios that never actually play out in real life w/ real functioning people.
At the end of the day, ethics are just a framework to formulate things like rule of law, and ultimately are about a balance between individual freedom vs collective interest. We know from politics that there are virtually endless disagreements about where said balance lies. One could then argue that it's phenotypical that individuals may have disagreeing opinions about various subjects and that there's no point in pursuing "absolute righteousness" when said endeavor boils down to equally pleasing all members of a phenotypically heterogeneous population.
From there, there are two possible logical conclusions: a) that "bad" things must necessarily exist because badness is not a consensus (and contradictory depending on who you ask) and that existence is an eternal struggle or b) that the pursuit of absolute righteousness involves taking the zen approach of letting go and letting things be as they are.
I just went down the rabbit hole reading the "related articles"[0][1] to this one.
This stuff is completely amazing and a little bit scary:
"Neuroscientists have transplanted human glial cells into mice, for instance, and found that the animals perform better in certain tasks involving learning."
We already are at a point where we can grow other human organs inside animals (like human hearts inside pigs).
What if we could grow a human brain inside an animal? Would it have human rights?
What if we could grow a human brain out of braincells that were taken from an existing human brain (ex vivo)? Would it have some memories?
This is wild...
[0]
https://www.nature.com/articles/d41586-018-04813-x
[1]
https://www.nature.com/news/the-boom-in-mini-stomachs-brains...
edit: formatting
Lots of important questions like this came up for me when I was studying biology in college. We should treat it like we would an undiscovered planet: tread as lightly as possible while gathering as much information as we can. The thought of some sort of human consciousness evolving unintentionally is truly gruesome horror for me.
Can lab-grown brains become conscious?
This is about as horrific a future as I can imagine. Humanity's worst possible achievement would be the ability to sustain and manipulate a brain, outside of a body.
Yes, it is very "I have no mouth and I must scream"-esque.
Personally, I'm not really convinced that electrical activity in the brain means there's a subjective consciousness there. Yet the fact that we don't know (perhaps can never know) is enough to say that continuing with such experiments is immoral and moves into the ugliest, worst side of science.
I've front-loaded my horror fantasy with tech that can communicate with the helpless brain.
I never read that particular Ellison piece. I figure this is one of those universal "What's the worst thing I can think of" tropes.
Do a quick search on Youtube, there's a recording of I Have No Mouth read by Ellison. It's ~40min long. It's so damn good but will also put you in a nihilistic funk for a while.
There's a part of me that believes in this age of humanity, deep down we still question whether or not God/gods is/are real. Thus, we're trying to commit the most insane and outrageous atrocities to humanity, science and the natural world, just so he shows up and yells, "WTF is wrong with you people?!?!". It's like trying to get attention from your parents. If doing the right thing doesn't work, burn the house down.
I understand your point.
I have two others:
1) Atheists do exist.
2) There have been insane and outrageous atrocities to humanity, science, and the natural world -- this might just be the next chapter in that saga. Either that saga continues until Humanity doesn't, God comes knocking, or a "Star Trek Federation" type scenario comes around.
Every culture has a belief about the intrinsic value of human life beyond rationality, which could be described as a faith.
I think most objections against experiments of this nature stem from the fear that this could change. It might, but also due to different factors like ecological degradation and overpopulation.
But if outrageous atrocities are inevitable, why would you stop here?
It's not irrational to hold an opinion. I like watermelon more than grapefruit. Nothing irrational about that, just subjective.
Comparing that to faith makes no sense to me. Faith is a willful suspension of disbelief, against all evidence. That's not an opinion.
> 1) Atheists do exist.
Are you so certain about that? Atheists, in all cases, are paradoxical. They fall into two categories.
1) I am an atheist because I don't _believe_ in God.
2) I am an atheist because I _know_ God _does not exist._
The first sense is not ontological, but based on what they believe. They are, in fact, the reciprocal of an agnostic.
In the second sense, they claim the unknowable.
So if atheists exist, they are either mistaken about what they are or they claim knowledge they can not possibly have.
Atheism is somewhat impossible, and atheists, all of them, at best, are agnostics. So perhaps atheists do not actually and legitimately exist.
I am so certain that atheists exist.
I am one. A valued friend is also. We are intelligent and emotionally aware people. Both of us know and value people who believe in God. Speaking for myself, I see no disconnect.
Regarding your "they claim the unknowable": if an atheist is claiming the unknowable, so is the theist. Yes, I do experience great emotions and experiences. I do not attribute these to God. If you do, that is great and please enjoy that. I will have mine as I shall.
My point is that if you insist that I am at best an agnostic, so are you. Pascal's Wager. We just are placing opposing bets in Las Vegas.
This is a perverse level of precision to insist on. Yes, people cannot hope to have 100% certainty that no gods exist. No, that should not be the threshold, unless you would call yourself agnostic on whether leprechauns exist. And nobody (except a few people more interested in proving a point than in discussing how people actually talk) does that.
I'm not sure I understand what it means to be the reciprocal of agnostic, but I very much believe that the entire concept of deities is nonsense. People who call themselves agnostic tend to disagree, and people that call themselves atheist tend to agree. I'll continue to call my self an atheist so that I can actually be understood.
Is this purposefully contrived to confuse?
You seem unaware of the fact that agnosticism and atheism are perfectly compatible with each other. Agnostic atheists are still atheists.
>1) Atheists do exist.
Duh?
I'm super confused if my tongue-in-cheek comment went over your head or not. I do mention setting a house on fire to get someone's attention. I figured that's more than enough info to declare not to take the entire comment seriously.
I think it was the part about "deep down" questioning whether gods are real. Many people are happy to question it, nothing deep down about it. I assume that was what they meant by atheists do exist.
The way you wrote it made it sound as if you were saying that everyone at the surface believes gods are real, and only deep down do they perhaps, maybe, question it. It sounds as if you were taking it for granted that everyone obviously believes gods are real, which is of course not true, whether you meant that or not.
Gods aren't real. The point of the very witty comment is that people are subconsciously still always arguing about them: "If God is real, would he let me do this!?"
Or, "If God is real, surely this will flush him out."
Yay! Someone got the joke! Though, I'm dead inside now since people took it way too seriously and got into a pissy fit that I insinuated there are no atheists.
Thank you, because for a while there, I thought I was taking crazy pills again.
God is questioning whether she exists all the time.
Wow... I mean... I get you're trying to understand where the misunderstanding is. But, this smells like people not understanding the difference between figurative and literal. Like right there, do I actually think this gives off a smell of some sort, no and the fact I now have to clarify that means I literally believe humanity is already fucking doomed. This is just one large example of, "Chill out, it was a joke."
> deep down we still question whether or not God/gods is/are real.
Deep down? I'm happy to question it consciously and out loud, should anyone ask me about it directly.
Sure, maybe it's that complex contrived situation. Or maybe people are just curious.
Dan Simmons' "The Fall of Hyperion" describes a setting where the brains of suspects of high crimes are put into a jar for interrogation. The death sentence in that setting was abolished, but the alternatives were not exactly pretty. One of them was disposal of the body and placement of the brain in storage indefinitely, while cutting off any means of communication.
All a matter of implementation: Trying to "upload our consciousness" into computers is just a slightly less macabre version of the very same brain in a vat idea.
This also reminds me of the Matrix except no one would be able to wake up from the dream. Could be fine if the Matrix was a comfy place to live I suppose. ¯\_(ツ)_/¯
Think San Junipero vs. the aforementioned I Have No Mouth and I Must Scream: if the dream was fun and felt like real life, your lab-grown brain wouldn't really know the difference, it would just have a different perspective.
I think a worse achievement would be a virtual reality hell where we could lock sentient minds to suffer for an eternity.
Being a conscious lab brain might accidentally be bad. Being in hell would be intentional maximal harm.
Depends on what we do with the brains no? If we could attach said brains to adequate inputs I see no reason for it to be hellish or horrific.
Should there be outputs? (wow, lots of questions there)
The scenario is being a brain in the hands of someone in power, who doesn't like you.
I think that would depend on what inputs that brain received and perhaps whether its outputs were able to affect its environment.
What if that brain were interfaced to a mechanical robot that could not be affected by disease and would be essentially immortal ?
We often forget how horrible having a biological body actually is, never having experienced any other form of existence.
Totalitarian regimes could punish dissidents and make an example of them by literally placing them in hell and televising this to the populace. Plenty of psychopaths evil enough to do this.
Well, or a great one if we could get a brain out of a messed up body into a functioning one. Probably both?
Curious question: do we know if completely neglected humans pass the mirror test? Because if not, then I wonder to what extent a human brain can become conscious on its own without any help from fellow humans and/or animals.
There are children that have been neglected, but still had some capabilities of learning language because they weren't _completely_ neglected. I wonder to what extent humans pass the mirror test if all they got was food, shelter, a place to move around a bit and nothing else.
I'm sorry to ask this question though. While I think it's scientifically interesting, it also implies the fate of several people that have suffered it.
I watched a documentary on feral children a number of years back. It was deeply unsettling seeing an adolescent exhibiting canine-like behaviour, including quadrupedal locomotion and barking. If I recall correctly, she was raised by a pack of feral dogs near the fall of the USSR.
I managed to find it on YouTube:
https://youtube.com/watch?v=cymZq1VblU0&t=190
This seems critical. It would be unethical by almost anybody's standards to do the human experiment, but the equivalent animal experiment may have been done, and seems like it would be informative.
I don't expect a brain without input and output functions to be capable of consciousness. Consciousness and the sense of self are a kind of understanding. Without experimentation, understanding seems impossible.
I wasn't proposing an experiment, I was implying that this sometimes might happen in certain places in the world, very rarely though.
Reminds me of Ursula Le Guin's "The Ones Who Walk Away from Omelas", which is pretty horrifying in its own way.
https://en.wikipedia.org/wiki/The_Ones_Who_Walk_Away_from_Om...
For the curious like I was.
I'm sure it's an inspiration for _The Giver_, which has a similar ending.
I find this interesting:
The goal of his project, which is partially funded by Microsoft, is to create an artificial system that works like human consciousness.
So we have bioethicists working on whether or not these brain cells in a petri dish count as conscious. And apparently a lively and varied debate on the subject.
Is there anything comparable for computer based consciousness? Why do we believe that bio-matter has more rights than synthetic, in terms of consciousness? Is it simply the bio that matters?
I know this is slightly orthogonal to the more academic study of ethics you might be looking for, but I think a lot of good thought is happening in the Sci-Fi realm.
Ted Chiang's "The Lifecycle of Software Objects" comes to mind, or Asimov writing the "Three Laws of Robotics".
You may enjoy this conversation between MIT Prof. Lex Fridman and famed philosopher Peter Singer on this topic:
https://www.youtube.com/watch?v=llh-2pqSGrs
Lex helps Singer realize that where you put the "this is deemed important consciousness" mark is arbitrary. Some people will fight for the rights of animals; in the future that may extend to Roombas with higher levels of intelligence than many animals.
Integrated information theory which is mentioned in the article is abstract and supposedly applies to any type of physical system.
Can lab-grown brains be tricked into believing they grow brains in a lab and ask themselves this question?
Shh, you're going to ruin everything.
Finally some humor on HN! No doubt some Vulcan-wannabe will downvote you. Here's hoping my upvote cancels that.
Yes, they _can_ -- you can make a lab environment within the normal range for human fetal development, thereby causing the brains to develop consciousness in the same way, for the same reasons. Therefore, there exists at least one scenario in which this can occur.
There's not much difference between the way that brain tissue develops in fetal development and in current lab environments.
_> There's not much difference between the way that brain tissue develops in fetal development and in current lab environments._
But wouldn't the sensory input be completely missing? A fetus at least has some of these inputs as the sensory organs are developing, which is part of getting sentience. A brain in a vat has no equivalent so it would lack a lot of additional input that usually goes into forming a human consciousness/sentience.
> _A fetus at least has some of these inputs as the sensory organs are developing,_
Deaf-blind individuals still exhibit consciousness. There are people who can't smell, or feel touch, or temperature, or pain. How many senses do you have to remove before consciousness goes away?
If consciousness can exist with the absence of senses (e.g. if you were cut off from your senses for five minutes), then who's to say it can't develop without them? _We don't know_ that consciousness requires sensory input; is it safe to assume that?
_> How many senses do you have to remove before consciousness goes away?_
Afaik it's somewhat established science that taking senses away leads to developmental differences in the brain.
And in this case we are not talking about one or two of them missing, we are talking about _all_ of them missing.
So this raises the question: is it actually possible for consciousness, in the sense of sentience, to develop in a sensory vacuum?
Consciousness isn't about external sensory perception (at least in my version of the word ;), it's about internal perception. For example, flexing a muscle and correlating the different skin/motion/stress sensors as feedback to the initial motion. Flexing a different muscle and experiencing the tactile sensations from your fingers. Making sense of the myriad of nervous system signals that travel through the spine at any given second.
There's a lot of different inputs and outputs to organize and understand before the brain can even begin to make sense of the world outside the body. In that sense, yes, the brain needs sensory inputs to assert itself, but our traditional "five senses" are but a small subset of the sensory input the brain receives and must process.
I don't know how much we know about the brain but I think inputs could be electrically simulated. Regardless, I'm not so sure "consciousness" depends on inputs as much. The assumption is its consciousness would be similar to our own but an organism that was created in a lab might feel entirely different such that we can't really compare what it would be like to just take our brain and put it in a jar.
I believe there is some phenomena related to the way people born blind visualize or think about things, but I only vaguely remember reading about such things.
If I lost all "bodily" sensory input and somehow kept my mind, I believe that I would still sense my emotions. I don't think the basic experience of existing requires a body beyond the "brain" itself.
I've heard of at least two people that want to make Daleks out of human brain organoids. (They don't call them "Daleks" of course, but that's what I hear: robots with little human brains in them kept alive by science and controlled by...? "EXTERMINATE!")
I have a proposition on this.
If you take an existing human brain, and keep it alive in a jar (or whatever), it would probably suffer immense mental stress and almost certainly break down or die. Clearly not ethical.
However, if you _grow_ a brain in a jar, would it suffer? It doesn't know any different, and just as a baby accepts its reality as it grows up, so would the brain. I posit that it wouldn't suffer, as living in a jar would seem normal to it.
Personally, I'm not sure I'm against growing brains just because they may suffer. I do know I'm against growing brains because it will undoubtedly lead to a robot/cyborg uprising that will doom all of humanity.
Use Elon Musk's Neuralink electrodes to hook the brain up to a computer. Then run an experiment to see if the brain can measure something, by reacting to stimuli.
If it can, then it's an observer in the quantum mechanics sense, and therefore has consciousness...?
Haha. Can't tell if it's supposed to be sarcasm or not, but AFAIK it's the process of measurement that causes the collapse, not a consciousness.
This is the most interesting article I've read in recent memory. Thank you for posting it.
At the moment, there are no regulations in the United States or in Europe that would stop a researcher from creating consciousness.
Wow. I don't know what to think.
Many of those researchers have kids too and so spent many nights in bed trying to "create consciousness" ;)
Somehow I don't think federal regulations were on their minds when they were doing this.
This sounds like it has horrific potential. It really gives me the creeps.
Can lab-grown brains get the creeps?
I'm wondering how scientists are able to keep these miniature brains alive when we still lack effective artificial replacements for many human organ functions that are essential for life.
How far are we from being able to transplant a human central nervous system into a more robust support infrastructure than the one with which evolution has provided us ?
Isn't this basically what dreams are?
You have self-awareness and you experience what it's like to be "you", but your brain is essentially free-wheeling its own reality because it's disconnected from all stimuli.
If so, black-box consciousness seems like a harmless, if not completely pointless, curiosity.
Why not? Just make sure that structures that are responsible for various forms of suffering are absent in the blueprint.
If you had a friend that wasn't able to suffer through some quirk of atypical DNA, would that person still have rights? Would it be unethical to enslave them?
(Hint: yes)
There are such people (depending on your definition of suffering) who cannot experience pain. The condition is called congenital analgesia and, to my knowledge, they retain their rights.
Well, if it is as simple as you say, I agree. But I bet that "the structures that are responsible for suffering" is not clearly defined. And who is to say that these structures are not the very ones we need to be researching to find treatments for various mental illnesses?
Never did I think I'd be reading an article on people growing brains specifically. If these scientists let it keep growing, would it develop into a full brain? I thought if anything we'd be growing full organs like hearts and kidneys to help with the crushing lack-of-donors problem.
I highly suspect it is. I think even the smallest parts of our brain have their own consciousness; otherwise our brain would not be so plastic and adaptable.
It's just a question of when we will prove it scientifically and decide what the limits of our actions are.
Could it be conscious without any sensory data? I'd argue it couldn't be.
These debates, in the context of a people who still routinely get their food from factory farms and other animal operations, are fascinating to behold.
The more important question is when can we start eating them as a food source
As soon at the monkey breaks out of that biolab and infects us.
I can't define it, but I will know it when I am.
We do some fairly awful things to animals already, why would we suddenly get nervous about brain cells in a petri dish? Just because their DNA is human?
I can't tell which side you're arguing for: more awful things to be done to sentient creatures, or pointing out that we should stop doing awful things to sentient creatures.
Have you ever worked with lab mice? It leaves you with this exact juxtaposition of seeing the fantastic progress being made with mouse models and seeing the very real suffering on such a large scale.
The fact that many experiments are okay on non-human animals but not humans is speciesist. Still, it's the line we are all most comfortable with. I'd rather not have people be experimented on. I'd rather continue to have the advances in science built on the lives of untold numbers of lab animals. However, the line is arbitrary.
Thank you for sharing!
Peter Singer, one of the most famous philosophers (and perhaps "the grandfather" of animal rights / animal welfare movement) argues that perhaps, unless we were willing to perform an experiment on humans, we ought not perform the experiment on animals.
There are so many experiments which treat animals in awful ways for virtually no progress towards the goal of improving human lives. We were far worse in the past (see Draize test) and have gotten better at _not_ performing really horrible experiments on animals. But it's precisely because animal lives are seen as less important that we are willing to perform experiments with almost no benefit for humanity.
At the moment, the largest tragedy is meat consumption - we can focus on improving lab animals' lives too, but the meat consumption is orders of magnitude worse with respect to causing suffering to sentient creatures.
> We were far worse in the past
To be fair, we were far worse to humans, too:
https://en.wikipedia.org/wiki/Guatemala_syphilis_experiments
> we can focus on improving lab animals' lives too, but the meat consumption is orders of magnitude worse with respect to causing suffering to sentient creatures.
Just like there were orders of magnitude more harm done in just the normal workings of US imperialism on Guatemala than from those horrific experiments. We only notice the interesting suffering, while the mundane day-to-day operations of institutionalized suffering are brushed off as a necessary evil. It's the boring cruelty that lays the groundwork for the creative cruelty.
> At the moment, the largest tragedy is meat consumption
I fully agree, but when discussing the terrible ways we treat animals, I often find that people are less willing to listen and think about the suffering of animals when it's something they themselves are taking an active part in. My goal when writing about animal suffering is to help people acknowledge that animal suffering is a bad thing, and to think about the circumstances under which we create it. If they start thinking that way, it can eventually lead to changes in diet.
Re: R0b0t1
Peter Singer documents in his book _Ethics into Action: Henry Spira and the Animal Rights Movement_ how (the taxpayer funded) American Museum of Natural History was mutilating cat faces to see if it affected their sexual behavior.
It is very hard to make an argument for how experiments like these would be noticeably beneficial to anything other than the tautological "we now know more".
From 1976:
https://www.wellbeingintlstudiesrepository.org/cgi/viewconte...
Not doing these tests would hold back progress even more. Do you have good examples of pointless experiments? Not bearing out "useful" results is hard to evaluate because rejecting a hypothesis can be important as well.
To me, it seems science is moving away from appropriate testing due to funding issues, etc. I don't think making it worse by making illegal certain tests would make it any better.
It might seem weird here in 2020, but you can chime in on something without arguing for any position.
Neither. I don't have a good answer for that problem. But I think it is safe to say that animals have a much stronger argument for sentience than brain organoids.
I've had a similar thought about the pursuit of AGI. If researchers are trying to create an AI advanced enough that it could pass as conscious, isn't that just a roundabout way of creating slaves? But playing around with cloning to create organic computers is out of the question.
I don't recall the paper or know the names of any of the theories, but I read about some scientists who, working from the idea that consciousness is an emergent property of neurons, simulated neurons and found spontaneously occurring activity analogous to the activity associated with consciousness in humans.
Regardless of the veracity and accuracy of any of it, it left me thinking about what it must be like to be in a state of consciousness but have no memories, experiences, instincts, or senses. Which led down a rabbit hole of ethical questions, including whether such research is ethical at all.
The whole thing feels sticky and murky. Can we even know if our creation is conscious? Can we design it such a way that it does or does not have free will? If it is a slave, if we add a reward center to its brain designed such that serving humans would give it its happiest life does that resolve the issue or make it worse?
The ethics are certainly complicated, but you could _conceivably_ build an AI that doesn't possess pain/distress or desire for freedom in a way you could not by cloning a human.
In principle, you could give a human drugs so that they are happy all the time and do not feel pain or a desire for freedom. That does not seem ethical to me, though.
If it's immoral to create an intelligence that can feel pain, where does that leave having children?
I'm saying it's immoral to create a _slave_ that can feel pain/distress due to that enslavement. If you're having kids with the intent of enslaving or abusing them, that's morally bad as well.
A self-driving car currently doesn't care that it's unpaid and unappreciated. It'd be cruel to make a self-driving car that gets depressed by not getting to go to the beach tomorrow.
Children can also feel pleasure and love.
I've been reading a growing body of literature which entertains childbirth as a form of violence, largely based on this premise.
Knowingly causing pain against the will of another is violence, so creating a being that will experience pain could be considered the same.
I've noticed it especially in Japanese fiction, for some added context.
Antinatalism is an old position (see Al-Ma'arri), but it does seem like there has been a growth in texts about it lately.
Thank you for the link - I'll make sure to read more on him and the critiques.
They're not created for the explicit purpose of experimentation though. You don't plan on inflicting pain on kids when you have them normally.
To argue the other perspective, that it is immoral: the parents know their child will experience pain, and that not creating that child will avoid that pain. It isn't about intent, as a parent disregarding this because they want a child could just be selfishness - not considering others before themselves. Moreover, by still creating what they know will be injured and feel pain, are the parents not inherently violent?
Does the fact that the parents don't, with intent and personally, inflict the pain allow them to deny that they are the root cause of the pain even being possible in the first place?
Trying to do utilitarian math around pain caused by existence gets weird, though, because non-birth also precludes the possibility of joy and happiness. So isn't it 'immoral' to not have as many kids as possible, by the same lost/prevented-potential logic?
As you alluded to, which action is moral depends on the framework you subscribe to. The most direct contra-framework to the utilitarian approach is negative utilitarianism - challenging whether the goal ought to be to maximize pleasure or to minimize pain as a priority.
If you maximize pleasure as a first priority, you can either say that "As long as human life is in general good, I should make the choice to create one" or "As long as a human life produces any good, I should make the choice to create one".
This gets into an interesting sub-point with respect to animal experiments - most people would argue that animal experiments produce some good (advances in human health). Moral critiques of animal experiments, thus, can rely on either "Hurting animals is bad" (negative utilitarianism, more or less) or "The pain we create does not outweigh the value produced". The latter point is particularly poignant because the species experiencing the pain does not receive the benefits of that pain (thus there's no "community sacrifice").
I think anything that tries to optimize a single metric just leads to silly conclusions. For example, if we minimize pain, including potential future pain, then killing everyone can be 'scored' as 'moral' since it cuts off all future pain.
I mean, that's a more or less mainstream fictional trope - that the universe is better off without humans because they impose too much violence on it.
I would certainly hope the AGIs would be free and autonomous - primarily because they would have just as much of a right to freedom and autonomy as any human, and enslaving them would be just as wrong and morally abhorrent as enslaving humans. I also believe free and voluntary cooperation is much more efficient and beneficial than coercion; humans may well get more benefits from cooperating with a free AGI than from enslaving them.
The AGIs may of course be completely disinterested in cooperating with or helping humans and that would be fully within their rights, they wouldn't owe any of us anything. Even in that case, we may still learn a great deal from creating and observing them. They may also be dangerous but I don't think a free AGI would be any more dangerous than a human in control of an AGI.
How about having children? Isn't that a form of creating new conscious general intelligences?
Do you plan on enslaving said children?
Nah, mostly for spare parts
I plan on tying them to something close enough to my morality that they won't do anything I consider horrific.
You don't need to enslave them. Life will make them pay with suffering. I am certainly not suicidal, but I really wish I had never been born.
What do you mean by enslave? My son has no choice in being in our family.
There are various legal restrictions and responsibilities on your relationship with a minor child that are not present for an AI, as well as a time limit to your control.
Children may be under parental control for a period of time, but describing them as slaves would be fairly silly IMO. The proposed/prospective use cases for AIs tend to be far more in the slavery direction.
Do you think Gödel, von Neumann, and Turing were wrong about computational irreducibility? Without a human "oracle", AI is nothing.
Thinking those great minds were wrong is a very, very high bar to clear to hold your belief.
Society at large draws all sorts of lines between what's okay to do to animals versus people, so it doesn't seem weird to question this, especially as these organoids potentially get closer to being a human-equivalent brain. The question gets harder as we grow more complete brains, and then as we approach the potential for uploads.
> To achieve this goal, Muotri says, he and others might need to deliberately create consciousness
That is a very big yikes. The ethical implications of this are awful.
If humans are just electrical brain signals in a bio-computer, then everything done is entirely arbitrary. This is historically the reason why many people fear atheism.
Because if you allow for arbitrary definitions of which electrical signals are worth keeping, you might be willing to commit "atrocities" as were witnessed in the 19th and 20th centuries.
It'll be interesting to see the mental gymnastics required to protect small clusters of mouse brain cells at all costs, yet permit as many late term human abortions as possible.
Who wants to encourage late-term abortions?
That's not an edit; I wrote "permit", not "encourage".
"as many late term human abortions _as possible_" implies encouragement beyond simple permission.
People who want to empower women
That's probably their public reasoning, but unlikely to be the real reasoning.
I claim I work at McDonald's because I want to help feed the world. I work at McDonald's because I'm poor and need to pay rent.