On AI

AI, artificial intelligence, is a topic I find somewhat scary. These days computer science and its applied fields are full of things we refer to as machine learning or AI, and in my view these practical applications often fall short of what I would find desirable.

These AIs are at once difficult-to-understand black boxes and frequently arbitrary. If you've spent time on massive online platforms like YouTube, which are built almost entirely on multitudes of these AIs, you'll probably understand why I see them this way. When their responses are deemed wrong, they can only be put through more training until they line up with our expectations.

But even when these AIs do what we want them to, in the back of my mind there is this doubt about whether the AI is just happening to guess right. Our current machine learning systems, to my knowledge, are simply based on finding patterns for our AIs to repeat back. Certain characters strung together form words, certain pixels form images of bees, some bits form audio/video streams which are more or less advertiser-friendly. But there is no process beyond that, no thought process.
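As a toy illustration of what "finding patterns and repeating them back" can look like, here is a minimal sketch of a character-level Markov chain. This is my own example, far simpler than the neural networks behind today's systems, but the spirit is the same: it counts which character tends to follow each short context in some training text, then generates new text by sampling from those counts. At no point is there anything resembling a thought process, only frequency counting.

```
# A character-level Markov chain: learn which character follows each
# two-character context, then generate text by replaying those patterns.
import random
from collections import defaultdict, Counter

def train(text, order=2):
    """Count, for every context of `order` characters, which characters follow it."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def generate(model, length=80, order=2):
    """Emit text by repeatedly sampling the next character from the learned counts."""
    out = random.choice(list(model.keys()))
    for _ in range(length):
        counts = model.get(out[-order:])
        if not counts:
            break
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

if __name__ == "__main__":
    corpus = "certain characters strung together form words, certain words form sentences"
    model = train(corpus)
    print(generate(model))
```

Fed enough text, such a model will produce plausible-looking strings, yet it is obviously just echoing statistics of its training data back at us.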

This doubt does create an epistemological problem, though. We cannot inspect the thought processes of other humans, so in theory there would be no way to differentiate between a human and an AI that is a very good impostor. We could ask humans to describe how they arrived at their conclusions, but the AI could also (though perhaps with lower probability) provide equally convincing statements simply by acting out a pattern of human behaviour. Nor can we say for certain that the decisions of other humans aren't similarly arbitrary. For all you know, you could be the only rational actor and everybody else is just an impostor.

But I still feel I am justified in my doubt, even if I cannot provide a solution to the Gettier problem of AIs mimicking humans. I would be more convinced if we could model the natural workings of our universe at such a low level as to be able to create an accurate representation of a brain. An atomic AI of sorts, built upon simulations of natural phenomena. Obviously this approach is also rife with potential shortcomings, not the least of which are the numerous metaphysical problems that would doubtlessly come up.

The ability to create the atomic AI would also have a number of serious consequences. Firstly, it would mean that we had essentially created an artificial conscious being, which would pose a massive ethical conundrum regarding the AI's and everybody else's rights. Another issue that would arise is the simulation hypothesis, which states that if you are able to create a computer simulation of the universe, it is highly likely that you yourself are also living in one. Creating the atomic AI would require such an accurate degree of simulation that one could conceivably simulate the universe, provided one had enough computer storage and processing time.

These problems are obviously getting bigger than I can presume to handle, so I ought to stop here. What will become of AI and our relationship with it, nobody can know for certain. In the meantime, I would recommend that machine learning enthusiasts have a cursory look at philosophy, and despair.