
I need to write down some thoughts about LLMs, making computers act like humans, 'Artificial Intelligence', and the future of it all.

Why I'm interested

My primary motivation for researching these things is not to produce technology for application in society. Personally, I am fundamentally pessimistic about technology: even though I try to use the good parts of the internet, I think the internet as a whole does more harm than good.

Similarly, creating new technologies that make computers capable of doing things they currently cannot, specifically imitating humans better or doing tasks that only humans can currently do, probably is not going to make the world a better place, and it definitely won't create a robot-powered luxury outer-space communist utopia like some people imagine.

Okay, so given that: I care immensely about understanding the human experience, and I find the question of exactly what creates conscious experience a very important one. There are lots of theories about this; most of them are pretty shitty, some of them are alright, but none of them both answer my questions and are supported by evidence.

So I'm thinking maybe I'll dedicate my life to trying to make some progress on this.

The context

One strategy I have (a small part of the whole, mind you; computers do not contain THE ANSWER) is to try to create a fully sentient computer, or at least a computer that looks like it's sentient, and see what happens and where we fail. I think that process is going to be extremely informative.

Recently there has been an explosion in 'Generative AI' and LLMs, which are built on neural networks ('deep learning', as it's called). You may have heard about this. They are pretty good at generating things that show at least a superficial understanding of prompts, and they can take instructions, form goals, and then try to achieve those goals through symbol manipulation: basically multiplying lots of tensors together and using the output of the number crunching to predict the best thing to do next. (Other systems then take those predictions/instructions and actually carry them out, for example printing words to the screen in the case of a chatbot.)
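To make the 'multiplying tensors to make predictions' idea concrete, here is a minimal toy sketch of next-token prediction. Everything in it (the tiny vocabulary, the random weight matrices, the single layer) is a hypothetical stand-in for what a real trained model learns from enormous amounts of text, and a real LLM conditions on the whole context rather than just the last token, but the shape of the computation is the same: multiply, score, pick.

```python
# Toy next-token prediction: tensors in, prediction out.
# The weights are random stand-ins; a real LLM learns them from data.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]
d = 8                                        # toy embedding size
embed = rng.normal(size=(len(vocab), d))     # token -> vector
W = rng.normal(size=(d, d))                  # one "layer" of weights
unembed = rng.normal(size=(d, len(vocab)))   # vector -> per-token scores

def next_token(token_id):
    h = embed[token_id] @ W                  # multiply lots of tensors
    logits = h @ unembed                     # score every vocab entry
    return int(np.argmax(logits))            # predict the best next thing

tok = vocab.index("the")
for _ in range(5):
    tok = next_token(tok)
    print(vocab[tok], end=" ")
```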

What I'm thinking

One of many pretty obvious things that LLMs and similar technologies *don't* do, but that most competitors for the 'sentient' label *do*, is maintain continuous cognitive activity from moment to moment, even when not being prompted. This is related to some idea of agency.

So that leads me to wonder whether it would really be so hard to run a while-true loop on an LLM, or probably a few LLMs working together. They could talk to each other in a systematic way, have really specific training, and have some kind of overarching goal to learn things (maybe a more specific goal would be helpful). Then the system could have access to a web crawler and just run all day. Do its thing.
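For what it's worth, here is a minimal sketch of that while-true loop. The functions ask_llm and fetch are hypothetical placeholders (here just canned stubs so the sketch runs); a real version would call an actual model API and a real crawler.

```python
# Hypothetical sketch of an LLM running unprompted in a while-true loop.
# ask_llm() and fetch() are placeholder stubs, not a real API.
import time

def ask_llm(prompt):
    # Placeholder: a real version would call an actual model here.
    return "https://example.com/next-page"

def fetch(url):
    # Placeholder: a real version would crawl the web here.
    return f"(text of {url})"

goal = "learn stuff"              # the overarching goal
notes = "nothing read yet"        # running context between iterations

while True:                       # continuous activity, no human prompt
    # One LLM call picks what to look at next...
    url = ask_llm(f"Goal: {goal}. Notes so far: {notes}. "
                  "Reply with one URL worth reading next.")
    page = fetch(url)
    # ...and another LLM call digests it into the running notes.
    notes = ask_llm(f"Goal: {goal}. Summarize what matters in: {page}")
    time.sleep(60)                # pace it politely; run all day
```

The 'few LLMs working together' part here is just two ask_llm calls with different jobs; a real design would give each model its own role and training.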

There would need to be a monitor system that watches everything the LLM is doing and generates reports, and a long-term memory system that lets the LLM store and organize data however it wants for later access. There would need to be a huge number of subsystems and monitors in place.
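Again just a sketch, with all names hypothetical: the monitor and memory subsystems could start out as small as this, a log that becomes the report and a key-value store the LLM organizes however it wants.

```python
# Hypothetical minimal versions of the two subsystems named above.
import json, time
from pathlib import Path

class Memory:
    """Long-term store the LLM can organize however it wants."""
    def __init__(self, path="memory.json"):
        self.path = Path(path)
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def put(self, key, value):
        self.data[key] = value
        self.path.write_text(json.dumps(self.data, indent=2))  # persist

    def get(self, key):
        return self.data.get(key)

class Monitor:
    """Watches everything the system does and generates reports."""
    def __init__(self):
        self.events = []

    def log(self, event):
        self.events.append(f"{time.strftime('%H:%M:%S')} {event}")

    def report(self):
        return "\n".join(self.events)

# Usage inside the main loop: monitor.log(f"fetched {url}") after each
# fetch, memory.put(url, notes) after each summary, and a periodic
# monitor.report() dumped somewhere a human can read it.
```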

More details

I don't really have the technical knowledge to do this. Maybe someday I will be able to, with help.

But probably someone is already doing this, no? It doesn't seem that hard to do using preexisting tech, just a lot of work.

I'm probably going to do some more work sketching this kind of brain simulation in the near future, maybe informed by some sources external to my brain.

If you know of someone who is doing this let me know:

mail@satch.xyz