💾 Archived View for sdf.org › icemanfr › 2023 › 24.gmi captured on 2023-12-28 at 16:01:13. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
Like many people this year (1), I tried generative AI on many different sites: OpenAI, Google Bard and other clones; the list is getting too long. But for what? Are they really what we expect from an artificial intelligence? And what is intelligence versus culture? For me, it's the buzz of the year but, above all, a big marketing hoax!
It's difficult to define the word Intelligence. Is it «the ability to acquire, understand, and use knowledge»? «The ability to learn, understand, and make judgments or have opinions that are based on reason»? If we start from those definitions, these tools are able to acquire knowledge through bots. But are they really able to understand that knowledge and to make judgments? If they could, they could judge their own answers to our questions, and they wouldn't lie the way they do in most of the answers I tried. Sometimes they are honest and admit they can't answer a question. But most of the time, it's complete bullshit mixed with some truths, like a… conspiracy website. And for the moment, nothing prevents it. The only one able to judge, for now, is a human, a specialist of the subject of course, not the person asking the AI. Imagine if every student used generative AI to complete or do their homework...
Does generative AI learn anything? To learn, you need a tutor, someone able to correct the answers. But something is missing from that equation. An AI is the child of a programming team with its own opinions and its own knowledge of some sciences, languages and cultures, but not of everything. Nobody in the world can understand every piece of data on every subject. The mistake behind this kind of pseudo-AI is to believe that intelligence is only a mass of data plus an engine to interpret a language. It's much more than that, and these generative AIs are not such a huge step forward in the science of modeling human intelligence. I could compare it to Deep Blue's win against Garry Kasparov at chess. It was not a victory of intelligence; it was a victory of the speed of calculating every hypothesis, with some good heuristics to guide the program toward good choices. For such games, mathematics and CPUs can do it. But generating something new takes more than that kind of calculation.
These kinds of AI are not creating, they are copying. I tried many subjects for paintings, architecture and photography, and the results were very bad. They have a mass of culture in their data but no good knowledge of very specific subjects or styles. For example, they don't know specific painting styles if the author is not mainstream, even with their big data collections. It gives the impression of someone who has just discovered the broad outlines of a subject, a poor student who didn't delve into his topic. Perhaps AIs are good at programming, because their programmers love that subject. But they are bad at whatever their programming teams are bad at. And in many examples we have seen that AIs are racist, misogynist and full of clichés. They are copying the opinions of their creators. But why do they lie in most of their answers?
Because their creators are lying, of course. They may be sure they have created an intelligence, but with all the money behind it, they can't tell investors the truth: their tool is a baby with the Library of Congress behind him, without the ability to understand what is inside. Recently we saw the governance problems at OpenAI. That is typically due to the fact that they believe themselves to be gods, but they are just simple humans. So they managed to create an intelligence able to build answers that appear credible. But it is not able to understand its own answers, or to choose which of two different answers is the truth depending on the source. A specialist explained that the algorithm just cuts words into syllables or small pieces and juggles with those to build answers. These systems are now ingesting their own lies into their data, because everyone is copying AI answers, as stupidly as the AIs themselves, into many, many sites and documents. Teachers are now reading essays and assignments from their students that were written by an «AI». They are smooth talkers, like many in the marketing business.
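That word-cutting step is called subword tokenization. As a rough illustration only — real models use large learned byte-pair-encoding vocabularies, and this tiny hard-coded merge table is purely hypothetical — the principle looks like this:

```python
# Toy subword tokenizer: start from single characters, then greedily
# apply a fixed list of merges. Real systems learn thousands of merges
# from data; these four are invented just to show the mechanism.
MERGES = [("i", "n"), ("in", "g"), ("t", "h"), ("th", "e")]

def tokenize(word):
    tokens = list(word)  # begin with individual characters
    for a, b in MERGES:
        i = 0
        while i < len(tokens) - 1:
            if tokens[i] == a and tokens[i + 1] == b:
                tokens[i:i + 2] = [a + b]  # fuse the pair into one token
            else:
                i += 1
    return tokens

print(tokenize("thinking"))  # ['th', 'in', 'k', 'ing']
```

The model never sees «words» or «meanings», only sequences of such pieces and statistics about which piece tends to follow which — hence answers that sound fluent without being understood by their own author.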
The latest avatar of generative AI, Gemini from Google inside Bard, is supposed to recognize audio, video, pictures and words more efficiently than ChatGPT and similar tools. Is it true? Google proved it with its own test made of 32 tries. Easy... and possibly another hoax. I'm waiting for a real open test to know whether it's not just another marketing announcement for investors rather than a real step toward a real AI. From what I saw in an Ars Technica article (2), it's not what you might imagine. OpenAI seems to be working on a new version named Q* (Q star), also as a way to understand math problems, another small step. It's clear that the GAFAM, the Chinese tech giants and Europe are worried about all this, and about everything people can do with it to manipulate data and opinions. In some domains these tools can be helpful (coding? synthesis?), but at the same time they can generate mistakes and errors in lots of domains and influence our global knowledge. Is that what the boss of OpenAI had in mind when he said it could destroy humanity?
Can we at least believe a real AI could exist one day? Today, computers are better than humans at specific tasks, like chess and other games, or analyzing documents to produce a synthesis, as in the legal field. The Turing test can be passed (3) by ChatGPT, Bard, etc. But that is not intelligence, it's a helper. Intelligence is finding how to convince the humans of a jury with a fine speech if you are a lawyer. It's finding your own style in your art, making people forget your inspirations. It's all those things you can't explain yourself when you have just created something. It's not only a matter of experience and culture, but something that centuries of research haven't pinned down. It's like a singer who can hit the right notes but without any emotion, or without ever creating a style of his own. I'm not interested in that, but when I see which music tops the sales charts, I feel much more alone…
(2): Bard against ChatGPT in (not so good, for me) tests
(3): If AI is making the Turing test obsolete, what might be better?