AI as a New Crystal Ball

Stanislav Panin, 2024-08-13

~~~~~~~~

For a long time, the most common approach to artificial intelligence was based on the idea of reconstructing the process of logical thinking. This approach belongs to the category of symbolic artificial intelligence[i]. To possess intelligence, in this view, means to be capable of connecting facts according to certain rules in order to discover new facts, eventually achieving comprehensive knowledge of the subject. An important tool in the field of symbolic AI is logic programming, a style of programming in which a program consists of a collection of facts and rules that can be applied to answer queries. The best-known language designed for logic programming is Prolog, created in 1972.

How does logic programming work?

A simple example of a program written in GNU Prolog for managing student registration to university courses might look like this:

% Database of students

student(john).
student(jane).

completed(jane, reli101).

% Database of courses

course(reli101).
course(reli212).
course(reli213).

prerequisite(reli212, reli101).

% Rules

satisfied_prerequisites(S, C) :-
 prerequisite(C, P), completed(S, P);
 \+ prerequisite(C, _).

can_take(S, C) :-
 student(S), course(C), \+ completed(S, C), satisfied_prerequisites(S, C).

In this example, I created two databases – the database of students, listing their names and completed courses, and the database of courses, listing courses and their prerequisites. Then I introduced two rules:

1. satisfied_prerequisites is true if the course has a prerequisite and the student has already completed that prerequisite, or if the course has no prerequisites at all (in GNU Prolog, the semicolon separates alternatives and \+ represents negation).

2. can_take is true if the student has not already completed the course and has satisfied its prerequisites.

Using this program, we can make queries, for example:

?- can_take(john, reli212).

will return “no” because John has not completed reli101, which is required for reli212. We can also ask the program to show us all courses that a student can take, e.g.:

?- findall(X, can_take(jane, X), L).

will return “L = [reli212,reli213]” – Jane cannot take reli101 because she has already completed it.

Importantly, this program does not simply look up facts in the database but actually generates new knowledge. We never directly entered any information about which classes either of the two students can take; the program deduces this for us from the general rules we have defined.

In the second half of the 2010s, another type of AI gained popularity, largely thanks to the success of the GPT family of large language models introduced in 2018. This type of AI is based on artificial neural networks and is different from symbolic artificial intelligence. Instead of trying to recreate the process of logical reasoning, artificial neural networks, as their name suggests, mimic the nervous system by modelling the way neurons are connected and communicate with each other. There are many types of applications based on artificial neural networks, including face recognition systems, text-to-image models (Midjourney, Stable Diffusion, etc.), and large language models (GPT, Llama, Google Gemini, etc.). LLMs, artificial neural networks designed to produce human-like replies in response to prompts, will be the focus of this essay.

In principle, the concept of an artificial neural network is not new – it was introduced[i] in the 1940s by Warren McCulloch and Walter Pitts. However, prior to the 2010s, the impact of such networks was relatively small due to limitations in computational power and the scarcity of easily available data necessary for training them. Access to large amounts of data was especially important. This is because, in order to generate answers, artificial neural networks go through a process of training during which the network learns to generate plausible answers by processing a tremendous number of examples. For general-purpose language generation, such as that achieved by LLMs, it is necessary to use large collections of diverse text samples. This became possible with the growth of the Internet – both user-created content, such as that found on social media, and the digitization of more traditional sources, such as books, newspapers, and magazines, made the creation of impressive LLMs possible.
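
To make the idea of “training” concrete in the most stripped-down form possible, here is a toy sketch in Python. It is emphatically not how real LLMs are built – they rely on neural networks with billions of adjustable parameters rather than word counts, and all names below are my own illustrative inventions – but it shows the same basic pattern: record statistical regularities from example text, then generate plausible-sounding continuations by sampling from them.

# A toy "language model" (a bigram model), for illustration only.
# Real LLMs use deep neural networks, but the overall loop is similar:
# learn regularities from example text, then sample plausible continuations.

import random
from collections import defaultdict

def train(corpus):
    """Record which word tends to follow which word in the training text."""
    followers = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(model, start, length=8):
    """Produce a plausible-looking continuation by sampling one word at a time."""
    word, output = start, [start]
    for _ in range(length):
        if word not in model:                  # nothing learned for this word
            break
        word = random.choice(model[word])      # pick any observed continuation
        output.append(word)
    return " ".join(output)

# A tiny "training dataset"; real models are trained on terabytes of text.
corpus = "the cat sat on the mat the dog sat on the rug the cat saw the dog"
model = train(corpus)
print(generate(model, "the"))   # e.g. "the cat sat on the rug the cat saw"

Note that nothing in this loop checks whether the generated sentence is true; the only criterion is that it resembles the training data.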

Language Models as “Bullshit Machines”

Unlike previous attempts at AI, LLMs intentionally bracket the meaning of the texts they receive and produce. Instead of operating with logical connections, LLMs are created to generate meaningful-looking replies, that is to say, replies that a human interlocutor might perceive as meaningful. Due to how LLMs are designed, it is common for them to provide answers that look plausible but are wrong. To illustrate this statement, here is one example from my conversation with GPT-3.5:

Q: Rules for emphasis in Spanish. A: In Spanish, the placement of emphasis in a word is determined by the following general rules: 1. Words ending in a vowel, -n, or -s: The emphasis is typically on the second-to-last syllable if the word ends in a vowel, -n, or -s. For example, “casa” (house), “jardín” (garden), “feliz” (happy)… .

Here the reply seems reasonable at first glance – the rule itself is correct. However, of the three examples that the AI included, two do not fit the rule they are supposed to illustrate. In the word jardín, there is an explicit marker of emphasis on the last syllable; therefore it is an exception and should be pronounced as har-DIN. And feliz simply does not end with a vowel, -n, or -s; it, too, has the emphasis on the last syllable.

Cases like this are not accidental mistakes akin to the ones humans make from time to time, nor are they a simple quantitative issue that could be easily resolved by using more powerful computers or larger datasets. Producing false sentences like this is an inherent property of LLMs; in other words, this is simply how they work. By design, LLMs are not created to evaluate the accuracy of the statements they produce. In a recent publication, Michael Townsen Hicks, James Humphries, and Joe Slater write[i] that

…it’s not surprising that LLMs have a problem with the truth. Their goal is to provide a normal-seeming response to a prompt, not to convey information that is helpful to their interlocutor… . One attempted solution is to hook the chatbot up to some sort of database, search engine, or computational program that can answer the questions that the LLM gets wrong. Unfortunately, this doesn’t work very well either. For example, when ChatGPT is connected to Wolfram Alpha, a powerful piece of mathematical software, it improves moderately in answering simple mathematical questions. But it still regularly gets things wrong, especially for questions which require multi-stage thinking. And when connected to search engines or other databases, the models are still fairly likely to provide fake information unless they are given very specific instructions–and even then things aren’t perfect.

They further argue that the best way to understand LLMs is to identify them as bullshit-producing machines, where the word “bullshit” is used in the sense conceptualized by the American philosopher Harry Frankfurt. Frankfurt defined bullshit as a form of speech characterized by a complete disregard for the content and its veracity. To quote the same article,

A student trying to sound knowledgeable without having done the reading, a political candidate saying things because they sound good to potential voters, and a dilettante trying to spin an interesting story: none of these people are trying to deceive, but they are also not trying to convey facts. To Frankfurt, they are bullshitting.

The importance of changing the language, as proposed by these authors, has to do with the growing necessity to amend our expectations of LLMs. Many people perceive LLMs as a source of reliable answers that they are simply not capable, and probably never will be capable, of providing.

In general, describing LLMs as a form of intelligence is misleading. The APA defines[i] intelligence as “the ability to derive information, learn from experience, adapt to the environment, understand, and correctly utilize thought and reason.” LLMs do not understand language, nor do they produce any form of knowledge. The use of words like “intelligence” or “comprehension” with regard to LLMs has more to do with marketing efforts and sci-fi than with reality. AI researchers, in fact, know this well. For example, last year, Meta’s chief AI scientist Yann LeCun pointed out[ii] that, despite their impressive capability in producing texts, LLMs “are still very limited, they don’t have any understanding of the underlying reality of the real world” and that “we are missing something really big … to reach not just human level intelligence, but even dog intelligence.”

What Does the History of Esotericism Teach Us About AI?

If the current language people use to talk about AI is ill-suited, it means that there is space for a new, better way to talk about LLMs. I believe that the history of esotericism can – somewhat unexpectedly – help us find this way.

The function of LLMs is to produce texts based on originally meaningful, human-created texts from the training dataset. While the LLM itself is not concerned with the meaning of these sentences, a human interlocutor can find meaning in the semi-random output produced by an LLM. This has a lot to do with the impressive human ability to find meaning in almost everything. A well-known example of this is the Rorschach test, where random inkblots are interpreted as meaningful images by a patient. This parallel, however, is not perfect because the images in the Rorschach test are intentionally meaningless. This is not the case with LLMs, which are trained on meaningful sentences and try to produce new sentences by rearranging the material they were trained on.

A much better parallel would be a divinatory tool, such as a Tarot deck. An act of divination is a form of algorithm. It implies a random selection of meaningful building blocks, such as Tarot cards; the practitioner then interprets this arrangement of cards as a response to the query. Just as LLMs learn to produce meaningful texts through training on textual datasets, Tarot practitioners learn to produce meaningful replies by internalizing Tarot literature. While the practice of Tarot and the mechanism of LLMs do not correspond one-to-one, both can be seen as methods of text generation, and their similarity is substantial enough to be of interest.
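
To highlight what I mean by calling divination “a form of algorithm,” here is a schematic sketch in Python. The deck, the card meanings, and the three-card spread below are abbreviated, hypothetical placeholders; the point is only that the mechanical part of a reading selects and arranges meaningful building blocks at random, while the actual answer to the question emerges in the practitioner’s interpretation.

# A schematic sketch of the mechanical part of a Tarot reading.
# The deck and the meanings below are abbreviated placeholders.

import random

deck = {
    "The Fool": "beginnings, spontaneity",
    "The Magician": "skill, willed action",
    "The High Priestess": "hidden knowledge, intuition",
    "Ten of Swords": "a painful ending",
    "Ace of Cups": "emotional renewal",
}

positions = ["past", "present", "future"]   # a simple three-card spread

def draw_spread(deck, positions):
    """Randomly select one card per position, without repetition."""
    cards = random.sample(list(deck), k=len(positions))
    return list(zip(positions, cards))

question = "Should I change careers?"   # note: the algorithm never looks at this
for position, card in draw_spread(deck, positions):
    # The program only arranges meaningful units; turning them into
    # an answer to the question is the reader's work.
    print(f"{position}: {card} ({deck[card]})")

The structural point is that the generator itself is indifferent to the question; meaning enters only at the stage of interpretation – which is also where it enters in a conversation with an LLM.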

An even better comparison is with a séance. In an earlier essay, I argued[i] that the modern use of LLMs is functionally and structurally similar to nineteenth-century séances. I will briefly reproduce my main thesis:

Spiritualists aspired to answer religious questions scientifically by gathering empirical data through séances during which participants presumably communicated with the dead through mediums. People attended séances with different motivations. Some wanted to talk to their deceased loved ones, others hoped to learn what happens to us after death, and yet others sought to acquire posthumous writings of famous authors, such as Charles Dickens… . Contemporary conversations about AI, too, often touch on the topic of communication with the dead. For example, a developer of a popular AI-driven chatbot Replika explained that she created it after her best friend’s death in an accident, in an attempt to partly recreate his mind—a practice that mirrors Spiritualist communication with the dead. What is new is the way in which this communication happens.

Attempts at AI-enabled communication with the dead are more common than one might imagine. In 2021, Microsoft secured[i] a patent for a chatbot enabling communication with the deceased. Three years later, new publications on the topic are abundant, and a new term, “ghostbots[ii],” has even been proposed to describe this phenomenon.

There is more to it than just a metaphor. Mediums who tried to receive new Dickens novels from the otherworld were probably familiar with who Dickens was and with his writing. Communication with the dead, in this case, could be seen as channeling this informational residue, whose recombination allowed the production of new texts by the same author. The medium, in other words, operated akin to a biological LLM. Or, to invert this analogy and translate it into esoteric language, we could say that an LLM is a way to collect thought-forms from the world of ideas and materialize them as new texts.

AI and Subtle Planes

In modern esoteric literature, there exists the notion that information produced by humanity as a whole forms a separate layer, or layers, of reality that in esoteric jargon are often called subtle planes, including the astral and mental planes. Precise lists of planes differ from source to source, but these two appear most often. In early twentieth-century esoteric literature, the notion of subtle planes was closely associated with the idea of the unconscious (and other similar concepts – the subliminal, the subconscious, etc.), which was considered the source of superhuman powers such as telepathy and clairvoyance. Here is, for example, how one author describes the mechanism of visions in crystal balls:

…there are more things in our subconsciousness than we know, for the things we are aware of knowing are much fewer than the things we read or seen or otherwise contacted at one time or another. These things in our subconsciousness have got there in different ways. We may once have known a thing and forgotten it, or in other words, it may once have been present in our consciousness and have now sunk back into our subconsciousness; or we may never have known the thing, that is, we may have observed it without knowing that we had done so, and have thus allowed it to go direct to our subconsciousness. Anything in this subconsciousness may be brought up by means of a proper stimulus, in the present case a speculum, and may make its appearance haphazard, in a fragmentary, dreamlike, meaningless manner… (Theodore Besterman. Crystal-gazing. London: William Rider & Son, 1924, 136).

Note the key points of this argument. First, there exists a special, subconscious layer of the mind that we can tap into to get important insights that we were not aware of. Second, this can be done by using a special device, a crystal ball. Third, there is a warning: the visions in the ball require additional interpretation, as they will appear “in a fragmentary, dreamlike, meaningless manner.”

In a world where people outsource more and more of their memory to cloud storage, social networks, and other technological applications, it is not entirely implausible to say that these technological devices become a modern-day subconsciousness, which stores things we encountered but are no longer aware of. An LLM then becomes a crystal ball that taps into this ocean of unconsciousness and, just like an actual crystal ball, generates for us these “haphazard” results. And just as a crystal ball requires a knowledgeable person able to decipher its messages, so do the replies of LLMs.

How can we avoid this haphazardness? Well, esoteric literature has already proposed some answers. For starters, an operator should keep a scrying session as focused as possible. Here is how Donald Tyson explains it:

The second difficulty faced by scryers is how to control and direct the subject of the scrying session. If uncontrolled, we might receive visions about anything. Since scried visions tend to be rather cryptic at times, it would be almost impossible to make sense of them because we would have no field of reference (Donald Tyson. Scrying for Beginners. Llewellyn Publications, 1997, 19).

This advice resonates with recent publications in the field of LLMs. One thing that generally improves the relevance of LLM-generated replies is creating fine-tuned models tailored to a specific field. In a recent article[i] in Nature, Clusmann et al. indicate that in medicine, “models designed specifically for medical applications and trained on medical data show promising progress in this domain.” Similarly, Paul et al. argue[ii] that for law-related tasks one should consider both the domain and the country (in the article, the authors make a case for models trained on Indian legal documents).

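To make the idea of a field-specific model concrete in the same toy terms as the bigram sketch above, fine-tuning can be thought of as continuing training on domain texts so that the model’s learned regularities, and therefore its replies, are pulled toward that field. A minimal Python illustration follows; real fine-tuning adjusts the weights of a neural network rather than word counts, and the corpora here are invented placeholders.

# A toy illustration of "fine-tuning": continue training an existing
# model on domain-specific text so its statistics shift toward that field.
# Real fine-tuning updates neural network weights, not word lists.

from collections import defaultdict

def update(model, corpus):
    """Add the word-to-word statistics of a corpus to an existing model."""
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

general_text = "the patient waited the train was late the patient read a book"
medical_text = "the patient presented with fever the patient was given antibiotics"

model = update(defaultdict(list), general_text)   # base "pre-training"
model = update(model, medical_text)               # domain "fine-tuning"

# After fine-tuning, continuations of "patient" lean toward the medical domain.
print(model["patient"])   # ['waited', 'read', 'presented', 'was']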

All of this seems in line with the metaphor of scrying visions producing unclear and fragmentary results that require a careful and knowledgeable operator to transform them into a meaningful reply, rather than with the idea of a superior intelligence yielding perfect answers to the world’s most complex questions and eventually replacing human beings.

Conclusion

Before computer games, there were tabletop games. Before digital books, there were physical books. And before turning to ChatGPT, people used other means to boost their creativity and extract new narratives from the ocean of information that humanity constantly generates – Tarot readings, automatic writing, channeling, and so on. Although the technologies we use for extracting such information are constantly changing, plenty of structural similarities remain in how this process works.

It goes beyond simple similarity. One could argue that these techniques are, in a sense, identical. After all, it is as rational to believe in the power of AI to recreate your dead loved ones as it is to believe in the power of a medium to establish a connection with them during a séance. The difference between the techno-scientific language of sci-fi and esoteric language is largely a matter of perspective, an explanatory framework that we choose to employ. We could say that automatic writing was an old, low-tech (some would also add inefficient) version of modern LLMs. But it is equally valid to say that LLMs can operate as present-day versions of automatic writing, Tarot reading, and channeling.

The limitations and issues of modern LLMs were already obvious to people engaging in structurally similar esoteric practices. By changing our language from unreasonable expectations of machine gods coming to save, or destroy, humanity to a much more humble idea of a divinatory tool that helps us to tap, with limitations, into humanity’s collective subconsciousness, we might be able to better understand the capabilities and limitations of modern AI and use it in more responsible and productive ways.