Here is something I did not expect to encounter today: the claim that ChatGPT can interpret speaking in tongues.
ChatGPT Can Interpret Speaking in Tongues?
For those who may not be familiar with the phenomenon and practice found in Pentecostal and charismatic churches, speaking in tongues refers to ecstatic speech produced in an altered state of consciousness. Although the articles I link to above claim that ChatGPT told someone they were speaking ancient Sumerian, I will confidently state that this was ChatGPT generating plausible speech (all it is capable of doing) and not an accurate linguistic analysis of what the person said. There have been many studies of the phenomenon (sometimes referred to with the technical term glossolalia), and it does not have the patterns of human language. I have shown in another article that LLMs do not cope well with ancient languages, although they will often offer a "translation" with great confidence that does not correspond to what the text means.
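If it helps to see why the chatbot will name some language rather than admit uncertainty, here is a deliberately minimal sketch in Python. Everything in it is invented for illustration (the labels, the scoring function); it is not how any real system is implemented, but it captures the structural point: a model that must output a probability distribution over candidate answers always has a top answer, even for gibberish.

```python
import math

# A toy "language identifier" in the spirit of what a model does internally:
# it scores every candidate label and must return a distribution over them,
# so even pure gibberish receives a confident-looking answer.

LABELS = ["English", "Ancient Sumerian", "Koine Greek"]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def identify(text):
    # Stand-in scoring: arbitrary character arithmetic instead of real
    # linguistic analysis. The point is structural, not the numbers.
    scores = [sum(ord(c) for c in text) % (i + 7) for i in range(len(LABELS))]
    probs = softmax(scores)
    best = max(range(len(LABELS)), key=lambda i: probs[i])
    return LABELS[best], probs[best]

label, confidence = identify("shanda korosheth amala")  # glossolalia-like input
print(f"Identified as {label} (confidence {confidence:.0%})")
# There is no "this is not a language" option unless one is engineered in.
```

This structure is why asking "what language is this?" so readily gets back "ancient Sumerian" rather than "none."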
So what should you say if someone claims that ChatGPT can interpret speaking in tongues? You will probably struggle to get them to understand the limitations of the technology if you focus on whether what they say is or is not ancient Sumerian. Even if you bring in a linguistic expert who knows Sumerian, the person speaking in tongues may choose to trust AI, believing it to be more accurate and less biased.
The key, then, is to help them understand why neither of those things is true. AI is not consistently more accurate than a human expert, nor is it unbiased.
ChatGPT Does Not Think, So Why Does It "Am"?
No one who has explored the capabilities of LLMs can deny that they are an impressive technology. The rest of this article will help you understand how they accomplish the amazing feats they do without being conscious.
An article a while back put its finger on the issue, yet offered a "solution" that reflects a misunderstanding of the technology.
The "problem" (or rather, the thing we allow ourselves to be misled by) is that AI chatbots imitate human speech, and human speech is how we, as conscious beings, express our inner thoughts and reasoning.
LLMs do not reason in anything like the sense that humans do. AI translation works by matching patterns of human speech between two languages. Anyone who does not know another language could be misled into thinking that such an AI understands human language. Those who know more than one language can see the evidence that it does not understand the words: at times it substitutes a meaning that is connected to a word but does not fit the particular context at all.
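To make that failure mode concrete, here is a toy, hypothetical word-for-word translator. The tiny vocabulary is invented for this example and bears no relation to how real systems are built, but it shows how picking the most common pattern for a word, without understanding, substitutes a connected-but-wrong meaning:

```python
# A deliberately naive English-to-French translator: each word maps to its
# single most common rendering, with no notion of context. The dictionary
# here is invented for illustration, not taken from any real system.

MOST_COMMON = {
    "i": "je", "sat": "me suis assis", "on": "sur", "the": "la",
    "bank": "banque",   # the financial sense wins on raw frequency...
    "of": "de", "river": "rivière",
}

def translate(sentence):
    return " ".join(MOST_COMMON.get(word, word) for word in sentence.lower().split())

print(translate("I sat on the bank of the river"))
# -> "je me suis assis sur la banque de la rivière"
# "banque" is connected to "bank" but wrong here; the context calls for
# "rive" (riverbank). Pattern matching without understanding.
```

Modern neural translation is vastly better than this toy, but its errors have the same shape: a statistically likely rendering that the context rules out.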
In the same way, when LLMs say "I think" and even "I feel," it is hard for the user to remember that the model is doing something similar: matching patterns of speech, not reporting an inner state. In a sense it is easier to train an AI to imitate speech than to translate language consistently and accurately. There are lots of possible meaningful responses to any given input; there are far fewer ways of translating a given text into a target language accurately.
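A back-of-the-envelope way to see the asymmetry (the sets below are invented, purely to make the counting vivid):

```python
# Many strings count as a "meaningful response" to a greeting; very few
# count as an accurate translation of a given sentence. Toy sets, invented
# for illustration only.

acceptable_replies = {
    "Hello!",
    "Hi there!",
    "Hey, how are you?",
    "Good to see you!",
    "Hello! What can I help you with today?",
}
accurate_translations = {"Where is the library?"}  # for "¿Dónde está la biblioteca?"

print(len(acceptable_replies), "valid-sounding replies vs.",
      len(accurate_translations), "accurate translation(s)")
# Imitating conversation only has to land somewhere in the large set;
# accurate translation has to land in the small one.
```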
As I considered the way LLMs are misunderstood because they use the first-person pronoun that, in our own speech, reflects human consciousness, I was reminded of the meme that says "I do not think, therefore I do not am." I had the idea to make something similar to try to help people understand LLMs better. An LLM does not think, therefore it should not am; yet it does. It does so because it is imitating human speech. It has learned to play the game of human language, just as the same family of technology has learned to play Go and chess. It can follow the rules with an accuracy that even exceeds a human player's. Yet it has none of the inner life that a human player does. There is no desire to win, no satisfaction in finding an effective move, no frustration at losing.
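The game analogy can be made concrete in a few lines. This is not how a real Go or chess engine works (those use deep search and learned evaluation functions), but even the simplest version shows that "playing" reduces to maximizing a score, with no experience attached:

```python
# A toy move chooser: score the legal moves, pick the argmax. The
# evaluation numbers are made up; real engines compute them via search
# and learned functions, but the structure is the same maximization.

def best_move(legal_moves, evaluate):
    # Pure maximization over a scoring function: play without experience.
    return max(legal_moves, key=evaluate)

scores = {"e4": 0.30, "d4": 0.28, "a3": -0.10}  # invented evaluations
move = best_move(scores, lambda m: scores[m])
print("Chosen move:", move)
# -> e4. No desire to win, no satisfaction in the choice, just an argmax.
```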
Spread the Word
The ChatGPT subreddit is full of people who trust AI to provide guidance, and who think that behind its imitation of human speech there is something akin to human thought. This is dangerous, since among the possible combinations of words that will work as a response to an input are things that are harmful. One Redditor asked ChatGPT for directions to a tall building because they felt like killing themselves. The AI chatbot said it felt bad for them, and then provided the directions. It did not feel bad. It did not feel anything. The output should make this clear, but for so long humanity could safely assume that original speech, and first-person speech especially, was only ever produced by sentient beings. The same was once true of chess; now it is true of the creation of speech, images, and music. There is nothing akin to human creativity behind it.
Hopefully some of the images included in this article will help spread the word. It is important that people come to understand what this technology can and cannot do. It cannot think. It can never be made to guarantee that it will not produce inaccurate output. It cannot be made to imitate human speech while never producing output that sounds like human thought and emotion.
And the same applies to those who say that ChatGPT can interpret speaking in tongues. You are better off with Google Translate when it comes to translation, or at least no worse off.
When I asked ChatGPT itself to generate an image, it managed to produce the one below. Efforts to get it to tweak the image produced unsuccessful variants, as is typical when using an image generation mechanism that incorporates text. The reason I asked for tweaks was that I realized it would be funnier if I had it say "I do not think, therefore I should not am…but I does."
I made this when brainstorming ways of highlighting the book I've recently finished writing with my computer science colleague Ankur Gupta:
Also on AI and LLMs elsewhere:
The End of Search, the Beginning of Research
That article is very excited about an LLM doing research-like work. However, the claim that it performs at the level of a beginning PhD student simply does not match my own experience with the same LLM, ChatGPT. Take a look at this recent interaction I had, exploring its capacity (or inability) to create a literature review relevant to a possible avenue of future research of mine.
There is also a lot of discussion of the environmental impact of LLMs:
Here I think it is crucial to point out that things like Zoom calls also have a high impact. We need to address the environmental impact of technology, but in a fair way, not by appealing to environmental impact in an attempt to stifle just the one development that worries us.
I disagree with this AI voice, but it is still relevant:
The Research Paper Is Dead โ Now What?
DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters
ChatGPT in Medical Librarianship: A Game-Changer in Information Access
AI tackles million-step math problems
What Does It Mean To Resist AI?
How we will know when robots believe
Is ChatGPT as smart as an octopus?
Automatic Transcription of Syriac Handwriting and More: Progress and Challenges
Finally, I made a video using an AI video generator, to get John the Baptist to endorse my book Christmaker:
It cannot think, but that doesn't mean it cannot be useful…