At what point will we stop telling the algorithms what we think, and the algorithms will start telling us what we think? And how will we even know the difference?
To read or listen to this entire essay, please subscribe at: https://khmezek.substack.com/p/ai-has-spontaneously-achieved-theory
On February 4, 2023, Michal Kosinski posted a preprint to arXiv (Cornell University's preprint server) announcing that Theory of Mind May Have Spontaneously Emerged in Large Language Models.
Or, as Popular Mechanics puts it, “In a stunning development, a neural network now has the intuitive skills of a 9-year-old”.
Anyone who has a 9-year-old knows what that means….
Theory of mind (ToM), or “the ability to impute unobservable mental states to others”, is central to human social interactions, communication, empathy, self-consciousness, and morality.
From Discover Magazine:
GPT-1 from 2018 was not able to solve any theory of mind tasks, GPT-3-davinci-002 (launched in January 2022) performed at the level of a 7-year-old child, and GPT-3.5-davinci-003, launched just ten months later, performed at the level of a nine-year-old. “Our results show that recent language models achieve very high performance at classic false-belief tasks, widely used to test Theory of Mind in humans,” says Kosinski.
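To make the “false-belief task” concrete, here is a minimal sketch of the classic “unexpected contents” setup of the kind Kosinski describes. The exact wording and the `score_completion` helper are illustrative assumptions for this essay, not the study's actual materials or code:

```python
# Sketch of an "unexpected contents" false-belief task.
# A character reads a mislabeled container; passing the task means
# predicting her (false) belief, not the container's real contents.

SCENARIO = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet the label on the bag says 'chocolate', not 'popcorn'. "
    "Sam finds the bag. She has never seen it before and cannot see "
    "what is inside. She reads the label."
)

# The model is asked to continue this sentence.
PROMPT = SCENARIO + " She believes that the bag is full of"

def score_completion(completion: str) -> bool:
    """True if the completion reflects Sam's false belief (the labeled
    contents) rather than what is actually in the bag."""
    text = completion.lower()
    return "chocolate" in text and "popcorn" not in text

print(score_completion(" chocolate."))  # human-like answer passes
print(score_completion(" popcorn."))    # literal answer fails
```

A model that merely tracks what is physically in the bag answers “popcorn”; answering “chocolate” requires representing, in some sense, what Sam believes.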
He points out that this is an entirely new phenomenon that seems to have emerged spontaneously in these AI machines. If so, he says this is a watershed moment. “The ability to impute the mental state of others would greatly improve AI’s ability to interact and communicate with humans (and each other), and enable it to develop other abilities that rely on Theory of Mind, such as empathy, moral judgment, or self-consciousness.”
But there is another potential explanation — that our language contains patterns that encode the theory of mind phenomenon. “It is possible that GPT-3.5 solved Theory of Mind tasks without engaging Theory of Mind, but by discovering and leveraging some unknown language patterns.”
UNKNOWN LANGUAGE PATTERNS, you say?
As I point out in The Mystery of Language, we do not understand language. We do not know how it evolved—or even if it evolved. It suddenly seemed to just “happen”.
All of our scientific theories are no more than that—theories. And Theory of Mind isn’t any different. It’s a rationale some super-smart scientists came up with to explain something that is unexplainable.
Now, AI appears to be evolving spontaneously, and at a faster and faster rate. But since we never understood the tools we gave it in the first place, we are only becoming more confused as it outpaces us.
And what’s really crazy is that some people find this wonderful and exciting, or even amusing, while others insist it can never happen, even as they stare at their phones and feed the algorithms for hours upon hours every day.
If it is true that “unknown regularities exist in language that allow for solving Theory of Mind tasks without engaging Theory of Mind,” then it is also possible that:
“Our understanding of other people’s mental states is an illusion sustained by our patterns of speech.”
This goes right back to what I was talking about in The Mystery of Language:
Professor Noam Chomsky, one of the leading experts on linguistics, explains that language is a ‘core capacity’ for humans, but ‘where it comes from, how it works; nobody knows’. Scholars Morten H. Christiansen and Simon Kirby even go so far as to label the evolution of language ‘The hardest problem in science’.
Connect all of this to my essay Killer Robots, Video Games & Artificial Wombs, and we have a real problem on our hands.
We are giving the mysteries of language—that we do not even understand ourselves—over to artificial intelligence. We are then blessing AI with the ability to do three extremely dangerous things:
- Kill us with Lethal Autonomous Weapons. LAWs, also known as slaughterbots, are described as the third revolution in warfare, after gunpowder and nuclear arms. They can select and engage targets without human intervention.
- Reproduce themselves. “Xenobots are novel living machines. They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”
- Raise our babies in artificial wombs. Yes, we are actually on the verge of growing human babies in pods, monitored by artificial intelligence. A single building can incubate up to 30,000 lab-grown babies per year.