AI (artificial intelligence) is a promise with many faces. With ChatGPT being the most recent hype, it might be time to take a closer look at what's going on here.
In all the enthusiasm surrounding ChatGPT and related AI, I see a lot of confusion and misunderstanding. This leads to wild statements and heated discussions. All kinds of expectations are voiced that make clear that the essence of artificial intelligence is not well understood. The pinnacle, for me, was the statement "We live in an AI society". This is highly exaggerated and needs nuance.
The beginning: Artificial Intelligence
The term 'Artificial Intelligence' was first written down in 1955, when a number of scientists proposed to research AI because, in their view:
"Every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it".
So this was about describing (human) learning and every other property of (human) intelligence with such precision that a machine should be able to simulate it.
Artificial intelligence is the imitation of intelligent human behaviour by a machine. Keep this insight in mind, because it is the common thread in all the hype and discussion surrounding AI developments.
Read the article Not Everything We Call an AI Is Actually Artificial Intelligence. Here's What to Know by George Siemens to learn more about AI.
The Golden Grail: Artificial General Intelligence
AGI, or Artificial General Intelligence, is a form of AI capable of performing any task that the human mind is capable of. So this is the AI that the researchers envisioned in 1955: AGI systems capable of learning, reasoning, planning and communicating in the same way as human beings.
This type of AI has not yet been developed and remains a challenge for researchers, but it is the ultimate goal of many research institutions and companies working on AI. We have been at it for almost 70 years, and hundreds of billions of US dollars have been invested in it.
This is therefore AI in which the machine thinks and acts 100% humanly and is also self-aware. The examples can be found in… science fiction. And in the wild dreams of Singularity followers.
To clear up any misunderstanding: AGI does not exist. Period. Even after almost 70 years, the promise has not been fulfilled. Will it ever be possible? I will come back to that later.
The reality: Narrow AI
Narrow AI (ANI, Artificial Narrow Intelligence) focuses on performing specific tasks, such as recognizing objects in images, translating text from one language to another, or maintaining a conversation. This type of AI is specifically trained for the task it needs to perform, and is therefore also known as functional AI.
Narrow AI is very useful in many industries, from retail to healthcare, and is currently widely used in various applications such as chatbots, virtual assistants and autonomous systems.
This is AI whose output on a specific task is just as good as, and often even better than, a human's. It is not AGI, but there is certainly value in this 'limited' form of AI.
Examples of Narrow AI (or Weak AI) are Deep Blue, AlphaGo, Siri, Alexa, image recognition in self-driving cars and automatic recommendations on Netflix.
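To make 'specifically trained for one task' concrete, here is a minimal sketch in Python using scikit-learn. The dataset and model choice are purely illustrative, not how any of the systems above actually work:

```python
# A minimal sketch of 'narrow' AI: a model trained for exactly one task.
# Uses scikit-learn's bundled digits dataset; the setup is illustrative only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# The model learns this one task well...
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(f"digit accuracy: {model.score(X_test, y_test):.2f}")

# ...but it can do nothing else: it cannot translate, converse, or even
# recognise a digit drawn at a different resolution. That is Narrow AI.
```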
The new promise: Generative AI
Generative AI (or Creative AI) focuses on generating new content, such as images, music, text or other digital media. This is done by training the model on a large amount of existing data, enabling it to create new content that is somehow similar to the original data.
Generative AI models can create images that resemble a certain style or color, or generate text that resembles a certain writing style. The output of the algorithm appears to be man-made, but it is 'fake'.
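The principle can be shown with a deliberately tiny sketch in Python: a first-order Markov chain that learns nothing but word statistics from a few invented sentences and then 'generates' new text. Real generative models are incomparably larger, but the mechanics are the same in spirit: statistics in, similar-looking content out.

```python
# A toy illustration of the generative idea: learn statistics from existing
# text, then produce new text that merely resembles it. The corpus is invented.
import random
from collections import defaultdict

corpus = (
    "the machine imitates the human and the human admires the machine "
    "because the machine sounds like the human"
).split()

# Count which word tends to follow which (a first-order Markov chain).
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# Generate 'new' text by sampling from those statistics.
random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))
# The result looks fluent but is 'fake': pure statistics, no meaning.
```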
Examples of Generative AI are ChatGPT, DALL-E and Midjourney.
Also read the McKinsey article What is Generative AI?
The discussion: AGI to be or not to be?
There is a lot of discussion about AI, and it is not always nuanced. People I like to follow in this 'discourse' are Gary Marcus, Grady Booch, Noam Chomsky and Mariya Yao, all people who have more than earned their spurs in the field of artificial intelligence.
Gary and Grady had an interesting discussion about Grady's tweet that AGI will not happen in your lifetime. It is entertaining to read the reactions, which range from "Finally, someone who says it" to "You clearly don't understand a thing about the field".
Are we at a dead end?
It is a real mistake to see the new developments in Generative AI or Narrow AI as a prelude to Artificial General Intelligence (AGI). This is a fallacy many enthusiasts fall for.
Narrow AI took a giant leap with deep learning, where the machine learns on its own and no longer needs to be fed explicit instructions. Recall the original challenge of AI: precisely describing human intelligence. With 'deep learning' that is no longer necessary! This, too, led to enormous hype and the expectation that AGI was now very close.
But the machine only learned something about one specific task and nothing more. Ask the AI developers of self-driving cars how little progress they have made. With 'deep learning' you eventually hit the ceiling of what is possible. AGI is not the sum of ever more deep learning.
The fact that ChatGPT - currently based on GPT-3.5 - will probably move to the more powerful GPT-4 this year does not mean that AGI is in the offing. ChatGPT will do its specific job even faster and better, but it will never grow into AGI. This road, too, ends in a dead end.
Fake is fake
Why will the golden grail of AGI not be reached any time soon? The answer lies in the challenge set in 1955: to describe (human) learning and every other feature of (human) intelligence so precisely that a machine can simulate it.
As long as we do not understand how people learn, and how they store and transfer knowledge, we cannot explain to a machine how that works. Ergo: no 100% imitation.
And any imitation that is not 100% human produces a weak imitation that we humans recognise as 'fake' in no time. Note: I am not talking here about the output of Generative AI algorithms in particular. 'Fake' videos are already almost indistinguishable from the real thing. But the machine that makes those fake videos is anything but an imitation human.
It's in the semantics
The fact that deep learning needs no instructions does not get around the lack of semantics. The problem with all forms of AI so far is that they are based on statistical models and have no understanding of meaning (semantics).
And it is precisely semantics that is the greatest challenge. We still know far too little about it. It is complicated and complex, and shortcuts such as deep learning ultimately turn out to be dead ends.
I refer to Grady Booch's explanation of why AGI is not going to happen in the foreseeable future:
"We do not yet - nor do I expect we will anytime in the future - have the proper architecture for the semantics of causality, abductive reasoning, common sense reasoning, theory of mind and of self, or subjective experience." (Grady Booch)
The machine is blind
The problem with comparing machine learning to human learning is that when humans learn, they associate the patterns they identify with semantic abstractions of the underlying high-order objects and activities. Our background knowledge and experiences, in turn, give us the necessary context to reason about those patterns and identify those that are most likely to represent robust, actionable knowledge.
Machines, on the other hand, blindly look for the strongest signals in a pile of data. In the absence of background knowledge or life experiences to understand the meaning of those signals, deep learning algorithms cannot distinguish between meaningful and false indicators. Instead, they blindly code the world according to statistics rather than semantics.
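A toy sketch in Python, with invented data, of what 'blindly coding the world according to statistics' can look like: a meaningless signal that happens to match the labels in training is learned as if it were meaning, and falls apart as soon as the accident disappears.

```python
# A toy version of 'statistics rather than semantics': if a meaningless
# signal correlates with the label in training, the model learns the shortcut.
# All numbers and the setup are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: the 'meaningful' signal (noisy, weakly predictive).
# Feature 1: a watermark that, by accident, matches the label perfectly in
# training, like a copyright tag that only appears on photos of wolves.
y = rng.integers(0, 2, n)
meaningful = y + rng.normal(0, 2.0, n)   # real information, buried in noise
watermark = y.astype(float)              # spurious, but spotless in training
X = np.column_stack([meaningful, watermark])

model = LogisticRegression().fit(X, y)

# At test time the accidental correlation is gone: the watermark is random.
y_test = rng.integers(0, 2, n)
meaningful_test = y_test + rng.normal(0, 2.0, n)
watermark_test = rng.integers(0, 2, n).astype(float)
X_test = np.column_stack([meaningful_test, watermark_test])
print(f"test accuracy: {model.score(X_test, y_test):.2f}")  # near chance
```

The model never 'knew' what a watermark is; it only encoded the strongest statistical signal it could find.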
The arrogance of AI
The promise of AI is rather arrogant, in my view. As if we could capture 300 million years of evolution of the human mind in instructions and algorithms. And even then: how valuable and desirable is it to emulate human thinking in a machine?
"It is hard to know where [AI researchers] have gone wronger: in underestimating language or overestimating computer programs." (Drew McDermott)
First, let us see how we can use Narrow AI and Generative AI in valuable ways; there are enough challenges there already. And again, consider the self-driving car, a 'narrow AI' promise that simply will not materialise.
Let's not create all kinds of false expectations again about how 'super intelligent' AI is going to become. The reality is that AGI still does not exist, even after almost 70 years, and that no development has brought us anywhere near that 'golden grail'.
Replication as a travesty
And let's not think that the output of a Generative AI algorithm has anything to do with art. I would like to give the last word on that to Nick Cave, the Australian composer and singer who runs The Red Hand Files, a platform where fans can ask him all kinds of questions.
Fans sent Nick some ChatGPT lyrics written "in the style of Nick Cave". His answer is, as always, sincere, loving and... razor-sharp:
"What ChatGPT is, in this instance, is replication as travesty." (Nick Cave)