Is GPT-4 showing signs of intelligence?
AGI and the Limits of Human Perception: Can Machines Expand Our Definition of Intelligence?
The field of artificial intelligence is rapidly evolving, with ongoing debate about how machine intelligence should be characterized and evaluated. In a recent talk at MIT, titled “Sparks of AGI: Early Experiments with GPT-4,” Sebastien Bubeck discussed his team’s early experiments with GPT-4, highlighting how GPT-4 can reason, solve problems, think abstractly, and comprehend complex ideas. Bubeck is careful to explain that measuring whether something is intelligent depends heavily on how intelligence is defined. For the purpose of his presentation, he uses a consensus definition published in 1994 by a group of 52 psychologists.
According to this definition, “Intelligence is a very general mental capability that, among other things, involves the ability to:
Reason
Plan
Solve problems
Think abstractly
Comprehend complex ideas
Learn quickly and learn from experience”
Bubeck proceeds to discuss several examples in which he and his team use GPT-4 to qualitatively test each ability. Citing several example prompts that his team tested, Bubeck argues that GPT-4 can reason, solve problems, think abstractly, and comprehend complex ideas.
On the other hand, he explains that GPT-4 is not able to plan, particularly with respect to problem-solving. And although GPT-4 can learn quickly and learn from experience, its current capabilities only allow it to do so within a single session. In its current form, GPT-4 learns quickly from experience during a session; once the session is reset, however, it starts from scratch.
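This session-bound learning follows from how chat models are typically used: the model itself is stateless, and its only “memory” is the list of prior messages replayed on every request. A minimal sketch of that idea, using a hypothetical `reply` function as a stand-in for an actual model call:

```python
# Sketch: a chat model's only "memory" is the context replayed each turn.
# `reply` is a hypothetical stand-in for the model; it just reports how
# much context it was given, since the real model call is beside the point.

def reply(messages):
    """Stand-in for a model call: the model sees ONLY what is in `messages`."""
    return f"I can see {len(messages)} prior message(s)."

session = []                        # in-session "memory"
session.append("Call me Ishmael.")  # something the model "learns" this session
print(reply(session))               # → I can see 1 prior message(s).

session = []                        # session reset: the learning is gone
print(reply(session))               # → I can see 0 prior message(s).
```

Everything “learned” lives in the `session` list, not in the model, so resetting the session discards it entirely.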
He concludes the presentation by asking, “Is GPT-4 intelligent? And does it matter?” According to his interpretation of the consensus definition he used (and his inherent Microsoft bias), he argues that GPT-4 is intelligent. He concedes, however, that it does not meet all six of the capabilities outlined by the group of 52 psychologists: it lacks memory from session to session, and it cannot solve “problems that require thinking several steps in advance.”
Can humans be objective about what it means to be intelligent?
One of the more interesting parts of Bubeck’s presentation is the point he makes about human beings’ inability to be objective about intelligence. He explains that we as humans currently have only one model for truly understanding intelligence, one based on the natural intelligence of the natural world. He wonders whether artificial intelligence will usher in a whole new definition of intelligence. Considering that humans don’t understand how AIs make decisions, or even how AIs work altogether, can AIs expand our definition of intelligence? Bubeck challenges us to think beyond the question of whether or not the AI is intelligent and instead asks, “Is it useful?” If we think it is, then does it matter whether it matches our current definition of intelligence?
What does the machine think?
Bubeck’s discussion inspired me to ask ChatGPT whether natural, human/animal-based intelligence and machine intelligence should be defined by the same capabilities. Here is how it responded:
“Both natural, human/animal-based intelligence and machine intelligence share some common characteristics. However, the definition of intelligence for machine intelligence may also include additional criteria such as the ability to process large amounts of data quickly and accurately, to learn from that data and adapt to new situations, and to make decisions based on that data.
Machine intelligence may also involve specific techniques and algorithms designed to mimic or improve upon human cognitive abilities such as perception, problem-solving, and decision-making. However, it's important to note that machine intelligence is still a rapidly evolving field, and there is ongoing debate about the definition and measurement of intelligence in machines.”
This reinforces Bubeck’s point that humans must think beyond our own understanding and definitions of human intelligence when considering whether machines have reached Artificial General Intelligence (AGI). For one, it might be very difficult to tell whether machines have reached AGI, because AI technology seems to be making incremental advances, and the definition of machine intelligence with respect to human intelligence is evolving. In addition (and relatedly), the standards by which human intelligence is defined are arguably entirely different from how we should be thinking about machine intelligence. In some ways, machines have already surpassed humans with regard to processing large amounts of data quickly. At the same time, machines are, at best, at the earliest stages of mimicking or improving upon human perception.
Is a human/machine mind meld the likely path to AGI?
Logically, it may turn out that AGI stems from a hybrid scenario: human plus machine. These man-machine hybrids will likely involve Brain-Computer Interface (BCI) technologies. This is an interesting area that we are exploring at FFC. I did a quick search and identified several companies working at the fusion of human and machine intelligence.
Some companies working on this include:
Neuralink - technology to enable people to control computers and other devices directly with their brains.
Kernel - advanced neural interfaces that aim to improve human intelligence and memory.
CTRL-labs (acquired by Facebook in 2019) - translates electrical signals from the body into digital commands, enabling people to interact with machines more naturally and efficiently.
Paradromics - enables bidirectional communication between the brain and computers to treat neurological disorders such as Parkinson's disease and epilepsy.
Blackrock Neurotech - advanced neural recording and stimulation systems that enable precise communication between the brain and machines.
In my opinion, BCI is likely one of the next frontiers of AI technology, as it has the potential to bridge the gap between human and machine intelligence. BCIs could lead to the development of advanced man-machine hybrids.
The technology has already shown promise in a variety of applications, including medical treatments for conditions such as Parkinson's disease and epilepsy, and assistive technologies for people with disabilities. However, there is still a great deal of research and development needed to fully realize the potential of BCI technology.
If successful, the development of BCI technology could lead to a range of new possibilities in areas such as human augmentation, enhanced human-machine collaboration, and the creation of more sophisticated and capable AI systems, including the potential for an AI that reaches AGI.
Bubeck’s presentation is well worth a listen, though he is biased due to his role at Microsoft. It raises important questions about the definition of intelligence and its relationship to machine intelligence. While Bubeck argues that GPT-4 can be considered intelligent, he acknowledges that the definition of intelligence for machines may differ from that for humans.
As AI technology continues to evolve, the definition of machine intelligence and the standards by which it is measured may change, leading to a new definition of AGI. The future of AGI is uncertain, but the discussion sparked by Bubeck's presentation highlights the need to broaden our understanding of intelligence beyond our current models. We must remain open to the possibility of new definitions emerging, including those that combine the best aspects of human and machine intelligence.
Here is a link to the paper he and his colleagues published earlier this month.
Whether you believe the current version of GPT-4 is intelligent, that AGI will never be reached, or that it will be reached by machine alone or in combination with human intelligence, please remember to ignore the confusion!