
The Sparks of AGI or the Flickers of Overhype

As an AI practitioner, I cannot overstate the importance of caution when evaluating the recent paper, "Sparks of Artificial General Intelligence: Early experiments with GPT-4". While GPT-4's capabilities are undeniably impressive, we must not let auto-regressive models that merely predict the most plausible next token deceive us into believing that AGI is imminent.

The paper claims that GPT-4, the latest iteration of OpenAI's large language models (LLMs), exhibits "sparks" of AGI. The researchers argue that GPT-4 surpasses prior models on diverse tasks without task-specific training and demonstrates near-human performance in areas such as mathematics, coding, vision, medicine, law, and psychology. However, it is vital to assess these claims critically: GPT-4's patterns of intelligence remain distinct from human thinking, and there is no firm definition of AGI, or of intelligence in general.

I have long maintained that for AGI to materialize, it requires more than large self-supervised models (LSSMs). Sensory grounding for meaning and understanding, algorithmic access to agency, virtually unlimited context, causal inference, and a transformative shift in computing paradigms (read: quantum) are indispensable parts of this complex equation. The physical world is chaotic, unpredictable, and challenging to navigate; without incorporating agency and sensory interaction into multimodal ingestion and response, there would be virtually no 'generalization.'

It is crucial to acknowledge the usefulness of auto-regressive LLMs as searchbots, information gatherers, writing tools, and coding assistants. However, these models have inherent limitations: frequent hallucinations, a primitive understanding of the physical world, limited context and working memory, and a lack of Turing completeness. Auto-regressive generation is an exponentially divergent diffusion process, making it uncontrollable by design.

Prompt engineering, fine-tuning, and reinforcement learning from human feedback (RLHF) offer valuable support, but they cannot alter the fundamental limitation of auto-regressive token production, which is subject to exponential divergence. Most human responses are not generated auto-regressively but are planned ahead, without exponential divergence. When writing a mathematical proof, for example, a human discards a line of reasoning that does not lead to the desired conclusion and starts over; an auto-regressive model commits to each token as it is emitted and has no such recourse.
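
To make the divergence argument concrete, here is a back-of-the-envelope sketch (my own illustration, not from the paper). It assumes a fixed, independent per-token probability of drifting off track, which is a deliberate simplification; under that assumption, the chance that an n-token answer stays entirely on track is (1 - epsilon)^n, and it collapses quickly as n grows. The epsilon and length values are illustrative only.

```python
# Back-of-the-envelope illustration of exponential divergence in
# auto-regressive generation. Simplifying assumption: each generated
# token independently has probability eps of taking the sequence off
# track, so an n-token output stays fully on track with probability
# (1 - eps) ** n.

EPSILONS = [0.001, 0.01, 0.05]      # assumed per-token error rates (illustrative)
LENGTHS = [10, 100, 1000, 4000]     # output lengths in tokens (illustrative)

for eps in EPSILONS:
    for n in LENGTHS:
        p_on_track = (1 - eps) ** n
        print(f"eps={eps:<6} n={n:<5} P(still on track) = {p_on_track:.4f}")
```

Even a 1% per-token error rate leaves roughly a 37% chance that a 100-token answer is error-free under this toy model. Real models violate the independence assumption, but the qualitative point stands: errors compound, and tokens already emitted are never revised.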

Hallucination is a significant issue with auto-regressive models like GPT-4 or ChatGPT, wherein the model generates outputs that are nonsensical or unrelated to the input provided. This occurs because the model predicts the next token based on patterns it has observed during training, and it may lack the context or knowledge needed to make accurate predictions.
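
As a minimal sketch of that mechanism (a toy bigram model, nothing like GPT-4's actual architecture, with hypothetical counts and helper names of my own), the loop below samples each next token purely from co-occurrence statistics. Note that nothing in it consults any source of truth, so it produces fluent-looking continuations whether or not they happen to be factual.

```python
import random

# Toy bigram "language model": next-token frequencies learned purely from
# observed co-occurrences. The counts below are hypothetical, for illustration.
BIGRAM_COUNTS = {
    "the":     {"capital": 4, "moon": 2},
    "capital": {"of": 6},
    "of":      {"france": 3, "mars": 1},
    "france":  {"is": 5},
    "mars":    {"is": 2},
    "is":      {"paris": 3, "olympus": 1},
}

def next_token(token: str):
    """Sample the next token from pattern statistics alone. There is no
    fact-checking step: the model only knows what tends to follow what,
    not whether the continuation is true."""
    candidates = BIGRAM_COUNTS.get(token)
    if not candidates:
        return None
    tokens, counts = zip(*candidates.items())
    return random.choices(tokens, weights=counts, k=1)[0]

def generate(prompt: str, max_len: int = 8) -> str:
    out = prompt.split()
    while len(out) < max_len:
        tok = next_token(out[-1])
        if tok is None:
            break
        out.append(tok)  # committed: earlier tokens are never revisited
    return " ".join(out)

print(generate("the capital of"))  # can happily assert "the capital of mars is paris"
```

The point of the sketch is only that fluency and truth are decoupled: the sampler completes patterns, and when the pattern statistics point somewhere the facts do not, the output is a confident-sounding hallucination.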

While AGI may still be somewhat distant, we are undeniably advancing closer to that goal with each successive development in AI research.

"The real risk with AGI isn't malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."

- Max Tegmark
