When not conjuring terrifying images of a Terminator, Artificial General Intelligence (also called strong AI, or simply AGI) refers to a hypothetical AI system possessing general intellectual abilities comparable to those of a human being. These would typically include capabilities such as natural language understanding, reasoning, and problem solving. Whether AGI is achievable remains an open question, but many experts believe it is possible, at least in principle. Making predictions about strong AI amounts to forming a hypothesis, but it is also a useful comprehension strategy.
I believe the following five goals must be achieved, in some capacity, before we see any glimpses of real AGI.
- Large multimodal models with corrigibility, coreference, correlation, and causality become mainstream (DeepMind’s Gato is a good step forward)
- Large models go beyond prompting to goal seeking, and become able to automatically fine-tune based on personas (domain, task, environment, and subject-matter expertise)
- AI silos are eliminated, and the existing (and future) AI ‘tribes’ (symbolists, connectionists, Bayesians, evolutionaries, and analogizers) are unified
- A well-defined, practical, quasi-working implementation of qualia (covering subjective experience, conscious perception, self-modulation, reportability, retrospection, and novelty) becomes generally understood and available
- Since nature isn’t classical, a generalized strong intelligence can’t be ‘classical’ either; it would have to be quantum mechanical. Yes, you can see where this is going…
Gaining the same general cognitive abilities to reason, learn, and solve problems as humans do is a multifaceted feat – a worthwhile one, nevertheless. All these milestones are critical on the roadmap to AGI, and there are various approaches that could be taken; however, significant challenges remain before we can achieve true human-like AI, or hear HAL 9000 say,
“I'm sorry, Dave. I'm afraid I can't do that.”
(HAL 9000, 2001: A Space Odyssey)