Sam Altman now says AGI, or human-level AI, is 'not a super useful term' — and he's not alone
Context:
Sam Altman, CEO of OpenAI, argues that the term 'AGI' (artificial general intelligence) is losing its meaning because its definitions are vague and AI capabilities are evolving rapidly. Although AGI has long been a goal of AI development, critics say the concept generates hype instead of directing attention to tangible progress in specialized AI. OpenAI and similar companies have been valued highly on the promise of achieving AGI, yet Altman acknowledges that their latest model, GPT-5, does not meet his own AGI criteria. Critics, including Wendy Hall, suggest that AI firms should adhere to globally agreed metrics to curb exaggerated claims, while Altman believes it is more practical to discuss AI in terms of progress levels. Nick Patience of The Futurum Group agrees, arguing that discussing specific AI capabilities is more useful than pursuing the abstract notion of general intelligence, which often serves more as a fundraising tool than a technical goal.
Dive Deeper:
Sam Altman of OpenAI suggests that 'AGI' is losing its usefulness as a term due to its ambiguous definitions and the fast-paced advancement of AI technologies, making it difficult to pinpoint what AGI truly encompasses.
The pursuit of AGI has historically driven funding and captured the public imagination, but industry experts like Nick Patience argue that its nebulous, sci-fi-inspired definition tends to obscure the significant progress being made in specialized AI fields.
OpenAI's recent release of GPT-5 was marketed as smarter and more efficient, yet it drew criticism as only an incremental upgrade, underscoring how hard it is to meet the expectations set by AGI aspirations.
Wendy Hall, a computer science professor, calls for AI companies to adhere to globally agreed metrics to improve transparency and accountability, likening the current landscape of exaggerated claims to a 'Wild West'.
Altman recommends focusing on progress levels rather than a binary AGI/non-AGI distinction, proposing that defining clear capabilities and advancements is more practical and beneficial for understanding AI development.
Nick Patience emphasizes that the ongoing narrative around AGI is often a distraction from real-world advancements and breakthroughs, which are more relevant and beneficial to various fields than the elusive concept of general intelligence.
Despite the ongoing debate around AGI, Altman remains optimistic about AI contributing to significant discoveries in specific domains, such as mathematics and science, in the near future.