arxiv:2510.18212

A Definition of AGI

Published on Oct 21
· Submitted by taesiri on Oct 27

Abstract

A quantifiable framework based on Cattell-Horn-Carroll theory evaluates AI systems across ten cognitive domains, revealing significant gaps in foundational cognitive abilities like long-term memory.

AI-generated summary

The lack of a concrete definition for Artificial General Intelligence (AGI) obscures the gap between today's specialized AI and human-level cognition. This paper introduces a quantifiable framework to address this, defining AGI as matching the cognitive versatility and proficiency of a well-educated adult. To operationalize this, we ground our methodology in Cattell-Horn-Carroll theory, the most empirically validated model of human cognition. The framework dissects general intelligence into ten core cognitive domains, including reasoning, memory, and perception, and adapts established human psychometric batteries to evaluate AI systems. Application of this framework reveals a highly "jagged" cognitive profile in contemporary models. While proficient in knowledge-intensive domains, current AI systems have critical deficits in foundational cognitive machinery, particularly long-term memory storage. The resulting AGI scores (e.g., GPT-4 at 27%, GPT-5 at 58%) concretely quantify both rapid progress and the substantial gap remaining before AGI.
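The composite-score idea in the summary can be sketched in a few lines. Note this is a minimal illustration, not the paper's actual methodology: the ten domain names below are illustrative CHC-style labels, and the equal weighting is an assumption; the paper defines its own batteries and scoring.

```python
# Illustrative sketch of a composite "AGI score": average per-domain
# proficiencies (each in [0, 1]) into a single percentage.
# DOMAIN NAMES AND EQUAL WEIGHTING ARE ASSUMPTIONS, not the paper's spec.

DOMAINS = [
    "general_knowledge", "reading_writing", "math",
    "reasoning", "working_memory",
    "long_term_memory_storage", "long_term_memory_retrieval",
    "visual_processing", "auditory_processing", "processing_speed",
]

def agi_score(domain_scores: dict[str, float]) -> float:
    """Equal-weight average of per-domain proficiencies, as a percentage."""
    missing = set(DOMAINS) - domain_scores.keys()
    if missing:
        raise ValueError(f"missing domains: {sorted(missing)}")
    return 100 * sum(domain_scores[d] for d in DOMAINS) / len(DOMAINS)

# A uniformly half-proficient profile yields a 50% composite score.
print(agi_score({d: 0.5 for d in DOMAINS}))
```

A "jagged" profile falls out naturally from this shape: a model can score near 1.0 on `general_knowledge` while a near-zero `long_term_memory_storage` drags the composite down.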

Community

arXiv explained breakdown of this paper 👉 https://arxivexplained.com/papers/a-definition-of-agi

Paper submitter

The paper defines AGI as an AI matching or surpassing the cognitive versatility and proficiency of a well-educated adult, measured across ten human-like cognitive domains.

The authors should seriously reevaluate their framework on GPT-5 Pro rather than relying on GPT-5's Auto mode, which has a sloppy router behind it. We are talking about achieving AGI capabilities and exposing a range of serious risks to humankind, so measuring the most powerful frontier model available would give a better sense of where we stand today.

it's a start

