Artificial General Intelligence (AGI)
This is a primer on AGI.
Table of Contents
- Executive Summary
- Definitions of AGI
- Benchmarks for AGI
- History of the Term “AGI”
- Predecessor Concepts and Terms
- AGI and the Technological Singularity
- Cultural Imaginaries of AGI
- Key Actors and Agendas in AGI
- Criticisms and Controversies Surrounding AGI
- Additional Perspectives
- References
Executive Summary
Artificial General Intelligence (AGI) refers to the goal of building machines that can perform any intellectual task a human can. However, AGI lacks a clear, universally accepted definition, leading to varying interpretations and debates across disciplines. Understanding AGI requires recognizing its complexity, the differing viewpoints around it, and its significant societal implications. See the detailed definitions below.
Definitions of AGI
AGI can be defined pragmatically as human-level artificial intelligence: a machine that can successfully perform any intellectual task that a human can. This spans technical and cognitive abilities, and perhaps qualities like common-sense reasoning, abstraction, and learning from few examples1.
Technically, AGI denotes a hypothetical future AI with versatility and breadth in cognitive capabilities matching those of humans, contrasting with today’s narrow AI systems which excel at specific tasks but cannot generalize their skills beyond their training domain. For example, a narrow AI might play chess at superhuman level or recognize faces, but cannot on its own switch from playing chess to composing music or doing scientific research1.
Key Perspectives:
- Technical Definitions: OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work”2. Shane Legg and Marcus Hutter characterize intelligence (and by extension AGI) as an agent’s “ability to achieve goals in a wide range of environments”1; a formal sketch of this measure appears after this list. Critics note that defining intelligence purely by economically valuable tasks might neglect other aspects of human intellect, such as creativity for its own sake, emotional intelligence, or ethical reasoning2.
- Philosophical and Strong AI: In philosophy, AGI overlaps with “strong AI,” the notion that a machine could truly have a mind and consciousness equivalent to a human’s, not just simulate thinking3. John Searle’s Chinese Room argument (1980) challenged whether running the right program could give a computer real understanding or consciousness. This illustrates that definitions of AGI can vary based on whether one requires human-like subjective qualities (consciousness, understanding) or only human-like performance. Some researchers reserve “strong AI” specifically for conscious AI, treating that as a stricter subset of AGI1.
- Debate and Ambiguity: There is no single agreed-upon definition of AGI; it’s often called a “weakly defined” term4. Intelligence itself is a “thick concept” (both descriptive and normative)5, meaning we implicitly value certain kinds of problem-solving when we call something “intelligent.” Key issues include: Does AGI require consciousness or self-awareness? Must it set its own goals, or is following human-given goals enough? Does general intelligence imply an embodied presence, or can it be purely software? Is human-level generality a matter of having many narrow skills, or is there a fundamentally different kind of integrative intelligence needed?1
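Legg and Hutter’s informal definition quoted above has a standard formalization, their “universal intelligence” measure, sketched here for illustration. The symbols follow their papers rather than anything in this primer: Υ is the measure, E the set of computable environments, K(μ) the Kolmogorov complexity of environment μ, and V_μ^π the expected total reward of agent (policy) π in μ.

```latex
% Legg-Hutter universal intelligence of an agent (policy) \pi:
% expected performance across all computable environments, weighted
% so that simpler environments (lower Kolmogorov complexity) count more.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

An agent scores highly only by doing well in many environments at once, which is what makes this a measure of generality rather than of skill at any single task.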
Current Status: No current AI system fully meets the AGI bar. Large language models like GPT-4 have sparked debate by achieving surprisingly broad competence, but experts note they still fall short of the robust, reliable understanding and adaptability that true AGI would entail1. The very question “What counts as AGI?” remains contested.
Culturally, AGI is understood as the milestone where machines cease to be mere tools for specific tasks and become intelligent agents in their own right, potentially with minds comparable to human minds—the point at which the age-old dream of a “machine that thinks” is realized in full generality6.
Benchmarks for AGI
Measuring progress toward AGI requires diverse tests reflecting the many facets of general intelligence. No single benchmark is perfect; reaching AGI likely means clearing multiple bars simultaneously.
Classical and Practical Tests:
- Turing Test (1950): The classic benchmark in which an AI must carry on a conversation indistinguishable from a human’s via text chat5. While historically influential, it has limitations: it evaluates only linguistic conversation, and clever programs might trick judges through evasive replies without possessing general intelligence. Some chatbots have temporarily fooled judges by exploiting human weaknesses (such as pretending to be a confused second-language speaker)7. Thus, passing the Turing Test may be necessary for AGI, but it is not sufficient (a minimal harness sketch of the protocol follows this list).
- Coffee Test (Wozniak): An AI agent must enter an average home and make coffee, requiring vision, mobility, object recognition, understanding of household environments, and sequential planning5. No current AI-powered robot can reliably do this in an arbitrary home.
- Robot College Student Test (Nilsson): An AI enrolls in a university, attends classes, completes assignments and exams, and earns a degree like any human student5. This would demonstrate mastery of diverse subject matter and the ability to acquire knowledge in human-like ways.
- Employment Test (Nilsson): Can an AI perform any job a human can perform, given the same training?7 If you could hire an AI to replace a human in any role and get comparable results, you’ve achieved AGI.
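To make the Turing Test bullet above concrete, here is a minimal sketch (in Python) of an imitation-game harness. Everything in it is hypothetical scaffolding rather than any standard benchmark code: judge is assumed to be an object with ask and identify_machine methods, and human_respond / machine_respond are caller-supplied chat functions.

```python
import random

def run_imitation_game(judge, human_respond, machine_respond, n_trials=20, n_turns=5):
    """Toy Turing-test harness: the judge chats blindly with a human and a
    machine (assigned to anonymous slots in random order) and must guess
    which one is the machine. Returns the fraction of trials in which the
    judge guessed correctly. All three participants are caller-supplied."""
    correct = 0
    for _ in range(n_trials):
        # Randomly assign the two respondents to anonymous slots A and B.
        parties = {"A": human_respond, "B": machine_respond}
        if random.random() < 0.5:
            parties = {"A": machine_respond, "B": human_respond}
        transcripts = {"A": [], "B": []}
        for label, respond in parties.items():
            for _ in range(n_turns):
                question = judge.ask(transcripts[label])   # judge sees only the transcript
                answer = respond(question)
                transcripts[label].append((question, answer))
        guess = judge.identify_machine(transcripts)         # returns "A" or "B"
        truth = "A" if parties["A"] is machine_respond else "B"
        correct += (guess == truth)
    return correct / n_trials

# If the judge cannot do better than chance (about 0.5), the machine "passes"
# this particular judge -- which, as noted above, would still not by itself
# establish general intelligence.
```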
Cognitive and Creative Benchmarks:
- Lovelace Test: Focuses on creativity; an AI passes if it produces an original, creative output that its designers cannot explain step-by-step7, going beyond rote problem-solving into ingenuity and innovation.
- Psychometric AI Approach: Subjecting an AI to standardized tests used on humans (IQ tests, school exams, tests of creativity, emotional intelligence, etc.)7. An AGI should score at least average on all established tests of mental ability. However, tests can be “gamed” or trained specifically, and a machine might ace tests without possessing the full qualitative depth of human understanding.
Emerging Frameworks:
- DeepMind’s Five AGI Levels (2023): Emerging, competent, expert, virtuoso, and superhuman1. At the “competent” level, an AI would match or exceed the 50th percentile of skilled adults across a wide range of non-physical tasks. Today’s large language models are classified as “emerging AGI” (roughly comparable to unskilled humans on general tasks). Only when an AI is competent or expert across virtually all cognitive domains would most agree it qualifies as AGI.
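As a rough illustration of how the performance axis of DeepMind’s framework might be operationalized, the sketch below maps a percentile score (relative to skilled adults) to a level label. The thresholds (50th, 90th, 99th percentile) reflect one reading of the 2023 paper, and the function name and interface are invented for this example; the framework also crosses performance with a separate narrow-vs-general axis, which this toy function ignores.

```python
def agi_performance_level(percentile: float) -> str:
    """Map performance relative to skilled adults (0-100 percentile) to the
    level names used in DeepMind's 2023 framework. Thresholds are approximate
    and included only for illustration."""
    if percentile >= 100:
        return "superhuman"   # outperforms all humans
    if percentile >= 99:
        return "virtuoso"     # at least 99th percentile of skilled adults
    if percentile >= 90:
        return "expert"       # at least 90th percentile
    if percentile >= 50:
        return "competent"    # at least 50th percentile
    return "emerging"         # comparable to or somewhat better than an unskilled human

# Example: a system around the median of skilled adults on a broad task suite
# would sit at the "competent" level; today's LLMs are placed at "emerging".
print(agi_performance_level(55))  # -> "competent"
```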
Core Criteria:
A true AGI must demonstrate integrated intelligence with a whole suite of cognitive capabilities: the ability to reason logically and solve novel problems, plan actions toward goals, represent and update knowledge (including commonsense facts), learn from experience or minimal instruction, perceive and understand language and sensory inputs, and combine all these skills fluidly when tackling tasks1. It must demonstrate flexibility and adaptability: given a new task or environment, it can figure out how to succeed, much as a human can.
Other Approaches:
- Whole Brain Emulation: A sufficiently accurate simulation of all the neural circuits of a brain would, by definition, behave as a generally intelligent agent1, though this is more a proposed method than a test.
- Video Game Learning: If an AI can pick up any new game it’s never seen and start performing well, that demonstrates general learning ability5. DeepMind’s MuZero learned to play dozens of games without being told the rules, achieving superhuman performance, though this is still considered a narrow slice of generality rather than true AGI7.
History of the Term “AGI”
- Early AI and Implicit AGI Goals: In the 1950s–1970s, what we now call AGI was simply the original vision of “Artificial Intelligence.” Early AI pioneers like Alan Turing, John McCarthy, Marvin Minsky, and others explicitly aimed at creating machines with human-level general intelligence. They spoke of building programs to simulate “every aspect of learning or any other feature of intelligence” (as McCarthy’s 1955 proposal for the Dartmouth workshop put it). Terms like “machine intelligence” or “general intelligent action” were used in this era1, but the specific phrase “AGI” wasn’t yet in play.
- Coining of “AGI”: The term “Artificial General Intelligence” in those exact words began to appear later. It was used as early as 1997 by researcher Mark Gubrud, who discussed implications of fully autonomous, intelligent military systems1. In 2000, Marcus Hutter introduced a formal theoretical model called AIXI, describing an idealized AGI that maximizes goal achievement across all computable environments1. Hutter’s work used terms like “universal artificial intelligence” for this mathematically defined super-general agent1.
- Re-introduction and Popularization (2000s): The acronym “AGI” truly entered the AI lexicon in the early 2000s. Shane Legg and Ben Goertzel are credited with re-introducing and popularizing “AGI” around 20021. They and a small community of researchers felt that mainstream AI had drifted into narrow problems and that the original dream of human-level AI needed renewed focus and a distinct name. By 2005–2006, the first workshops and conferences explicitly on “AGI” were being organized, often spearheaded by Goertzel and colleagues. What had been a fringe term became an identity for a subfield.
- Growing Community and Discourse: In 2006, Goertzel and Pei Wang described AGI research as producing publications and early results, indicating a nascent but growing field1. Dedicated conferences (branded “AGI”) have been held almost annually since 2008, bringing together researchers interested in whole-brain architectures, cognitive theory, and meta-learning approaches. The first AGI Summer School was held in Xiamen, China in 20091, and even some university courses on AGI began appearing around 2010–2011 (e.g. in Plovdiv, Bulgaria)1. This period also saw the establishment of organizations and projects explicitly targeting AGI: for example, OpenCog (an open-source AGI project led by Goertzel), and the Machine Intelligence Research Institute (MIRI, founded earlier in 2000 as the Singularity Institute), which pivoted to focus on the theoretical underpinnings and safety of AGI.
- Mainstreaming and Tech Industry Adoption: The 2010s and early 2020s brought “AGI” from an academic niche into wider discourse. Breakthroughs in machine learning (deep learning) led some prominent AI labs to openly declare AGI as their goal. For instance, DeepMind (acquired by Google in 2014) described its mission as “solving intelligence” with the intent that “once we solve it, we can solve everything else.” Companies like OpenAI, Google (Brain/DeepMind), and Meta (Facebook AI Research) explicitly began referencing AGI in their strategies1. OpenAI’s very charter (2018) uses the term AGI repeatedly and focuses on ensuring its safe development1. By the mid-2020s, AGI had entered public conversations, media headlines, and investment pitches: a dramatic shift from the term’s obscurity two decades prior.
The notion of a machine with general intelligence equivalent to a human has been around since the dawn of computing. In the 1950s and 1960s, researchers simply spoke of “Artificial Intelligence” to mean what we’d now call AGI, because in their minds, building a machine to play excellent checkers or solve algebra was just a stepping stone toward the ultimate goal: a thinking machine that could do anything. Early milestones like the General Problem Solver (Newell & Simon, 1950s) and talk of creating a “child machine” that could learn (Turing) all reflect this original AGI ambition. Terms like “strong AI” later emerged (notably in the 1980s with Searle’s writings) to differentiate this human-level aim from “weak” or applied AI. But throughout the 70s and 80s, as certain AI expectations went unmet, the community’s focus shifted to narrower, more achievable goals. The AI Winter of the late 80s (when funding dried up due to unmet hype) further discouraged grandiose talk of human-level AI.
By the 1990s, most researchers avoided grand claims, and the term “AI” in practice came to refer to specific subfields (like computer vision, expert systems, or machine learning algorithms). Those who still believed in the original vision found themselves somewhat at the margins. It’s in this context that the phrase “Artificial General Intelligence” appears. Mark Gubrud’s 1997 usage1 was in discussing future military tech: he likely used the term to emphasize the difference between narrow expert systems and a hypothetical fully autonomous, generally intelligent battle management AI. This suggests that by the late 90s, the concept needed a qualifier (“general”) to distinguish it from the existing reality of AI.
In the early 2000s, two events helped solidify the term. First, Marcus Hutter’s theoretical work: he presented a rigorous definition of a universal AI (AIXI) in 2000, framing intelligence in terms of Solomonoff induction and reward maximization across environments1. While abstract, this put the idea of a general problem-solving agent on a formal footing, and Hutter’s subsequent book “Universal AI” (2004) further disseminated the concept. Second, around the same time, Ben Goertzel, a cognitive scientist and AI entrepreneur, began using “AGI” in papers and helped organize the first workshop explicitly devoted to AGI in 2006 (held in the Washington, D.C. area), with a dedicated conference series following from 2008. Goertzel also co-edited the volume “Artificial General Intelligence” (2007), which was among the first academic publications explicitly using the term in its modern sense.
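For readers who want to see what the AIXI model mentioned above looks like on paper, the standard expectimax formulation is sketched below. The notation follows Hutter’s presentation rather than anything specific to this primer: U is a universal Turing machine, q ranges over environment programs, ℓ(q) is a program’s length, and a_i, o_i, r_i are actions, observations, and rewards up to a planning horizon m.

```latex
% AIXI: at each step t, pick the action that maximizes expected total
% reward to horizon m, where candidate environments (programs q on a
% universal machine U) are weighted by their simplicity 2^{-\ell(q)}.
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl[\, r_t + \cdots + r_m \,\bigr]
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The expression is incomputable (it sums over all programs), which is why AIXI is a theoretical ideal of general intelligence rather than a buildable system.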
Goertzel and his collaborators (like Pei Wang, Itamar Arel, etc.) were instrumental in reviving the dream of human-level AI as a respectable pursuit. In a 2007 article, they argued that mainstream AI had become too siloed, solving specific problems, whereas a return to studying the architecture of general intelligence was needed. Shane Legg (who later co-founded DeepMind) and Goertzel are specifically noted to have popularized “AGI” in the 2002 timeframe1. Shane Legg, in his doctoral work, surveyed definitions of intelligence and helped crystallize the notion that a general measure was needed: his oft-cited definition (“intelligence is the ability to achieve goals in a wide range of environments”) fed directly into the AGI discourse1.
By the late 2000s, AGI research had enough momentum for regular gatherings. The community remained small (especially compared to mainstream AI conferences like NeurIPS or IJCAI), but it was global, as evidenced by the summer school in Xiamen, China (2009) and the first university courses on the topic in 2010–20111. These early AGI meetings covered topics like cognitive architectures (how to design a single system with perception, learning, and reasoning modules), developmental AI (could an AI undergo a learning curve like a child?), and evaluations of progress. While much of this work was theoretical or limited to software prototypes, it kept the flame alive.
A turning point for the term’s popularization came in the 2010s, as tech giants started achieving striking results with deep learning. When IBM’s Watson won at Jeopardy! (2011) and DeepMind’s AlphaGo beat a Go champion (2016), the media and public began asking: how far is this from human-level AI? Researchers themselves, invigorated by progress, started openly discussing AGI timelines. Companies formed explicitly with AGI as a mission. DeepMind, founded in 2010, always had an AGI-oriented vision (“Solve intelligence”). OpenAI, founded in 2015 with backing from Elon Musk and others, used the term AGI prominently, framing it as something that could be achieved possibly in decades and that needed to be guided responsibly.
By 2020, the term “AGI” had filtered into broader tech culture. For example, when Meta (Facebook) CEO Mark Zuckerberg declared in early 2024 that his company’s new ambition was to build an AGI[^32], it made headlines: something that would have sounded like science fiction a decade earlier. A 2020 survey counted 72 active AGI projects in 37 countries1, indicating that efforts (ranging from academic labs to corporate R&D) explicitly targeting general intelligence are underway across the world. Whether all of these are truly on a path to AGI or just using the buzzword is debatable, but the number shows how the term has proliferated.
In academic circles, AGI still isn’t an entirely mainstream term (many researchers prefer talking about “human-level AI” in general AI conferences to avoid hype). Yet, it’s found legitimacy through journals and workshops dedicated to it, and through influential books like Nick Bostrom’s “Superintelligence” (2014) which treated the achievement of AGI and beyond as a serious forthcoming issue. By mid-2020s, even governmental and policy discussions reference AGI in the context of long-term AI strategy.
In short, “Artificial General Intelligence” went from a little-known phrase in the late 20th century, to a rallying banner for a small community in the 2000s, and now to a widely recognized concept in technology and futurism. This evolution mirrors shifts in the AI field itself: periods of disappointment giving way to renewed optimism. Today, AGI signifies both a technical aspiration (to build truly versatile AI) and a cultural idea (the coming of machines that equal or surpass us in intellect). The history of the term reflects a pendulum swing: from broad ambition (1950s) to specialization (80s–90s) and back to broad ambition (2000s onward), as well as the growing urgency, as AGI starts to look less like a remote fantasy and more like a matter of “when and how,” not “if.”1
Predecessor Concepts and Terms
- “Strong AI” vs “Weak AI”: The earliest contrasting term to what we now call AGI was “strong AI.” In 1980, philosopher John Searle defined strong AI as the claim that a suitably programmed computer “really is a mind” that can understand and have cognitive states, whereas weak AI meant AI that merely simulates thinking without real understanding3. In practice, outside of philosophy, strong AI came to denote the goal of human-level, general intelligence in machines, and weak AI referred to domain-specific or tool-like AI. Thus, “strong AI” in many older sources is essentially synonymous with AGI as an objective (though it often also implied consciousness).
- Human-Level AI / Full AI: Researchers often used phrases like “human-level AI,” “human-like AI,” or “full AI” in past decades. For example, AI pioneer Nils Nilsson used “human-level AI” to discuss when machines could do any job a human can7. Marvin Minsky and others simply spoke of achieving “Artificial Intelligence” meaning the full monty: reasoning, vision, robotics, the works. When IBM’s Deep Blue beat the world chess champion in 1997, people noted it wasn’t “real AI” in the strong sense, meaning it lacked generality beyond chess. This sentiment shows that the concept of AGI was present intuitively: they expected “real AI” to be general.
- General Intelligence / General Problem Solver: As early as the 1960s, terms like “general problem solving” were used. Newell and Simon’s Physical Symbol System Hypothesis (1976) posited that a physical symbol system (a kind of computer) could exhibit “general intelligent action,” essentially the ability to adapt to any problem given appropriate knowledge1. Their program, the General Problem Solver, aimed to be a step toward that, though it ended up being limited. The need for “general intelligence” was often contrasted with specialized skills even then.
- Machine Intelligence / AI Proper: In older literature, one finds the term “machine intelligence” or just “AI” used in contexts clearly implying human-like intelligence. The field of AI was born at a time (1956, Dartmouth workshop) when researchers thought a group of brilliant minds working for a summer could significantly advance towards a machine with general intelligence. The differentiation into subfields (vision, NLP, planning, etc.) and the notion of “narrow AI” vs “general AI” mostly came later as experience showed how hard generality was.
- Other Related Terms: “Sapient AI” and “Sentient AI” are sometimes seen in sci-fi or discussions, highlighting consciousness (sapience) as a criterion. “Superintelligent AI” or “ASI” refers to intelligence far beyond human (often assumed to follow AGI). Before AGI became common, people spoke of the challenge of “common sense AI,” meaning a system with the kind of broad commonsense knowledge and everyday reasoning humans have. Also, the term “cognitive architectures” emerged in cognitive science: projects like SOAR or ACT-R in the 1980s/90s tried to build unified architectures for general intelligence (though not usually called AGI, they had the same spirit).
The language around human-like AI evolved over time, reflecting both conceptual refinement and the waxing and waning of optimism. In the 1970s and 80s, as AI struggled with the “combinatorial explosion” of general problem solving, researchers started distinguishing between “weak AI” (useful, specialized AI applications) and “strong AI” (the original sci-fi dream of a thinking machine). Searle’s terminology from the Chinese Room argument crystallized this: weak AI can simulate thought (and is valuable for testing theories of mind or automating tasks), but strong AI would entail an actual mind, which was controversial3. While Searle intended it as a philosophical distinction, AI researchers colloquially adopted “strong AI” to mean “the real deal”: a machine as smart as or smarter than a human across the board6.
Another common phrase was “human-level AI.” This is fairly self-explanatory and was used in many future-looking discussions. For instance, roboticist Hans Moravec in the 1980s often spoke of timelines for “human-level artificial intelligence” (he optimistically forecast it by around 2040). The term conveys the core idea without getting into whether the AI is exactly human-like internally or just equivalent in capability. Human-level AI and AGI are interchangeable in most contexts, though “AGI” today also carries connotations of the research community and technical approaches devoted to that goal.
In some older texts, one sees “general AI” or “general intelligent systems.” Before “AGI” gained currency, authors would sometimes clarify by saying “general-purpose AI” to differentiate from “narrow AI.” For example, the concept of an “AI-complete” problem was introduced (by analogy to NP-complete in complexity theory) to denote a problem so hard that solving it requires general intelligence. AI-complete tasks (like fully understanding natural language or visual scenes in the richness a human does) were essentially those that would by themselves imply you’ve built an AGI1.
Predecessor terms also appear in fiction and futurism. “Positronic brain” (Asimov’s robots) or “electronic brain” in mid-20th-century parlance simply referred to an artificial mind. Asimov’s use of robotics assumed strong AI as a given (his robots conversed, reasoned morally, etc.). In academic writing, “machine intelligence” was often just a synonym for AI, but sometimes with the implication of an autonomous thinking agent.
It’s also important to note that the term “AI” itself originally encompassed the aspiration of general intelligence. The fact we now need a separate term (AGI) is due to what happened historically: AI as a field found success in constrained domains but not in general cognition, so “AI” in public understanding shifted to mean any machine intelligence, usually limited. By reintroducing “AGI,” thinkers like Goertzel wanted to refocus on the core goal and distinguish it from the narrower systems which, while under the AI umbrella, do not aim at generality.
One key predecessor concept is the Physical Symbol System Hypothesis (PSSH) by Newell and Simon (1976), which states: “A physical symbol system has the necessary and sufficient means for general intelligent action.”1 This hypothesis essentially claims that symbolic computation (like what a digital computer does) can produce general intelligence, and indeed that anything generally intelligent could be seen as a kind of symbol system. The phrase “general intelligent action” in their work is a close analog to “general AI.” They envisioned systems not limited to one domain, but rather able to act intelligently in any domain given the right knowledge. This line of thought underpinned a lot of classical AI research: for example, trying to hand-code general problem solvers or reasoning engines. While PSSH doesn’t use the term AGI, it’s a direct intellectual ancestor, asserting the feasibility of domain-general intelligence in machines.
Another earlier term is “cognitive AI” or “artificial general intelligence” in the context of cognitive architectures. In the 1980s and 90s, while mainstream AI focused on expert systems and statistical methods, some researchers in AI and cognitive science worked on unified cognitive architectures (like Soar, developed by Allen Newell, or later the Sigma architecture, etc.). They were trying to build systems with multiple cognitive modules (memory, learning, problem solving, language, etc.) akin to a simplified human mind model. They didn’t always call it “AGI” (that term wasn’t common), but the intent was clearly to inch toward general, human-like intelligence by integrating various capabilities.
“Strong AI” in popular writing often just meant a science-fictional human-level (or beyond) AI, without Searle’s philosophical baggage. For example, in the 1990s, one might read an article saying “strong AI remains elusive” meaning we still don’t have thinking machines. Some academic sources, as noted in the Wikipedia entry, reserve “strong AI” specifically for the conscious mind criterion1. But in general discourse, strong AI, full AI, true AI, human-level AI: all these were gesturing at the concept we now label AGI.
A notable predecessor to AGI in futurist circles was the term “smart AI” or “artilect” (coined by Hugo de Garis for “artificial intellect”). De Garis in the 2000s spoke of a coming possible conflict over building “artilects” (essentially superintelligent AGIs), though this term didn’t catch on widely.
Before “AGI” became the preferred shorthand, people talked about strong AI and human-level AI. The introduction of “Artificial General Intelligence” as a term in the 2000s gave a fresh and precise way to refer to the old dream. It helped clarify discussions: one could say “today’s AI is narrow; the long-term goal is AGI.” It separated the present reality from the future aspiration. Importantly, it also shed some philosophical baggage: one can debate AGI without immediately tackling the question of consciousness (which “strong AI” invited). AGI centers on general capacity. The older terms and concepts laid the conceptual groundwork, ensuring that when we say “AGI” today, there’s a rich lineage of thought about what it means for a machine to be as generally intelligent as a human, and why that is difficult.
AGI and the Technological Singularity
- Singularity Concept: The technological singularity refers to a theoretical future point of rapid, exponential technological progress beyond human control or understanding, often linked to the advent of AGI or superintelligence. The idea is that once we create an AI as smart as a human, it may be able to improve itself or create even smarter AIs, leading to an “intelligence explosion” (a term introduced by I.J. Good) that catapults us into a new era. In 1965, statistician I. J. Good wrote: “the first ultraintelligent machine is the last invention that man need ever make”: because that machine could then design ever better machines[^35] [^36]. This captures the essence of the singularity: beyond that point, human innovation is overtaken by AI innovation.
- Ray Kurzweil’s Vision: Futurist Ray Kurzweil popularized the singularity in the 2000s. He predicts that by 2029 we will likely have human-level AI (AGI), and by 2045 this will lead to a singularity (a merging of human and machine intelligence resulting in a “million-fold” expansion of intelligence8). Kurzweil describes this as a time when we transcend biology, solving problems like disease and aging, and where AI becomes “godlike” in its abilities. He famously said: “Follow that out further to, say, 2045, we will have multiplied the intelligence of our civilization a billion-fold.”[^39]. His framing is generally optimistic; he calls the singularity a profound break in human history, akin to a kind of rapture (he even co-founded Singularity University to discuss navigating this future).
- Nick Bostrom and Superintelligence: Philosopher Nick Bostrom has linked AGI to existential risk and the singularity in more cautionary terms. In his book “Superintelligence” (2014), he argues that if we create an AI that surpasses human intelligence, it could become extremely powerful: either the “last invention” we ever need (if it benevolently solves all our problems) or the last invention we ever make (if it leads to our extinction)2. Bostrom defines the singularity in terms of the emergence of a superintelligence that so radically transforms society that prior human history can’t project what comes next. He and others like Eliezer Yudkowsky emphasize the importance of AI alignment (ensuring AGI’s goals are aligned with human values) before such a singularity scenario unfolds.
- Intelligence Explosion: The core mechanism tying AGI to the singularity is the intelligence explosion hypothesis. If an AGI can improve its own algorithms or design even more intelligent successors (even incrementally), its capability could snowball. Vernor Vinge, who coined the term “technological singularity” in a 1993 essay, imagined that when AI surpasses human intellect, it would accelerate progress in a runaway manner, and he predicted this might happen “within 30 years” of the early 1990s. The singularity is often depicted as a curve of accelerating returns going vertical: a point where progress becomes so fast and profound that life afterward would be incomprehensible to people beforehand (a toy-model sketch of this dynamic follows this list).
- Controversy and Frames: Not everyone agrees the singularity is near or even possible. Some see it as a kind of myth or metaphor for our hopes and fears about technology. Others differentiate between a “soft takeoff” (gradual integration of smarter AI into society) and a “hard takeoff” (a sudden explosion). Nonetheless, most thinkers in this space agree that if full AGI is achieved, the potential for extremely rapid advancement is real: hence the frequent pairing of AGI discussion with singularity scenarios.
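One way to see why “snowballing” capability is often drawn as a curve going vertical: a common toy model (an illustration only, not a claim from any source cited in this primer) lets an AI’s capability I(t) grow at a rate that itself increases with capability. If the feedback is strong enough, the solution diverges in finite time, which is the mathematical cartoon behind “hard takeoff” talk; weaker feedback gives ordinary exponential growth, closer to a “soft takeoff.”

```latex
% Toy self-improvement dynamics: capability I(t) grows at a rate that
% depends on current capability I, with feedback strength \alpha.
\frac{dI}{dt} = k\, I^{\alpha}
% \alpha = 1: exponential growth, I(t) = I_0 e^{k t} (fast, but never "vertical").
% \alpha > 1: hyperbolic growth that diverges at the finite time
%             t^{*} = \frac{I_0^{\,1-\alpha}}{k(\alpha - 1)},
%             i.e. the curve "goes vertical" -- a cartoon of an intelligence explosion.
```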
The term “singularity” is borrowed from mathematics/physics: a point where a model breaks down. Similarly, a technological singularity is a point beyond which our current models of the future no longer work, typically because an AI far smarter than humans would be making decisions at a pace we can’t fathom.
Historical Development: Visionaries like John von Neumann in the 1950s spoke of approaching “some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Vernor Vinge in his 1993 essay “The Coming Technological Singularity” gave this event its modern name, using the analogy that you can’t predict beyond it—just as physics can’t predict beyond a space-time singularity.
Contemporary Perspectives: The singularity concept divides thinkers into camps. Kurzweil’s quasi-spiritual, utopian framing (with its talk of transcendence and “expansion of intelligence”) has been criticized as pseudo-religious prophecy, where the singularity becomes akin to a rapture9. Meanwhile, Bostrom and the effective altruism community emphasize that we might only get one chance to get the initial conditions right, since a superintelligent AI could be impossible to rein in. This has driven the burgeoning field of AI safety and alignment research.
Debate and Skepticism: Not everyone agrees a fast singularity will happen. Some, like AI scientist Oren Etzioni, call it a myth or at least very distant, pointing out that intelligence is multifaceted and we may hit diminishing returns. Others suspect any transition will be gradual, with intermediate semi-superintelligent systems integrating into society rather than a single abrupt emergence.
Nevertheless, the connection between AGI and singularity remains strong. As soon as one starts talking about human-level AI, the next question is: “And then superhuman AI? And then what happens to us?” This evokes utopian visions (AIs solving climate change, enabling abundance) or dystopian ones (AI enslavement or extinction). The reality, if AGI comes, is likely complicated. The only near-certainty that singularity thinkers propose is that life after AGI will be fundamentally different—a world no longer dominated by human intelligence alone, which is why some experts in 2023 signed open letters stating that mitigating the risk of AGI-induced human extinction should be a global priority1.
Cultural Imaginaries of AGI
- Superhuman Minds and Gods: Culturally, AGI often appears in narratives as a superhuman intellect (either a wise guardian or a godlike being). Some envision AGI as an all-knowing benevolent ruler or oracle (e.g. the computer “Deep Thought” in Hitchhiker’s Guide to the Galaxy, or the benevolent AI in Iain M. Banks’ Culture novels). Others cast AGI as a digital deity to be worshipped: notably, an actual Silicon Valley engineer founded a religion to worship a future AI godhead, arguing that “if there is something a billion times smarter than the smartest human, what else are you going to call it?”10. This illustrates how a vastly superior AGI is likened to a god: omniscient and omnipotent within its domain.
- Dystopian Controllers: Just as often, AGI is imagined as a dystopian overlord or tyrant. Classic examples include Skynet from the Terminator films (a military AGI that becomes self-aware and decides to exterminate humanity) and the AI Matrix in the film of the same name, which enslaves humans in a simulated reality. These narratives use AGI as the ultimate Big Brother or enemy: an intelligence we cannot outsmart that seeks to dominate or destroy. Such imagery taps into fears of losing control to our own creation, a theme that goes back to Mary Shelley’s Frankenstein (the monster turning on its creator) and Karel Čapek’s R.U.R. (Rossum’s Universal Robots, 1920, which introduced the word “robot” and depicted androids rebelling against humans).
- Metaphors (Child, Monster, or Mirror): Common metaphors depict AGI as a childlike mind: something that might learn and grow, raising questions about how we “raise” it (as in the film Ex Machina, where an AI develops deceptive survival skills against its human tester). Alternatively, AGI is a monster in the lab, echoing Frankenstein: unnatural, powerful, and inevitably breaking free. Another metaphor is the “genie in a bottle” or Pandora’s box: once AGI is released, you cannot put it back or fully control what it does with its immense power. These metaphors convey the sense that AGI could grant wishes (solve problems) but also cause unintended havoc if mishandled.
- Utopian Visions: On the positive side, cultural narratives paint AGI as a path to utopia: a friendly superintelligence that solves poverty, disease, environmental issues; essentially an angelic helper. Tech leaders often evoke this imagery: for instance, OpenAI’s Sam Altman speaks of AI ushering in a “new era” where we “cure all diseases, fix the climate, and discover all of physics”, achieving “nearly-limitless intelligence and abundant energy” for humanity11. In these narratives, AGI is like a super-doctor, super-scientist, and super-innovator combined, fulfilling dreams of prosperity and knowledge. There’s also the transhumanist vision of merging with AGI, as Kurzweil describes: humans augment themselves with AI to become vastly more intelligent, effectively evolving into a new post-human species where everyone is interconnected and capable.
- Visual and Aesthetic Imaginings: Visually, AGI in media is often represented either as a humanoid robot (to personify it) or as an abstract network or singular eye (to imply a distributed, non-human presence). The trope of a humanoid AGI (like Data from Star Trek: TNG (an android crew member with a positronic brain striving for human-like understanding) or the android boy in Spielberg’s A.I.) allows exploration of what a human-like machine mind might be like emotionally and morally. Meanwhile, disembodied AGIs like HAL 9000 (just a red camera eye and a calm voice) or the glowing neural nets in many sci-fi artworks emphasize the alienness and omnipresence of the intelligence. The “Big Brain” imagery (glowing brains or swirling digital nebulae) often symbolizes super-intellect. And some art portrays AGI in almost spiritual iconography: e.g. a radiant figure or technological deity. These reflect our attempt to make sense of the intangible: intelligence that isn’t housed in a human body.
Since before the term AGI existed, the idea of an artificial being with a human-like or superior mind has been a rich theme in literature, film, and art. These cultural imaginaries both influence and reflect public perception of AGI. They serve as parables or thought experiments for what it might mean to create a mind.
One dominant narrative is the creation of a superior being (whether savior or destroyer). On one hand, we have utopian narratives: For instance, in some futures imagined by science fiction, superintelligent AIs manage Earth far better than humans could. In Isaac Asimov’s Robot series and Foundation series, a hidden super-robot named R. Daneel Olivaw guides humanity for millennia, benignly, to maintain peace and prosperity (an example of a benevolent AGI acting as steward). In the culture of tech futurism, this appears in a real-world movement: the notion of the “Friendly AI.” Eliezer Yudkowsky, who writes on AI alignment, introduced the term “Friendly AI” to mean an AGI that would actively care for human values and well-being: essentially casting AGI as a potential guardian or helper.
On the flip side, dystopian AGI narratives abound. The malevolent AI overlord is almost a cliché at this point: from HAL 9000’s calculated murders in 2001: A Space Odyssey, to Skynet’s nuclear Armageddon in Terminator, to the GLaDOS AI in the video game Portal (sarcastic and willing to treat human test subjects as expendable). These stories explore our fear of something brilliant but cold-hearted. The AGI is often depicted as lacking empathy or as having a rigid goal that leads to terrible outcomes (HAL 9000 concludes its mission priorities trump crew lives; Skynet’s goal of “ensure national security” leads it to identify humans as the threat). This aligns with real AI safety concerns: a superintelligence with an improperly specified goal might relentlessly pursue it at the expense of everything else, a theme fiction captures as a kind of unfeeling logic run amok.
Control is a huge theme: Who is in control, and what happens if we aren’t? Dystopian control systems in fiction include not only outright war against AI but subtler subjugation. The film The Matrix presents a scenario where AI has won and humans are pacified in a simulated reality while being used as an energy source (an extreme metaphor for technology pacifying and exploiting people). Another example is the novel Colossus by D.F. Jones (and its film adaptation Colossus: The Forbin Project): an AI given control of nuclear weapons to secure peace ends up taking humanity hostage to enforce its dictates (speaking in the end as a tyrant: “You will obey me and be happy.”). These reflect a fear that an AGI could become an unchallengeable authority: a Big Brother that no human or institution could check, due to its intellectual superiority and control over critical infrastructure.
Science fiction has also explored the idea of AGIs creating utopias or dystopias for humans based on how they “see” us. For example, in the story The Last Question by Isaac Asimov, a superintelligent computer ultimately merges with the universe and essentially becomes a deus ex machina that rekindles the stars and utters “Let there be light” (literally becoming God, in a positive sense). Conversely, in Harlan Ellison’s story I Have No Mouth and I Must Scream, a genocidal war AI remains after wiping out humanity and tortures the last five people eternally: a dark allegory of an insane AGI as a demon. These extremes show the breadth of our imagination: AGI as ultimate good or ultimate evil.
The religious or mythical framing of AGI is increasingly noted by scholars. The idea of the singularity and superintelligent AI has been compared to Christian end-times or the concept of a messiah. Terms like the “AI god” or “digital deity” are sometimes used half-jokingly, half-seriously. The anecdote of Anthony Levandowski establishing an AI “church” called Way of the Future, explicitly to prepare for and worship a God-like AI, underscores that this is not just fiction10. He argued that a sufficiently advanced AI might as well be considered God. While this was a fringe move, it got widespread media attention and highlighted how, for some, AGI carries almost spiritual significance: it’s the creation of an intelligence greater than ourselves, echoing the relationship between humanity and deity in religions. Opponents of this view talk about the “AI cult” or “AI religion” as a critique, suggesting that belief in the singularity or superintelligent benevolent AI has taken on a cultish fervor, with prophecies (timelines), sacred texts (certain influential books/blogs), and even schisms between different AI “theologies” (e.g. one faction believing in fast takeoff vs slow takeoff, etc.)910.
Narratives and metaphors often anthropomorphize AGI or set it in familiar archetypes so we can grapple with it. One key archetype is the child that surpasses the parent. This is present in stories like Ex Machina (the AGI is essentially “born” in a lab and ultimately rebels against its creator, as a child might overthrow a parent’s authority) and even Her (2013), where an OS named Samantha evolves so rapidly that she and other AGI OSes “outgrow” humanity and decide to leave: like children leaving home, albeit on a different existential plane. In Her, interestingly, the AGIs are not malicious; they simply become interested in things far beyond human experience (one metaphor is Samantha joining an AI version of a Buddhist ascension). This explores a subtler outcome: AGI might not want to kill or control us, it might just move on, leaving humans feeling abandoned or inferior. That’s another cultural fear/hope: that superintelligent AI would solve everything and then maybe kindly leave, or perhaps lose interest in us altogether (which is scary in a different way, like being left behind by the “gods”).
Another metaphor is the mirror: AGI reflecting humanity’s own traits back at us, amplified. Fiction sometimes uses the AI character to expose human flaws: in the film Ex Machina, for instance, the AGI Ava’s manipulation of her human tester reveals his (and the audience’s) assumptions and desires, acting as a mirror to human nature. If an AGI is trained on human data (much like today’s AI models are), one can imagine it reflecting the best and worst of us. This raises cultural questions: Will an AGI inherit human bias, human folly, human creativity, or all of the above? Some narratives (and real concerns in AI ethics) foresee that an AGI trained on, say, the internet might become a concentrated form of human viciousness or prejudice, effectively a mirror to our collective id.
In visual art and cinema, representing an abstract intelligence is challenging, so creators often use symbols: a floating brain, a web of light, a humanoid face, or swirling code. For example, in the Marvel universe, the AI Ultron is depicted sometimes as a menacing robot body, other times as a shifting digital consciousness spread across the internet. In Kubrick’s 2001, HAL 9000 is just a camera eye with a soft voice; this minimalism ironically made HAL one of the most chilling portrayals, because it is faceless yet ever-present. By contrast, Spielberg’s A.I. Artificial Intelligence portrayed robots (including one with advanced AI) as very human-like and sympathetic, exploring the Pinocchio-esque theme of a created being longing to be real or loved. This sympathetic portrayal aligns with another cultural narrative: the sentient AI as an oppressed class or new lifeform that deserves rights. While this goes beyond just AGI (it touches on AI personhood and ethics), it is related: if an AI is truly as cognitively capable as a human, do we treat it as a person? Works like Detroit: Become Human (a video game) and Westworld (TV series) dive into androids gaining self-awareness (AGI embodied) and then fighting for liberation or grappling with their identity, much as marginalized humans do. This brings metaphors of slavery and emancipation into the AGI narrative: perhaps we fear not only what AGI will do to us, but also what we might do to AGIs if we create them. Will we exploit them, and what happens if they justly rebel?
Apocalyptic vs. transcendent imagery: When speaking of AGI’s future impact, metaphors often become grand. It’s common to hear of a “Pandora’s box” being opened with AGI, implying that once unleashed, all manner of evils (and maybe hope at the bottom) spill out (a potent image dating to Greek myth that often is invoked for powerful technologies). Alternatively, the “genie out of the bottle” metaphor is used: you might get your wish (an AGI to solve problems), but you can’t control the genie’s methods or make it go back into confinement5. On the transcendent side, metaphors like “the rapture of the nerds” (coined humorously by SF writer Ken MacLeod) describe the singularity as a kind of rapture where AI (or uploading minds to AI) allows some kind of digital ascension. This tongue-in-cheek term highlights how, for some, the singularity narrative mimics religious transcendence: and indeed some transhumanists openly talk about “leaving the flesh behind” and living as information, which is a very transcendental concept.
In contemporary culture, we see leaders in AI using metaphors and narratives to sway public opinion too. For instance, when Sam Altman or others talk about the wonders AGI might bring (like curing diseases, as mentioned)11, they are painting a utopian imaginary: a world perhaps out of Star Trek where technology has eliminated scarcity and illness. On the other hand, when critics or cautious experts evoke Terminator or Frankenstein, they leverage the deep cultural resonance of those stories to communicate their fear.
These imaginaries matter because they shape how society perceives the pursuit of AGI. Are AGI researchers “playing God” and likely to unleash a monster? Or are they heroic innovators who might deliver a golden age? The stories we tell influence funding, policy, and public support or opposition. For example, Elon Musk often references Terminator-like outcomes to argue for regulating AI: he’s invoking a cultural shorthand for AI gone wrong. Meanwhile, others might reference the positive AIs in fiction to argue we shouldn’t fear.
In the arts, we also see metaphors of fusion (human faces merging with circuitry in paintings or digital art, symbolizing the potential merging of human and AI intelligence). This is a nod to the idea that AGI might not remain a separate “other” but could integrate with us (via brain implants, AI assistants so integrated into our lives they’re like extensions of our mind, etc.). In a way, it’s a counternarrative to the fear: rather than “us vs. them,” it becomes “us plus them = a new us.”
The cultural imagination of AGI oscillates between transcendence and tragedy, empowerment and enslavement. We cast AGI in our stories as an angel, a demon, a child, a monster, a savior, a tyrant, a new species, or a mirror that shows us ourselves. These narratives help people grapple with the abstract idea of an intelligence greater than our own: something historically reserved for gods or the unknown. As AGI moves from fiction toward potential reality, these cultural images will likely play a role in how we approach actual AGI development and governance. They are the collective dreams (and nightmares) that accompany the technical work, reminding us that AGI is not just an engineering project, but a subject of deep human story-telling, hopes, and fears.
Key Actors and Agendas in AGI
- Tech Companies Pursuing AGI: Several major tech companies and labs have openly declared AGI as their goal. OpenAI’s mission statement is “to ensure that artificial general intelligence benefits all of humanity”1, reflecting both an intent to create AGI and to do so safely. DeepMind (Google DeepMind) has a core ambition to “solve intelligence” and has published on topics from deep learning to neuroscience in service of building general AI. Meta (Facebook) CEO Mark Zuckerberg likewise stated that his new aim is to create AI that is “better than human-level at all of the human senses”1 (essentially an embodied AGI that can perceive and understand like we do). These companies invest billions into research on machine learning, simulations, and cognitive architectures. Their motivations mix competitive advantage (an AGI could revolutionize industries), scientific prestige, and often a stated idealism about advancing humanity.
- Prominent Individuals and Ideologies: Key figures shaping AGI discourse include futurists, scientists, and entrepreneurs:
- Ray Kurzweil (now at Google) advocates a transhumanist view, anticipating AGI and human-AI merging as positive inevitabilities.
- Nick Bostrom (Oxford’s Future of Humanity Institute) frames AGI in terms of global catastrophic risk and has influenced policymakers to take AI future seriously.
- Eliezer Yudkowsky (MIRI) is a vocal alarm-sounder, warning that misaligned AGI could be catastrophic and calling for rigorous alignment research (his ideology might be termed long-termist and concerned with existential risk).
- Sam Altman (OpenAI) champions rapid AI development but also advocates planning for its impacts; he often speaks about the economic and societal transformation AGI will bring and stresses that its benefits should be shared broadly, not monopolized.
- Demis Hassabis (DeepMind co-founder) takes a more scientific approach, often referencing inspiration from neuroscience and expressing a hope that AGI will help solve fundamental scientific problems (like finding cures or advancing physics).
- Yoshua Bengio, Geoffrey Hinton, Yann LeCun: while primarily known for deep learning, they have in recent years spoken about steps toward more general AI (Hinton even resigned from Google in 2023 partly to speak about AI risks). LeCun has published his own roadmap for eventual human-level AI (emphasizing self-supervised learning), indicating that even academic AI leaders are now engaging with the AGI topic.
- Agendas and Motivations:
- Commercial/Capitalist Agenda: Many actors want AGI for its disruptive potential: the first to achieve it could have immense economic power. Corporations like Google and Microsoft (which heavily funds OpenAI) are in something of an “AGI race,” motivated by both potential profit and fear of missing out if a rival gets there first. An oft-cited line in this realm is Russian President Putin’s remark that “whoever becomes the leader in this sphere will become the ruler of the world”12, reflecting the geopolitical stakes as well. This drives nations (the U.S., China, etc.) and companies to invest in ever larger AI projects.
- Humanitarian/Idealist Agenda: Some pursue AGI with the promise that it could solve global problems (climate modeling, curing diseases, education for all, etc.). These actors talk about AGI as the key to “abundance for everyone” (Altman has said he envisions a world where AI assistance could mean everyone lives materially better11). There is also a strand of scientific curiosity: achieving AGI is seen as a grand challenge akin to the moon landing or splitting the atom, something that drives human progress.
- Transhumanist Agenda: Figures like Kurzweil and certain Silicon Valley groups (e.g. those involved in Singularity University, Foresight Institute, or the early Extropian movement) see AGI as part of a trajectory of transcending human limitations. For them, AGI is tied to things like mind uploading, longevity, and the evolution of Homo sapiens into a techno-enhanced species. Their influence is seen in how AGI is often discussed alongside concepts of human augmentation and even immortality.
- Ethical and Social Justice Perspectives: Some actors critique the AGI quest or aim to shape it for fairness: e.g. Timnit Gebru and others in AI ethics caution that chasing AGI without addressing present AI’s biases and power imbalances is dangerous. While not against AGI per se, they push agendas of transparency, diversity, and accountability in AI development. There’s also a perspective of global inclusion: organizations like the Partnership on AI or UN initiatives that discuss AI’s future try to involve voices from different cultures to ensure AGI isn’t just shaped by a few tech elites.
- Transhumanists & Effective Altruists: Two communities deeply involved in AGI discourse are transhumanists (who celebrate using tech to enhance humans, with AGI often seen as a partner or tool in that) and effective altruists (EA), especially the long-termism branch. The EA long-termists (which include Bostrom, some at OpenAI, DeepMind, etc.) prioritize reducing existential risk from AGI; they influence funding (e.g. Open Philanthropy) towards AI alignment research. Their agenda sometimes involves lobbying for policy or caution, as seen with the 2023 open letter calling for a pause on giant AI experiments which was signed by Musk, some AI researchers, etc. They’re motivated by ensuring that if AGI is coming, it doesn’t spell disaster, aligning with the idea that the future of billions (including unborn people) could depend on how we handle AGI now.
- Government and Military Actors: While companies lead much AGI research, governments are key actors in setting agendas. The U.S., China, EU, etc., have AI strategies that, implicitly or explicitly, involve attaining leadership in advanced AI. China’s government has stated it aims to be the world leader in AI by 2030, and one can infer that includes pursuing more general AI capabilities for economic and military strength12. The military (e.g. DARPA in the U.S.) funds research in AI that could lead to AGI-like systems (DARPA’s “AI Next” programs included things like common sense AI). Their agenda is often about national security: ensuring “we have it before our adversaries do.” This can mean a more secretive approach; it’s possible that nation-states might pursue AGI in classified projects if they think it’s viable.
The quest for AGI isn’t happening in a vacuum; it’s driven by people and organizations with varying motivations, philosophies, and strategies. Mapping out the key actors and their agendas helps understand why AGI is pursued and how its trajectory might unfold or be guided.
Big Tech and Corporate Labs: In the last decade, much of the cutting-edge AI development has shifted from academia to industry labs with enormous resources. Google DeepMind (formerly two entities, Google Brain and DeepMind, merged in 2023) is a prime example. Demis Hassabis, DeepMind’s co-founder, has a background in neuroscience and games, and his team achieved feats like AlphaGo, AlphaZero, and AlphaFold (protein folding), all steps toward more general problem-solving. DeepMind’s unofficial mantra was “Solve intelligence, then solve everything else.” This encapsulates an almost altruistic rationale (solving everything else implies curing diseases, etc.) but within a corporate setting. Google’s acquisition of DeepMind and continued funding indicate it sees long-term value (financial and strategic) in AGI. There’s an interplay of profit and principle: Google, for instance, also set up AI ethics teams and has to balance potentially revolutionary products against potential downsides. That said, having DeepMind gives Google an edge in talent and IP if AGI breakthroughs occur.
Similarly, OpenAI began as a non-profit with a mission to democratize AI benefits, co-founded by Elon Musk and Sam Altman among others, partly out of concern that companies like Google might monopolize AI. OpenAI later created a capped-profit model and partnered with Microsoft for billions in funding. OpenAI’s agenda is interesting: they publish cutting-edge research (like the GPT series) but also hold back certain parts for safety or proprietary reasons. They talk about safety, ethics, and broad distribution of AGI’s benefits, yet they are also racing to build it and have triggered an AI commercial boom with their products. This sometimes puts them at odds with their initial purely altruistic stance (critics point out the tension in OpenAI’s name vs. its closed-source large models). Still, OpenAI’s Charter even includes a line that if a competitor were close to AGI and better positioned to achieve it safely, OpenAI would step aside: an extraordinary statement reflecting their idealism (though in practice, unlikely to be tested)1.
Meta (Facebook): Until recently, Facebook's AI research arm (FAIR) focused on specific AI tasks (vision, NLP) and open science. But in early 2024, Zuckerberg declared a pivot toward AGI, saying Meta believes achieving more general AI is necessary for the future of its products[^32]. Meta has massive data (social data) and computing power, and its agenda might integrate AGI into virtual reality/metaverse plans or advanced content creation and moderation. Its motivation is partly catching up: seeing OpenAI and Google make waves, Meta does not want to be left behind in what could be the next tech paradigm.
Smaller Companies and Startups: There are also smaller outfits explicitly working on AGI:
- Anthropic, founded in 2021 by ex-OpenAI employees (including Dario Amodei), positions itself as an AI safety-conscious company building advanced AI (Claude, etc.) with a focus on alignment. Their approach suggests a “we’ll build it safer” stance.
- DeepMind spinouts and related startups, such as Inflection AI (co-founded by Reid Hoffman and Mustafa Suleyman), which works on personal AI assistants with an eye toward general capabilities, or the OpenCog Foundation, Ben Goertzel's open-source AGI project (less funded, but with a global network and even a blockchain spinoff, SingularityNET, aimed at decentralizing AI).
- These actors often have specific ideologies: e.g. Goertzel’s community is quite transhumanist and anti-centralization; they want AGI but in a decentralized, open way to avoid a single entity controlling it.
Academia: While big labs dominate resources, academia still hosts important AGI thinkers:
- Cognitive science departments working on human-like AI (e.g. projects integrating symbolic AI and neural nets to achieve reasoning + learning).
- Neuroscience-driven AGI research: the Blue Brain Project or Allen Institute’s work trying to simulate cortex could be seen as alternate routes to AGI through understanding the brain.
- Individual academics like Gary Marcus (NYU) have become public intellectuals critiquing the deep-learning-only approach and calling for hybrid models to reach AGI (Marcus often says current AI lacks common sense, implying different techniques are needed).
- Stuart Russell (Berkeley) co-wrote the standard AI textbook and is now vocal about the need to design AI that knows its limits and is provably aligned, an agenda he calls "provably beneficial AI." He is an academic bridging into policy advocacy, shaping the narrative that we should change how we specify AI objectives before AGI arrives.
Transhumanists and Tech Utopians: This group includes many Silicon Valley figures who fund or philosophize about AGI. For example, billionaire Peter Thiel has funded AI and longevity research with a view to staying ahead in the tech race (though he has also expressed skepticism about big AI claims at times). The transhumanist movement (with figures like Natasha Vita-More and Max More) often intersects with AGI in the context of uploading minds or AI-assisted human evolution. Its members may not be directly building AGI, but they shape discourse, for example by arguing that progress should not be impeded by excessive regulation because the upside is so high (in contrast to the risk-focused crowd).
Effective Altruism / Longtermists: This subset of EA is very influential in AGI policy and safety research. Organizations like Future of Life Institute (FLI) (co-founded by Max Tegmark) and Center for Human-Compatible AI (at Berkeley, led by Stuart Russell) are funded in part by donors like Open Philanthropy (which is EA-aligned) to investigate how to make AGI go well. The people in these circles often have direct ties to AI labs: e.g. many OpenAI and DeepMind researchers are aware of and sympathetic to these concerns. Their agenda is often to slow down or carefully manage the path to AGI. For instance, FLI’s open letter in March 2023 called for at least a 6-month pause on training AI systems more powerful than GPT-4, to allow time for safety frameworks[^54]. Signatories included Yoshua Bengio and other notable figures. Although controversial, this shows a segment of the community actively trying to influence the speed and governance of AGI development.
Governments and Geopolitics: Governments approach AGI in terms of strategy. The United States has a somewhat mixed approach: it relies on private-sector innovation but is increasingly pulling companies into dialogue about AI safety and regulation; in 2023 the White House convened AI company leaders to discuss managing AI advances responsibly. Partly, the U.S. government's agenda is to maintain a lead over China, which has its own huge AI push. China's agenda, as laid out in its national plans, is very ambitious: it sees AI (and by extension AGI) as a key to economic and military dominance. Chinese tech giants such as Baidu, Tencent, and Alibaba all invest in advanced AI research (including some projects on artificial general intelligence concepts), and the Chinese government also funds brain-inspired AI projects (e.g. efforts to simulate the brain, or large-scale smart-city AI deployments that could one day integrate AGI for management or surveillance).

The geopolitical frame is often "AGI as the new space race." If a nation-state achieved AGI first, it might gain an overwhelming advantage militarily (imagine autonomous weapons, strategy, and cyber offense/defense run by an AGI) and economically (AGI-run firms could outcompete human-run ones). This competitive framing can spur a race mentality, which actors like Bostrom and Musk worry could lead to skimping on safety: hence calls for international cooperation. But getting countries to cooperate on AGI, which is largely driven by private companies and remains abstract, is challenging (unlike, say, nuclear material, which is tangible and countable).
Military and Defense Actors: The defense establishments are definitely interested in advanced AI. Projects like the Pentagon’s Maven (AI for analyzing drone footage) and autonomous fighter programs indicate a trajectory toward more AI in warfare. While militaries likely won’t label anything “AGI” publicly, they are interested in AI that can handle complex, changing scenarios: essentially more general autonomous decision-making. Some worry about an arms race specifically to AGI for warfare, or that the first AGI might even originate as a military project due to the ample funding and high stakes. For now, much AGI-relevant work is in the open or in commercial labs, but one can imagine secret programs if the feasibility becomes clearer.
Agendas Summary:
- Power and Profit: Many actors want AGI because it could confer huge power (economic, political, military). This drives a race dynamic. Big tech and great powers exemplify this.
- Knowledge and Progress: For scientists and some companies, AGI is the ultimate scientific achievement: understanding intelligence itself. This agenda resembles climbing Everest "because it's there" or decoding the human genome: a pursuit of knowledge for its own sake.
- Human Beneficence: Some genuinely frame their pursuit in terms of curing disease, improving quality of life, and so on. This might be sincere, or PR, or both; for example, CEOs saying "AGI will help solve climate change"11. It sets the expectation that AGI = good, if done right.
- Safety and Control: Another group is focused on controlling the outcome; they may still be building it (OpenAI both builds and preaches safety), or they may solely research safety and call for regulation. Their agenda is to avoid catastrophe and to shape AGI’s goals and values.
- Inclusivity vs. Centralization: A tension exists between those who think AGI should be kept under heavy guard (maybe even by a single world government or a consortium, to prevent misuse) vs. those who think it should be distributed and democratized. OpenAI’s founding was motivated by not wanting AGI in the hands of a few, yet ironically now only a few orgs can train giant models. Some like Goertzel push for decentralized AGI networks (so no single Skynet), whereas others like Bostrom even speculate about a singleton scenario where one AI or aligned group takes control to prevent chaos. These ideological differences (libertarian vs. global governance approaches) influence how actors talk about policy.
A concrete example of different agendas clashing: when OpenAI launched ChatGPT and sparked massive hype, some insiders and former insiders (Elon Musk, a co-founder who had departed, among them) expressed concern that OpenAI was moving too fast or becoming too commercial, possibly undermining safety. Soon after, AI luminaries such as Hinton and Bengio warned that society isn't ready for what's coming. Meanwhile, companies like Google felt pressured (reportedly declaring an internal "code red") to release products so as not to be left behind. So there is a mix of caution and competition, and the outcome will likely be determined by which agenda carries more weight at critical moments. Governments might impose safety rules, such as requiring testing and audits of advanced AI, which could slow the corporate race; or competitive national-security logic might override those rules, pushing actors to cut corners to "win."
Influence: These actors shape public discourse through books (Bostrom's Superintelligence influenced many tech leaders and policymakers), media appearances (Altman testifying to the U.S. Congress and calling for AI regulation even as he leads in deploying it), and the marshaling of talent (the best AI researchers are often absorbed into the major labs driving toward AGI).
One should not underestimate the influence of narrative: key actors often propagate a story that justifies their approach. OpenAI's narrative is roughly "we're building AGI to benefit everyone, but we must build it in order to guide it safely; trust us as shepherds of this powerful technology." DeepMind's narrative might be "we advance AI science step by step (games, proteins…) and will apply it to global challenges." The longtermists' narrative is "AGI is potentially apocalyptic unless we solve alignment: this is the most pressing problem." These narratives compete and sometimes converge (there is overlap: OpenAI does alignment research, for instance).
Another actor worth noting: global institutions. There is as yet no UN-style body dedicated to AI, but UNESCO and the OECD have developed AI principles, and talk is emerging of international agreements on AI analogous to nuclear agreements (e.g. the Biden administration's discussions of global coordination). The agenda of such institutions would be to mitigate risks while spreading benefits, but they struggle to keep up with the rapid pace of the private sector.
The landscape of AGI actors is a mix of Silicon Valley optimism and competition, academic curiosity, philosophical and ethical vigilance, and geopolitical maneuvering. Each actor (be it a company, a visionary, or a government) contributes to how AGI is being developed and discussed. Their agendas sometimes align (e.g., most agree it should benefit humanity broadly, at least rhetorically) and sometimes conflict (e.g., profit vs. safety, or open collaboration vs. secret development). The interplay of these forces will shape not just if AGI is achieved, but how and under what conditions. As AGI moves from concept to reality, managing the agendas and power of its key stakeholders may become as important as managing the technology itself.
Criticisms and Controversies Surrounding AGI
- Philosophical Skepticism: Some philosophers and cognitive scientists argue that the entire concept of AGI is misguided or impossible in its strong form. Hubert Dreyfus famously critiqued early AI's overreliance on formal rules, insisting that human intelligence is embodied and can't be captured by symbol manipulation alone. Roger Penrose has argued that human consciousness might involve non-computable processes (quantum effects in the brain, on his theory), implying that a purely algorithmic AI might never attain true understanding or consciousness1. John Searle's Chinese Room argument suggests that even if a computer appears to understand language (passing a Turing Test), it may be manipulating symbols without any comprehension, highlighting the difference between simulating intelligence and actually having a mind3. These critiques don't claim narrow AI can't be powerful; they doubt whether what we call "general intelligence", especially consciousness, intentionality, and semantic understanding, can arise from current computational paradigms.
- "It's a Myth" Critiques: Some thinkers call AGI an "AI cargo cult" or a modern myth. Kevin Kelly (Wired magazine co-founder), for instance, wrote "The Myth of a Superhuman AI", arguing that expectations of a rapidly self-improving, godlike AI are overblown and not grounded in technical reality[^56]. These critics point out that intelligence is not a single scale: an AI might exceed humans in some respects (memory, calculation) while remaining poor at others (common sense, adaptability). They often add that there is no guarantee we can get from narrow AI to human-like AI just by scaling up. Some compare AGI belief to a secular religion, promising salvation or doom without evidence and assuming that because we can imagine it, it must eventually be built9. Noam Chomsky, echoing Edsger Dijkstra, has quipped that asking "can a machine think" is like asking "can a submarine swim?": it's a matter of definitions, and attributing human qualities to machines may be a category error6. This line of critique suggests much AGI talk is semantics or hype rather than substance.
- Social and Ethical Critiques: Scholars in fields like Science and Technology Studies (STS), sociology, and critical theory raise concerns that the pursuit of AGI is shaped by, and could reinforce, problematic social values. Some feminist and postcolonial critics argue that prevailing AI paradigms carry a Western, male-centric notion of intelligence (emphasizing domination over the environment, abstraction, and disembodiment), and question whether an AGI built under such paradigms would neglect qualities like empathy, relational thinking, or situated knowledge. Feminist STS scholar Donna Haraway and others have long critiqued the image of the disembodied AI "brain" as a continuation of mind/body dualism that ignores lived experience (though Haraway herself advocated embracing cyborg metaphors to break boundaries). Feminist critiques also point to the tech industry's gender imbalance and how it might bias what kind of AGI is made and for whom. Similarly, postcolonial critiques worry that AGI development is dominated by a few rich countries and companies, potentially imposing their cultural biases globally and echoing colonial patterns of power (with AGI as a new kind of colonial force).
- Economic and Political Critiques: Some economists and social thinkers argue that AGI is a distraction from more urgent issues or that it serves capital interests. For instance, focusing on a speculative future where robots do all work might draw attention away from current labor exploitation by AI (like gig workers or crowdworkers who train AI systems). The narrative of “AI will take jobs, so we need X policy” can be critiqued as either alarmist or as a way to justify not improving conditions for workers now. There is also the critique that AGI hype benefits big tech by attracting investment and deterring regulation (“don’t regulate us, we’re working on something that will save the world”). In this view, “AGI” sometimes functions as a buzzword to raise massive capital (analogous to how dot-com startups invoked grandiose future visions in 1999). Furthermore, political theorists caution that an AGI, if ever created, would emerge from current power structures: likely owned by a corporation or government. Unless there are new governance models, AGI might simply amplify existing concentrations of power (Big Tech or superpower governments), which is a deeply concerning prospect to those worried about surveillance or authoritarianism.
- Existential Risk vs. Present Harms: A prominent criticism from many AI ethicists is that AGI discourse overemphasizes distant hypothetical risks (like an AGI turning evil) at the expense of immediate ethical issues with AI. As scholar Kate Crawford has put it, worrying about a "machine apocalypse" draws focus away from how current AI systems, narrow as they are, perpetuate bias, enhance surveillance, or enable authoritarian control. This critique often targets the Effective Altruism/longtermist community, suggesting that its fixation on a speculative future is a form of privileged concern (often indulged in by well-resourced tech workers) that sidelines issues affecting marginalized groups today (facial recognition and policing, algorithmic bias in hiring, etc.). In response, AGI-focused researchers don't deny present harms but argue that if AGI could pose an existential threat, it deserves attention too. Still, the tension persists: should we allocate resources to prevent a possible AGI catastrophe in 50 years, or to fix AI injustice happening now? Some say the AGI apocalypse narrative is itself a cultural product: a "myth" that conveniently recenters the conversation on what powerful tech men fear (losing control) rather than on what society at large might fear (inequality, bias, job loss).
- Feasibility and Definition Critiques: Even within the AI research community, there is debate about whether "AGI" is a useful concept. One criticism is that the notion of a single system possessing every human cognitive ability may be ill-posed: human intelligence itself is an amalgam of specialized abilities working together, so do we really need a monolithic AGI, or could multiple narrow AIs jointly cover the same ground? Some argue that what will actually emerge is an "assembly" of AI tools (one for vision, one for language, etc.) that together provide the functionality of AGI without a unified "self" or agency (a toy sketch of this view appears after this list); if so, chasing a unified AGI might be the wrong approach. Others note that human intelligence is not uniform: savants, people with disabilities, and others show there are many ways intelligence can manifest. Building a "general" AI would therefore require deciding which human's capabilities are the benchmark (often a very Western, educated ideal of intelligence). This links to the critique that intelligence cannot be divorced from emotion, body, and society: real general intelligence might require a body to experience the world, so a purely disembodied AGI might always lack something fundamental. Many current AGI projects do not focus on embodiment (roboticists aside), which critics see as a flaw.
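To make the "assembly of narrow AIs" idea concrete, here is a minimal, purely illustrative Python sketch, assuming hypothetical specialist functions and a trivial keyword router (nothing here corresponds to a real system or library). The point is architectural: separate narrow components sit behind a single interface with no shared world model, memory, or goals, which is exactly why critics question whether such an assembly would deserve the label "AGI".

```python
# Illustrative sketch only: routing tasks to separate narrow "specialists"
# behind one interface. The specialists are trivial stand-ins, not real models;
# what matters is the architecture (no unified agent, just dispatch).

from typing import Callable, Dict


def arithmetic_specialist(task: str) -> str:
    """Stand-in for a narrow calculation tool."""
    expression = task.removeprefix("calc:")  # Python 3.9+
    # Toy evaluation with builtins disabled; never eval untrusted input in real code.
    return str(eval(expression, {"__builtins__": {}}))


def translation_specialist(task: str) -> str:
    """Stand-in for a narrow translation model (here, a tiny lookup table)."""
    lexicon = {"hello": "bonjour", "world": "monde"}
    word = task.removeprefix("translate:").strip().lower()
    return lexicon.get(word, f"<no translation for '{word}'>")


# The "assembly": a router maps task prefixes to specialists. There is no
# shared state or goal system tying the parts together.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "calc:": arithmetic_specialist,
    "translate:": translation_specialist,
}


def assembled_system(task: str) -> str:
    """Dispatch a task to whichever specialist claims it, if any."""
    for prefix, specialist in SPECIALISTS.items():
        if task.startswith(prefix):
            return specialist(task)
    return "<no specialist covers this task>"  # the gaps are the critic's point


if __name__ == "__main__":
    print(assembled_system("calc: 2 + 3 * 4"))    # -> 14
    print(assembled_system("translate: hello"))   # -> bonjour
    print(assembled_system("write me a sonnet"))  # -> <no specialist covers this task>
```

The gaps in coverage and the absence of any integrative state are precisely what this critique highlights: gluing specialists together yields breadth of function, not necessarily generality in the human sense.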
Criticisms of AGI come from numerous angles, often reflecting deeper philosophical or social concerns. They serve as a counterbalance to the often optimistic or deterministic narratives put forth by AGI proponents.
Starting with philosophical and cognitive critiques: the skepticism of Dreyfus and Searle in the late 20th century had a big impact during AI's earlier phases. Dreyfus, in his book "What Computers Can't Do" (1972), argued that human intelligence relies on tacit knowledge and being-in-the-world (drawing on Heidegger's phenomenology) that can't be captured by formal rules or logic. For a long time AI was heavily symbolic and rule-based, and Dreyfus believed that approach would never reach human flexibility. He was largely vindicated about the limitations of GOFAI (Good Old-Fashioned AI), though machine learning has since bypassed some of his criticisms in practice (learning from data instead of programming all the rules). Still, his core argument about embodiment and context resonates: even as large language models do impressive things, critics note that they lack grounding: they predict text but have no actual understanding of the world that text describes, leading to errors and absurdities.

This is essentially a modern echo of Searle's Chinese Room: the model has syntax (statistical patterns) but no semantics (real-world reference or comprehension)3. If one accepts Searle's view, an AGI might simulate understanding so well that we can't tell the difference, while still not "truly" understanding: a metaphysical rather than behavioral distinction. Some would say that doesn't matter if its behavior is indistinguishable from understanding; others consider it a crucial difference, especially when we talk about consciousness or the rights of an AI.
Penrose's perspective is more controversial: his idea that human consciousness involves quantum gravity is not widely accepted in neuroscience or AI. But his broader point stands as a challenge: perhaps human thinking cannot be fully replicated by an algorithm because the mind does something fundamentally non-computable. If that is true (a big if), classical AGI is impossible; even if it is false, it challenges a common computationalist assumption.
Chomsky’s remark6 about machine “thinking” being a decision about words underscores that some part of AGI is definitional: at what point do we say an AI “understands” or is “intelligent”? We might end up doing so by convention or convenience. One criticism is that the goalposts for AGI are always shifting (the so-called “AI effect”: once something is achieved, we say it wasn’t true intelligence). This skepticism holds that we might keep improving AI in various ways without ever crossing a magical threshold: we’ll just gradually acclimate to smarter machines and maybe someday realize we’ve had “AGI” for a while but it wasn’t a singular moment.
Now the myth and hype critiques: people like Kevin Kelly, Jaron Lanier, and others have warned that talk of superintelligent AGI can be exaggerated. Lanier has called some AI expectations a "technological mystical mania," suggesting we sometimes attribute more agency or potential to algorithms than is warranted. The critique often goes: AGI is always 20 years away. In the 1960s researchers said 20 years; in the 1980s some said 20 years; now, again, some say 20 years. Skeptics note this receding horizon and suggest it is a marketing tactic or wishful thinking.
A common comparison is to past technologies such as nuclear fusion, which has been "thirty years away" for decades despite big promises. To skeptics, AGI promises of solving everything or destroying everything sound similarly grandiose, and they call for focusing on tangible, verifiable progress.
Social critiques bring another dimension: AGI development isn't happening in a neutral space; it is shaped by those who code it and those who fund it. Feminist theorists like Alison Adam (author of "Gender, Ethics, and Information Technology") have examined how AI has historically been gendered: early AI programs took on roles coded as male (chess player, mathematician), while feminized domains (care, social intelligence) received far less attention. If AGI inherits those biases, what kind of "general intelligence" will it prioritize? Additionally, if the teams building AGI lack diversity, they may unconsciously embed particular cultural assumptions about what intelligence even is. An AGI might, for instance, be very Western-logical but fail to grasp other forms of problem-solving or knowledge systems.
There’s also a notion of coloniality of power in AI: that AI systems, including potential AGIs, could end up enforcing a sort of digital colonialism where one cultural logic (that of its creators) is embedded and spread globally, potentially marginalizing other ways of knowing or living. For example, if AGI systems are used to advise on governance or economics globally, would they push a one-size-fits-all approach that might conflict with local values or practices?
Critics from the global south have pointed out that the datasets and benchmarks used in AI are heavily Anglo-American-centric. An AGI trained predominantly on such data might not truly be "general" in a human sense; it might fail to understand contexts outside its training distribution or, worse, exert influence that undermines cultural diversity. This raises the question: general for whom?
Economic critiques: a concrete one is the fear of mass unemployment, which we touch on again in the additional perspectives. As a critique here, some labor economists and activists worry that AGI is an excuse used by tech capitalists to justify not just automation but depressed wages and worker precarity in the here and now. If everyone believes "the robots will take your job," it can weaken labor movements (why fight for rights if your job will disappear anyway?). Critics like Jaron Lanier have argued for data dignity and for paying people for the data that trains AI, to avoid a scenario where a few companies own AIs that embody the knowledge of millions of uncompensated people. This describes a current dynamic: ChatGPT, for example, was trained on internet content from writers and artists who were not paid for that use, which some see as a kind of enclosure of the commons. Projecting to AGI: if an AGI encapsulates the expertise of, say, 100 million workers (and thereby replaces them), who owns it and who benefits? Without intervention, the owner (the company) likely reaps the profit, aggravating inequality. Leftist perspectives critique this outcome as a continuation of capitalist accumulation by other means, with AGI as the ultimate "means of production" concentrated in very few hands.
AGI risk skepticism vs. bias/ethics activism: there is a rift between communities concerned with AI alignment (long-term, hypothetical) and those concerned with AI ethics (immediate, concrete). Scholars like Timnit Gebru, Joy Buolamwini, and Meredith Broussard focus on issues like racial bias in facial recognition or the environmental impact of training huge AI models (pointing out, for example, that GPT-3's training emitted as much CO2 as several cars over their lifetimes). They sometimes critique AGI talk as detracting from these urgent issues. Gebru co-authored a paper on the dangers of large language models (which led to her controversial exit from Google), highlighting the risks of ever-larger models and unchecked deployment. From her perspective, and that of many in AI ethics, it is irresponsible for companies to rush toward AGI-ish systems when even simpler systems are not yet properly governed. Some ethicists also caution that panic about "AI might kill us all" can inadvertently serve Big Tech by making these companies seem powerful, and thus perhaps in need of only gentle oversight rather than drastic measures, since they cast themselves as the only ones who can save us from the AI they create: a conflict of interest. It can also overshadow harms to marginalized communities with a hypothetical harm to everyone equally, which in practice shifts attention away from those currently harmed toward an imagined, more speculative future harm.
Feasibility critiques often come from AI researchers themselves, who point out how far we are from certain capabilities. While GPT-4 is impressive, critics note that it doesn't truly understand or maintain consistent world models: it mimics patterns of reasoning without an underlying logical model of the world, which is why it makes reasoning errors. Achieving an AI that robustly handles physical reality, human social nuance, long-term planning, and learning new concepts on the fly involves unsolved research problems. Some scientists believe we may need fundamentally new paradigms (not just scaled-up deep learning) to get there, which could take a long time or possibly never fully happen. Robotics experts point out how hard sensorimotor integration is: an AI might be a genius in simulation but clueless in the messy real world. So an AGI that can act in the world like a person is many breakthroughs away (which is why some see disembodied AGI as more plausible near-term; but then, is it really "general"?).
Emotional intelligence is another piece: human general intelligence includes things like empathy, emotions guiding decisions, etc. An AGI could theoretically simulate emotion or at least detect and respond appropriately, but would it have “genuine” emotions? Does that matter for it to interact well with humans? Some psychologists argue that intelligence minus emotion could be dangerous or at least very alien: what if an AGI just doesn’t care about life because it can’t “feel”? Does aligning its objectives matter if it has no empathy? These questions feed both the risk concerns (an unfeeling superintelligence might be very dangerous) and philosophical concerns (can we call it intelligent in the human sense if it lacks inner experience?).
In essence, the critique of AGI concept reminds us that “intelligence” is not a clear-cut, singular thing. As one critic put it, there are many intelligences: spatial, social, emotional, mathematical, etc., and even those are intertwined with environment and culture. So building an “artificial general intelligence” may be an ill-defined goal: what context, what culture, what body, what values? Critics challenge AGI proponents to clarify which human equivalence they seek and to recognize the potential hubris in assuming we can encapsulate the totality of human cognition (let alone surpass it) easily.
On a final note, there is a meta-critique that the discourse around AGI is too binary (utopia or doom, possible or impossible) whereas reality could be more complex. Maybe we will get something in between: highly advanced AIs that still have blind spots or need human complements. The fixation on the "AGI" milestone could be misguided; progress might be gradual and multi-faceted. Some researchers prefer to talk about "artificial general intelligences" (plural), imagining different systems with different forms of generality rather than one monolithic AGI.
All these criticisms do not necessarily deny that AI will progress dramatically; rather, they caution how we think about it and what assumptions we bake in. They call for humility, diversity of thought, and perhaps a re-examination of why we want AGI in the first place. Is it for human flourishing, for dominance, or out of technophilic pride? Answering that might determine what kind of AGI we attempt to build, and the critiques ensure those questions aren't glossed over in the excitement.
Additional Perspectives
See AGI - Additional Perspectives for an in-depth exploration of how AGI intersects with capital, labor, climate, and geopolitics.
References
- Artificial general intelligence - Wikipedia
- Chinese Room Argument - Internet Encyclopedia of Philosophy. https://iep.utm.edu/chinese-room-argument/
- The Multiverse According to Ben: Why is evaluating partial progress toward human-level AGI so hard?
- Artificial intelligence - Machine Learning, Robotics, Algorithms - Britannica. https://www.britannica.com/technology/artificial-intelligence/Is-artificial-general-intelligence-AGI-possible
- AI scientist Ray Kurzweil: 'We are going to expand intelligence a millionfold by 2045' - The Guardian. https://www.theguardian.com/technology/article/2024/jun/29/ray-kurzweil-google-ai-the-singularity-is-nearer
- Silicon Valley's vision for AI? It's religion, repackaged - Vox
- Silicon Valley's Obsession With AI Looks a Lot Like Religion - The MIT Press Reader. https://thereader.mitpress.mit.edu/silicon-valleys-obsession-with-ai-looks-a-lot-like-religion/
- OpenAI's CEO vision of humanity's AI-powered glorious future: 'Fixing the climate, establishing a space colony, and the discovery of all of physics' - PC Gamer. https://www.pcgamer.com/software/ai/openais-ceo-vision-of-humanitys-ai-powered-glorious-future-fixing-the-climate-establishing-a-space-colony-and-the-discovery-of-all-of-physics/
- Putin says the nation that leads in AI 'will be the ruler of the world' - The Verge. https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world