A Spending Spree Without Precedent
Something extraordinary happened in the first two weeks of February 2026. As the five largest US cloud and AI infrastructure providers filed their quarterly earnings, a headline figure began to crystallize: somewhere between $660 billion and $690 billion in planned capital expenditure for the year. Nearly double what they spent in 2025. Roughly three-quarters of it directed squarely at AI compute, data centers, and networking.
To put that in perspective, $690 billion is larger than the entire annual GDP of Sweden. It exceeds the combined annual military budgets of every NATO country except the United States. It is, by any measure, the largest coordinated infrastructure buildout in the history of private enterprise.
And every single one of these companies says the same thing: they’re supply-constrained, not demand-constrained. The capacity they build is being consumed as fast as they can deploy it.
Year-over-Year: The Doubling
What makes these numbers staggering isn’t just the raw totals—it’s the velocity. Every company in the “Big Five” is raising its 2025 spend by half or more, and some are nearly doubling it. Amazon jumped from $131 billion to $200 billion. Alphabet revised its guidance upward three separate times before landing between $175 and $185 billion—more than double its $91 billion in 2025. Meta went from $72 billion to a guided range of $115–$135 billion.
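As a quick sanity check, the implied growth multiples can be computed straight from the figures above. This is a back-of-envelope sketch that takes the midpoints of the guided ranges, not official disclosures:

```python
# Year-over-year capex growth implied by the reported figures.
# Guided ranges are taken at their midpoints; values in billions of USD.
capex = {
    "Amazon":   (131, 200),
    "Alphabet": (91, (175 + 185) / 2),
    "Meta":     (72, (115 + 135) / 2),
}

for company, (y2025, y2026) in capex.items():
    growth = y2026 / y2025
    print(f"{company:8s} {y2025:>6.0f} -> {y2026:>6.0f}  ({growth:.2f}x)")
```

Amazon’s jump works out to roughly 1.5x, Alphabet’s to nearly 2x, and Meta’s to about 1.7x.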
These are not speculative startups chasing hype. These are the five most profitable technology companies on Earth, each independently concluding that underspending on AI infrastructure is a greater risk than overspending. As Sundar Pichai put it during Alphabet’s earnings call: the risk of underinvesting is dramatically greater than the risk of overinvesting.
Who’s Spending What—and Why
Each company has its own strategic calculus. The headline numbers are similar, but the motivations, revenue justifications, and risk profiles vary enormously. Here’s how the Big Five are each framing their bets.
Amazon leads the field with a capex number that exceeded Wall Street consensus by more than $50 billion. AWS grew 24% in Q4—its fastest in 13 quarters—and the backlog climbed to $244 billion. CEO Andy Jassy insists capacity is being monetized as fast as it’s installed. Custom chips like Trainium and Graviton are on track for over $10 billion in revenue. The stock dropped 10% anyway.
Alphabet posted the most aggressive year-over-year increase, revising its capex guidance upward three times in 2025 before settling on a range of $175–$185 billion. Google Cloud revenue grew 48% YoY in Q4, with a backlog that surged 55% sequentially to $240 billion. Gemini has 750 million monthly active users. The company also cut Gemini serving costs by 78% through model optimization—a critical efficiency signal.
Microsoft didn’t give a full-year number, but at $37.5 billion in Q4 alone, it’s tracking well above $120 billion annualized. The company disclosed an $80 billion backlog of Azure orders that cannot be fulfilled due to power constraints. Their spend also funds the ongoing OpenAI partnership, Copilot expansion, and sovereign AI deployments globally.
Meta, as noted, is guiding to $115–$135 billion. Oracle rounds out the five with an estimated $50 billion, driven largely by the Stargate partnership with OpenAI and SoftBank. A figure that would have been unthinkable for Oracle just two years ago now positions it as the infrastructure landlord of the AI revolution.
The $500 Billion Backdrop
Behind the Big Five’s spending sits an even larger shadow: Project Stargate, the $500 billion AI infrastructure joint venture between OpenAI, SoftBank, and Oracle. Announced at the White House in January 2025 and now well into construction, Stargate represents the single largest private-sector infrastructure project in human history.
The headline terms: $500 billion of committed investment over four years, with $100 billion deployed in the initial phase. Nearly 7 gigawatts of data center capacity is planned across six US sites in Texas, New Mexico, Ohio, and Wisconsin, with a total target of 10 gigawatts by the end of the decade. SoftBank leads financing and OpenAI leads operations, with Oracle, Microsoft, NVIDIA, and Arm as technology partners.
Stargate is not included in the Big Five capex totals above—it’s additional spending, layered on top. The Abilene, Texas flagship campus is already operational, with NVIDIA GB200 AI racks delivered and running early training workloads for OpenAI’s next-generation models. Five additional sites were announced in September 2025, and international expansions to Norway, the UAE, and Argentina are underway.
The project has also expanded beyond US borders. Stargate UAE, a 1 GW data center partnership with G42 and NVIDIA, is expected to begin delivering 200 MW of capacity by 2026. Stargate Norway, OpenAI’s first European initiative in Narvik, leverages hydropower to run 100,000 planned GPUs.
Add it all up and the total global AI infrastructure investment in 2026—hyperscalers plus Stargate plus sovereign AI funds from the EU, Japan, South Korea, and the Middle East—easily approaches $1 trillion. We are watching the construction of a new kind of utility in real time.
It’s Not About Money. It’s About Megawatts.
If you listen carefully to the earnings calls, a pattern emerges. Every single hyperscaler is telling the same story: the constraint isn’t capital, customers, or even chips anymore. It’s electricity. The physical grid cannot deliver power fast enough to the places where data centers are being built.
Microsoft’s $80 billion backlog of unfulfillable Azure orders is stalled not by silicon shortages or software limitations, but by the inability to plug servers into the wall. Amazon’s data center division added nearly 4 gigawatts of capacity in 2025 and plans to double that by 2027, a rate of deployment that requires negotiating directly with regional utilities and building dedicated substations.
Alphabet’s infrastructure lead Amin Vahdat has described the pace required to keep up: capacity must double every six months to meet demand. To make that work, the company acquired data center firm Intersect for $4.75 billion in December and is building multi-campus complexes across Virginia, Texas, and Ontario with dedicated renewable generation and optical fiber links.
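That “double every six months” pace is worth pausing on, because it compounds brutally. A toy illustration of what the stated growth rate implies, starting from an arbitrary unit of capacity:

```python
# Illustration of "capacity must double every six months."
# Hypothetical: start from 1 unit of installed capacity and compound.
capacity = 1.0
for half_year in range(1, 7):  # six doublings = three years
    capacity *= 2
    print(f"After {half_year * 0.5:.1f} years: {capacity:.0f}x starting capacity")
```

Six doublings over three years means 64 times the starting footprint, which is why the buildout runs through dedicated substations and multi-campus complexes rather than incremental expansion.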
This is the hidden story of the AI infrastructure sprint. The money is there. The demand is there. The chips are increasingly there. But the grid was designed for a world before AI consumed the energy budget of medium-sized cities. Whoever solves the power bottleneck first doesn’t just win the infrastructure race—they own the physical layer of the AI economy.
Are We in a Bubble?
On Microsoft’s earnings call, an analyst asked it directly: “Are we in a bubble?” On Alphabet’s call, another pressed: “What early signs give you confidence that the spending is really driving better returns long-term?” These are the right questions. The answers are complicated.
The Bull Case
The demand signals are real. AWS grew 24%—its fastest in 13 quarters. Google Cloud grew 48% with a backlog north of $240 billion. Azure backlog hit $80 billion. All five hyperscalers report that AI capacity is absorbed as fast as it can be deployed. Enterprise AI adoption is broadening from experimentation into production, and inference workloads—the computational cost of actually using AI models, not just training them—are scaling faster than anyone anticipated.
Alphabet also reported cutting Gemini serving costs by 78% through optimization in 2025. That’s the kind of efficiency gain that suggests the economics will improve even as deployment scales. Spending more today, but getting dramatically more per dollar tomorrow.
The Bear Case
Bank of America credit strategists have pointed out that these five companies are collectively reaching a limit on how much they can fund from cash flows. AI capex is projected to consume 94% of operating cash flows (minus dividends and buybacks) in 2026, up from 76% in 2024. Multiple companies are turning to debt markets to bridge the gap. Meta brought the year’s biggest investment-grade bond deal at $30 billion.
The depreciation math is sobering. By 2030, the hyperscalers plan to add roughly $2 trillion in AI-related assets. At a 20% annual depreciation rate, that implies $400 billion in depreciation expense annually—more than their combined 2025 profits. And these totals don’t capture spending by the “neoclouds” (CoreWeave, Lambda, Crusoe) or Stargate’s full buildout.
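The depreciation arithmetic is simple enough to verify. This sketch assumes, as the estimate above does, straight-line depreciation at 20% per year, i.e. a five-year useful life:

```python
# Back-of-envelope check of the bear-case depreciation math.
# Assumption: ~$2 trillion in AI-related assets by 2030, depreciated
# straight-line at 20%/year (a five-year useful life).
asset_base_bn = 2_000   # projected asset base, in billions of USD
annual_rate = 0.20      # straight-line depreciation rate

annual_depreciation_bn = asset_base_bn * annual_rate
print(f"Implied annual depreciation: ${annual_depreciation_bn:,.0f}B")
```

That $400 billion a year is the figure being compared against the Big Five’s combined 2025 profits.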
The software sector has lost 30% of its value over the past three months, driven by uncertainty about how AI reshapes incumbent tools and whether infrastructure spending delivers returns on a timeline Wall Street can stomach. Stock drops of 8–10% on capex announcements—even alongside strong earnings—show that investor patience has limits.
The Infrastructure Layer as Moat
Here’s the part that gets missed in the spending-spree headlines: this isn’t just an investment cycle. It’s a structural power grab. Every dollar spent on data centers, fiber, substations, and custom silicon is a dollar that raises the barrier to entry for everyone else. The Big Five aren’t just buying compute. They’re buying the physical layer of the AI economy.
Consider what this means for the next tier of AI companies—the Anthropics, the Mistral AIs, the xAIs of the world. They cannot build this infrastructure themselves. They are, by necessity, tenants in buildings owned by their competitors. Even OpenAI, the most well-funded AI startup in history, needed a $500 billion partnership with SoftBank and Oracle to build Stargate because it couldn’t do it alone.
The model layer is increasingly commoditized—open-weight Chinese models from Qwen and DeepSeek now match proprietary Western models on many benchmarks at a fraction of the cost. But the infrastructure layer? That’s consolidating into fewer and fewer hands, and it will likely stay that way. You can fine-tune a model on a laptop. You cannot build a gigawatt data center campus without access to capital, power contracts, and years of lead time.
The Human Loop
For the humans in the loop—the builders, the regulators, the enterprise buyers making decisions about which cloud to bet on—the implications are practical:
If you’re building on AI, your infrastructure choices are narrowing. Multi-cloud strategies will matter more, not less, because vendor lock-in at the infrastructure layer is permanent in ways that model-level lock-in is not. When your models run on someone else’s gigawatt campus, switching costs aren’t measured in API calls—they’re measured in data gravity.
If you’re in security, the centralization of AI compute into a handful of physical locations creates new attack surfaces. A few thousand acres of land in central Texas are about to host a meaningful percentage of the world’s AI training capacity. The physical security implications are enormous.
If you’re watching policy, the energy question will dominate 2026 AI regulation. States are already fighting over who pays for data center power. The federal government is navigating export controls on AI hardware. And the concentration of compute in five companies raises questions about market power that existing antitrust frameworks weren’t designed to address.
The $690 billion isn’t just a number. It’s a declaration: the companies that control the physical infrastructure of AI will control the economic value it produces. Whether that bet pays off—for the companies, their shareholders, and the rest of us—is the defining financial question of this decade. We’re all in the loop now.