Human In The Loop

The Biggest Chip Deal in History

Meta just committed to deploying millions of Nvidia processors—GPUs, CPUs, networking, the full stack—in a deal analysts estimate at tens of billions of dollars. It is the largest single AI hardware commitment ever made.

February 18, 2026 · 11 min read · Industry · By Justin Sparks

Millions of chips. Tens of billions of dollars. One vendor.

On February 17, 2026, Meta Platforms and Nvidia jointly announced a multi-year partnership that dwarfs every previous AI hardware procurement in both scope and ambition. The terms are staggering even by the standards of an industry that has lost all sense of proportion: Meta will deploy millions of Nvidia processors across its global data center fleet, covering current-generation Blackwell GPUs, next-generation Rubin GPUs, standalone Grace CPUs, upcoming Vera CPUs, and Spectrum-X Ethernet networking infrastructure. Neither company disclosed a dollar figure. Ben Bajarin, CEO of Creative Strategies, offered the market’s best estimate: “The deal is certainly in the tens of billions of dollars.”

To put that in context, Meta currently accounts for roughly 9% of Nvidia’s total revenue. This deal does not merely preserve that relationship—it deepens it across every layer of the data center stack. GPUs were expected. CPUs were not. Networking was a bonus. The totality of the commitment suggests that Meta has looked at the next five years of AI infrastructure requirements and concluded that Nvidia is the only vendor capable of delivering at the scale and on the timeline it needs.

The announcement comes six weeks after Mark Zuckerberg told investors that Meta plans to spend approximately $135 billion on AI infrastructure in 2026 alone—nearly double the prior year’s outlay. The company has committed to $600 billion in total U.S. infrastructure investment through 2028. These are not research budgets. They are construction budgets. Meta is building 30 data centers—26 in the United States and four internationally—including Prometheus in New Albany, Ohio (1 gigawatt capacity) and the jaw-dropping Hyperion facility in Richland Parish, Louisiana, designed for 5 gigawatts. For reference, 5 gigawatts is roughly the output of five nuclear power plants.


From Blackwell to Rubin—and everything in between

The deal spans three generations of Nvidia silicon and three product categories (GPUs, CPUs, and networking) that have never before been bundled at this scale. Start with what’s shipping now: Blackwell GPUs, Nvidia’s current flagship, are already being deployed across Meta’s existing facilities. These are the workhorses powering the training runs for Meta’s Llama model family and the inference infrastructure behind the AI features embedded in Facebook, Instagram, WhatsApp, and Threads. The new deal expands this fleet by millions of additional units.

Next come the Rubin GPUs, Nvidia’s next-generation platform announced at CES 2026 in January. Rubin represents a full architectural generation beyond Blackwell, with each Vera Rubin module combining one Vera CPU and two Rubin GPUs in a single package. Meta is committing to these systems before they ship—a level of forward purchasing that reflects both confidence in Nvidia’s roadmap and anxiety about securing allocation in what remains a supply-constrained market.

But the most consequential line item may be the one that got the least attention in the initial coverage: Spectrum-X Ethernet switches. Networking is the unglamorous plumbing of AI infrastructure, but it is increasingly the bottleneck. Training a frontier model requires thousands of GPUs communicating at extraordinary bandwidth with minimal latency. Nvidia’s networking division—built largely through the 2020 acquisition of Mellanox—has become a strategic weapon. By bundling GPUs, CPUs, and networking into a single deal, Nvidia is selling Meta not just chips but an integrated system architecture. That integration is harder for competitors to unbundle than any individual component.
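To see why the switches matter, a rough back-of-envelope helps. The sketch below uses the standard ring all-reduce cost model, in which each GPU sends and receives roughly twice the gradient size per synchronous training step; the model size, GPU count, and link speed are illustrative assumptions, not figures from the deal.

```python
# Back-of-envelope: why interconnect bandwidth gates large training runs.
# All numbers below are illustrative assumptions, not figures from the deal.

def ring_allreduce_seconds(params_billion: float, n_gpus: int,
                           link_gbps: float, bytes_per_grad: int = 2) -> float:
    """Time for one gradient all-reduce using the standard ring algorithm,
    where each GPU sends and receives 2*(N-1)/N times the gradient size."""
    grad_bytes = params_billion * 1e9 * bytes_per_grad
    traffic_bytes = 2 * (n_gpus - 1) / n_gpus * grad_bytes
    link_bytes_per_second = link_gbps * 1e9 / 8
    return traffic_bytes / link_bytes_per_second

# A hypothetical 400B-parameter model, fp16 gradients, 400 Gb/s per-GPU links:
t = ring_allreduce_seconds(params_billion=400, n_gpus=8192, link_gbps=400)
print(f"~{t:.1f} s of pure communication per synchronous step")  # ~32.0 s
```

Real training stacks overlap communication with compute and shard gradients across several parallelism dimensions, so the naive figure overstates the stall. But the raw volume explains why per-GPU link bandwidth, and therefore the switch fabric, increasingly sets the pace of a training run.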

“Meta doing this at scale is affirmation of the soup-to-nuts strategy that Nvidia’s putting across both sets of infrastructure: CPU and GPU.” — Ben Bajarin, CEO of Creative Strategies

Nvidia just declared war on Intel and AMD

The headline is the GPU volume. The story is the CPUs. Meta is the first hyperscaler to deploy Nvidia’s Grace central processing units as standalone chips—not paired with GPUs in the Grace Hopper superchip configuration that Nvidia has marketed since 2023, but running independently in CPU-only servers handling general-purpose data center workloads. This is new territory for Nvidia, and it puts the company in direct competition with Intel and AMD in the market those two have dominated for decades.

The Grace CPU is built on 72 Arm Neoverse V2 cores with LPDDR5x memory. It is not an x86 processor. It does not run the instruction set that has defined server computing since the 1990s. What it does, according to Meta’s own testing, is deliver up to 2x the performance per watt on back-end workloads compared to the x86 alternatives currently in its fleet. For a company operating data centers measured in gigawatts, a 2x efficiency improvement on CPU workloads is not a marginal optimization. It is a structural cost reduction worth hundreds of millions of dollars annually.
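That “hundreds of millions” figure is easy to sanity-check. The sketch below assumes, purely for illustration, that 500 megawatts of Meta’s fleet is CPU-only servers and that power costs $0.07 per kilowatt-hour; neither number comes from the announcement.

```python
# Back-of-envelope for the "structural cost reduction" claim.
# Fleet share and electricity price are assumptions, not disclosed figures.

HOURS_PER_YEAR = 8760

def annual_savings_usd(cpu_fleet_mw: float, perf_per_watt_gain: float,
                       usd_per_kwh: float) -> float:
    """Electricity saved by doing the same CPU work at higher perf/watt."""
    new_mw = cpu_fleet_mw / perf_per_watt_gain
    saved_kwh = (cpu_fleet_mw - new_mw) * 1000 * HOURS_PER_YEAR
    return saved_kwh * usd_per_kwh

# Suppose 500 MW of a multi-gigawatt fleet is CPU-only servers, Grace
# delivers the reported 2x perf/watt, and power costs $0.07/kWh:
print(f"${annual_savings_usd(500, 2.0, 0.07) / 1e6:.0f}M per year")  # ~$153M
```

Under those assumptions, the reported 2x gain alone recovers on the order of $150 million a year in electricity, before counting the servers, racks, and cooling capacity it frees up.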

The next-generation Vera CPU pushes further: 88 custom Arm cores, simultaneous multi-threading, and confidential computing features designed for privacy-sensitive workloads. Meta has indicated that Vera’s confidential computing capabilities will be deployed specifically for WhatsApp’s encrypted messaging infrastructure—processing AI features on private data without exposing that data even to Meta’s own operators. Standalone Vera CPU servers are planned for 2027.

The x86 reckoning

For Intel and AMD, this is not an abstract competitive threat. It is a concrete loss of server CPU sockets at one of the world’s largest data center operators. Intel, already reeling from years of process technology delays and a collapsing market share in high-performance computing, can least afford to cede ground. AMD, which had been gaining share in the data center with its EPYC processors, now faces a pincer: Nvidia attacking from the Arm side with Grace, while Amazon (Graviton) and Google (Axion) continue expanding their own Arm-based custom silicon.

Bajarin framed the shift succinctly: “We were in the ‘training’ era, and now we are moving more to the ‘inference era,’ which demands a completely different approach.” Inference workloads—running trained models in production rather than training new ones—are CPU-heavy, latency-sensitive, and energy-constrained. They favor exactly the kind of efficient, purpose-built Arm processors that Nvidia is now selling. The CPU market has been a duopoly for so long that most of the industry has forgotten it could be anything else. Meta just reminded them.


Why Meta is betting everything on Nvidia—despite trying not to

The irony of this deal is that Meta has spent three years and considerable engineering resources trying to reduce its dependence on Nvidia. The company’s in-house chip program, MTIA (Meta Training and Inference Accelerator), was supposed to be the escape hatch. MTIA v1 shipped in 2023 for inference workloads. MTIA v2 followed in 2024. A training-focused chip was designed, taped out in March 2025, and manufactured by TSMC. The original plan called for deployment in 2026. That timeline has slipped. The Financial Times reported that Meta experienced “technical challenges” with the new training chips—a diplomatic way of saying they don’t work well enough to deploy at the scale Meta needs.

The MTIA program is not dead. Meta continues to describe it as a long-term investment in workload-specific optimization and energy efficiency. But the expanded Nvidia deal tells a different story about the near-term reality: when you need millions of processors delivered on a schedule that matches a $135 billion capital expenditure plan, you go to the vendor who can actually deliver. Custom silicon is a multi-year bet. Nvidia ships now.

Meta is also not putting all its eggs in the Nvidia basket, even if this deal makes it look that way. The company operates a substantial fleet of AMD Instinct GPUs and participated in designing AMD’s Helios rack systems, which are scheduled to launch later in 2026. In November 2025, reports emerged that Meta was exploring the use of Google’s tensor processing units—a revelation that briefly knocked 4% off Nvidia’s stock price. The diversification strategy is real. But diversification and dependence are not mutually exclusive, and this deal makes clear which vendor sits at the center of Meta’s AI infrastructure stack.

The model behind the metal

All of this hardware serves a specific purpose: Meta is building the infrastructure to train and deploy Avocado, its next frontier model and the successor to Llama 4. Llama 4’s reception was mixed—competitive on benchmarks but viewed by many in the community as a step behind Anthropic’s Opus 4.6 and Google’s Gemini 3 Pro. Zuckerberg has framed Meta’s AI ambition in characteristically grandiose terms: “delivering personal superintelligence to everyone in the world.” Translating that vision into reality requires compute at a scale that no single company has ever assembled. This deal is how you start.


What this means for everyone else

The Meta deal is the latest and largest data point in a pattern that has defined the semiconductor industry since late 2024: the hyperscalers are spending at levels that make previous capital cycles look like rounding errors, and Nvidia is capturing a disproportionate share of that spending. Microsoft, Amazon, Google, and now Meta have all announced AI infrastructure budgets in the tens or hundreds of billions. The aggregate capital committed to AI compute by the five largest cloud and consumer AI companies now exceeds $1 trillion through 2028. Nvidia’s ability to serve as a one-stop shop—GPUs, CPUs, networking, software stack—gives it a structural advantage in capturing wallet share that no competitor can currently match.

For Nvidia’s competitors, the picture is mixed. AMD retains a meaningful position in Meta’s fleet and continues to gain traction with its MI300X and upcoming MI400 accelerators. Broadcom is co-developing custom AI chips with OpenAI. Cerebras and Groq are carving out inference-specific niches, though Nvidia blunted Groq’s momentum by acquiring key talent in December. The competitive field is not empty. But none of these players can offer what Nvidia just sold Meta: a vertically integrated solution spanning training GPUs, inference GPUs, standalone CPUs, and high-bandwidth networking, all delivered at scale across multiple product generations with a single vendor relationship.

For the broader AI industry, the deal reinforces a dynamic that many find uncomfortable: the companies building the most important AI models are becoming increasingly dependent on a single chip supplier, while that supplier is becoming increasingly dependent on a handful of customers. Meta is 9% of Nvidia’s revenue. Nvidia is the foundation of Meta’s AI infrastructure. Neither can easily replace the other. Whether this mutual dependence represents healthy partnership or systemic risk depends entirely on whether you think the current trajectory of AI infrastructure spending is sustainable. The market, for now, has rendered its verdict: Nvidia closed up 3.1% on the day of the announcement. Intel closed down 1.8%. The chips, as it were, are on the table.