AMD, Meta Strike Multi-Year Deal to Power Next Wave of AI Infrastructure
Advanced Micro Devices and Meta Platforms have agreed to a sweeping multi-year partnership aimed at rapidly expanding artificial intelligence infrastructure, deepening ties between the chipmaker and one of the world’s largest technology companies.
Under the agreement, AMD will supply Instinct GPUs representing up to six gigawatts of compute capacity across multiple product generations to support Meta’s next phase of AI development. The companies said the collaboration will align their silicon, systems and software roadmaps to build AI platforms purpose-built for Meta’s massive and evolving workloads.
The first deployment will rely on a custom AMD Instinct GPU based on the MI450 architecture, optimized specifically for Meta’s infrastructure needs. Shipments supporting the initial one-gigawatt phase are expected to begin in the second half of 2026. Those systems will also incorporate 6th Gen AMD EPYC processors, codenamed “Venice,” and run on AMD’s ROCm software stack.
The infrastructure will be built on AMD’s Helios rack-scale architecture, which the two companies developed jointly and unveiled at the 2025 Open Compute Project Global Summit. The rack-level design is intended to enable hyperscale AI deployments with improved performance, efficiency and scalability.
Scaling AI at Gigawatt Levels
The six-gigawatt commitment marks one of the largest publicly disclosed AI infrastructure agreements to date. By framing the partnership around power capacity rather than chip volume alone, the companies underscored the enormous energy footprint and computing scale required to train and deploy cutting-edge AI models.
AMD Chair and CEO Lisa Su said the collaboration positions the company at the center of one of the industry’s largest AI buildouts.
“This multi-year, multi-generation collaboration across Instinct GPUs, EPYC CPUs and rack-scale AI systems aligns our roadmaps to deliver high-performance, energy-efficient infrastructure optimized for Meta’s workloads,” Su said in a statement.
Meta CEO Mark Zuckerberg characterized the deal as a key step in diversifying the company’s AI compute base.
“We’re excited to form a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence,” Zuckerberg said. “I expect AMD to be an important partner for many years to come.”
Deepening CPU and GPU Collaboration
Beyond GPUs, the agreement expands Meta’s use of AMD EPYC server processors. Meta has deployed millions of EPYC CPUs over several generations and has already incorporated AMD Instinct MI300 and MI350 series GPUs into its infrastructure.
As AI systems grow more complex, CPUs play a critical role in orchestrating workloads, managing memory and enabling efficient scaling across clusters of accelerators. Meta will act as a lead customer for AMD’s 6th Gen EPYC “Venice” processors, as well as “Verano,” a forthcoming EPYC chip designed with workload-specific optimizations aimed at improving performance per dollar and per watt.
The companies said they are coordinating development timelines to ensure tighter integration between GPU accelerators, CPUs and system-level architecture — a strategy intended to deliver full-stack AI platforms tuned for hyperscale deployment.
Performance-Based Equity Structure
As part of the agreement, AMD issued Meta a performance-based warrant for up to 160 million shares of AMD common stock. The warrant is structured to vest in tranches as specific shipment milestones are reached.
The first tranche will vest upon completion of the initial one-gigawatt deployment, with additional tranches tied to scaling purchases up to the full six-gigawatt commitment. Vesting is also linked to AMD’s stock reaching certain price thresholds and to Meta meeting technical and commercial milestones.
AMD CFO Jean Hu said the structure tightly aligns incentives between the companies and is expected to drive meaningful financial benefits.
“We expect this partnership to drive substantial multi-year revenue growth and be accretive to our non-GAAP earnings per share,” Hu said, adding that it supports AMD’s long-term financial model.
Strategic Implications
The deal highlights intensifying competition among semiconductor companies vying to supply the AI infrastructure that underpins large language models, recommendation systems and emerging “superintelligence” ambitions.
For Meta, which operates some of the world’s largest data center networks, the partnership strengthens supply diversification while enabling custom silicon configurations tailored to its AI research and production workloads. For AMD, the agreement reinforces its position as a primary alternative to incumbent AI chip providers and signals confidence in its Instinct GPU roadmap.
The companies said they will continue collaborating across silicon, systems and software layers to accelerate deployment of AI-powered services used by billions globally.
AMD will host a conference call to discuss the announcement, with a live webcast available through its investor relations website.
While both companies expressed optimism about the partnership’s scale and impact, they cautioned that forward-looking statements about performance, timelines and financial outcomes remain subject to risks and uncertainties, including manufacturing capacity, supply chain constraints, market conditions and broader economic factors.
Still, the agreement underscores the accelerating race to build the computational backbone of the AI era — measured not just in chips, but in gigawatts.
