Meta to deploy 6GW of AMD GPUs

Meta is partnering with AMD to deploy six gigawatts (6GW) of GPU compute as part of its AI buildout.

The deal comes hot on the heels of Meta’s large-scale AI infrastructure agreement with NVIDIA to secure H100‑class and next‑generation Blackwell GPUs for training and inference across its data centres.

Meta’s move to lock in capacity from both NVIDIA and AMD underscores its strategy to diversify suppliers, reduce dependence on a single GPU vendor, and ensure enough compute to power its AI models and “personal superintelligence” roadmap at hyperscale.

“We’re excited to form a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence. This is an important step for Meta as we diversify our compute. I expect AMD to be an important partner for many years to come,” said Mark Zuckerberg, Founder and CEO of Meta.

“This multi-year, multi-generation collaboration across Instinct GPUs, EPYC CPUs and rack-scale AI systems aligns our roadmaps to deliver high-performance, energy-efficient infrastructure optimised for Meta’s workloads, accelerating one of the industry’s largest AI deployments and placing AMD at the center of the global AI buildout,” said Lisa Su, Chair and CEO of AMD.

The partnership will see Meta deploy up to 6GW of AMD Instinct GPUs over multiple generations, tightly aligning roadmaps across silicon, systems and software.

The first wave will use a custom AMD Instinct GPU based on the MI450 architecture, built on the AMD Helios rack‑scale design co‑developed by AMD and Meta through the Open Compute Project. Shipments for the first gigawatt are slated to begin in the second half of 2026, paired with 6th Gen AMD EPYC “Venice” CPUs and running the ROCm software stack.

Meta will act as a lead customer for the 6th Gen EPYC “Venice” and a next‑generation EPYC chip codenamed “Verano,” designed with workload‑specific optimisations to improve performance‑per‑dollar‑per‑watt for AI data centres.