Meta is building its AI infrastructure under a new long-term partnership with NVIDIA that will see the social media giant deploy millions of NVIDIA Blackwell and Rubin GPUs across hyperscale data centres built for AI training and inference workloads.
The multi-year, multi-generational agreement spans on‑premises and cloud environments and establishes NVIDIA as a core supplier for Meta’s future AI platforms.
Under the deal, Meta will roll out NVIDIA’s Arm-based Grace CPUs alongside NVIDIA’s latest GB300-based systems to create a unified architecture that stretches from Meta’s own data centres to NVIDIA Cloud Partner facilities.
Meta is also standardising on NVIDIA’s Spectrum‑X Ethernet switches for its Facebook Open Switching System (FBOSS) platform to deliver predictable low-latency networking, higher utilisation, and improved power efficiency for AI-scale clusters.
“No one deploys AI at Meta’s scale — integrating frontier research with industrial-scale infrastructure to power the world’s largest personalisation and recommendation systems for billions of users. Through deep codesign across CPUs, GPUs, networking and software, we are bringing the full NVIDIA platform to Meta’s researchers and engineers as they build the foundation for the next AI frontier,” said Jensen Huang, Founder and CEO of NVIDIA.
“We’re excited to expand our partnership with NVIDIA to build leading-edge clusters using their Vera Rubin platform to deliver personal superintelligence to everyone in the world,” said Mark Zuckerberg, Founder and CEO of Meta.
Meta has also adopted NVIDIA Confidential Computing technologies to preserve user privacy while enabling advanced AI capabilities on sensitive data.
The deal is central to Meta’s plans to deliver more capable and energy‑efficient AI services over the coming years, with the company expecting substantial performance‑per‑watt gains in its data centres as it scales out AI infrastructure.
The companies are already collaborating on future Vera CPUs, with the potential for large-scale deployment around 2027.
