Akamai deploys NVIDIA Blackwell GPUs for global AI inference network

Akamai Technologies plans to build one of the world’s most widely distributed AI platforms by deploying thousands of NVIDIA Blackwell GPUs across its distributed cloud infrastructure.

The new system supports AI research and development, fine-tuning of large language models, and post-training optimisation by routing inference workloads to optimised resources across Akamai's network of more than 4,400 locations worldwide. It pairs NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs with BlueField-3 DPUs to deliver predictable, high-performance inference, localised model customisation for data privacy, and refinement of foundation models with proprietary data.

This distributed architecture delivers up to 2.5x lower latency and potential cost savings of up to 86 percent on AI inference compared with traditional hyperscaler setups.

“By distributing inference-optimized compute across our global fabric, Akamai isn’t just adding capacity. We’re providing the scale, at minimal latency, that is required to move AI from the laboratory to the street corner and the hospital bed – where the work happens, where the data lives, and where the ROI is realised,” said Adam Karon, Chief Operating Officer and General Manager of Cloud Technology Group at Akamai.

Akamai's edge network addresses the limitations of centralised data centres, supporting real-world applications such as autonomous delivery, smart grids, surgical robotics, and fraud detection.

The move builds on Akamai’s 2025 Inference Cloud launch and follows strong demand for its initial RTX PRO 6000 Blackwell GPU rollout, with plans for further capacity expansion.
