Take that, Intel! GPU giant NVIDIA has fired back at Intel, which reported that its Xeon Scalable processors outperform NVIDIA GPUs on ResNet-50 deep learning inference. According to NVIDIA, it took the combined power of two power-hungry, high-end CPUs (costing US$50,000-$100,000) to marginally outstrip the performance of a single mainstream NVIDIA V100 GPU.
“Intel’s performance comparison also highlighted the clear advantage of NVIDIA T4 GPUs, which are built for inference. When compared to a single highest-end CPU, they’re not only faster but also 7x more energy-efficient and an order of magnitude more cost-efficient,” noted the blog post.
Inference performance is crucial for AI-powered services. While Intel’s Cascade Lake CPUs include new instructions that improve inference, they cannot compete with NVIDIA’s deep learning-optimised Tensor Core GPUs.
The GPUs are used for inference by consumer internet companies such as Microsoft, PayPal, Pinterest, Snap, and Twitter.