For years, hardware enthusiasts have pushed their graphics cards to squeeze out the best possible scores in benchmark tests. It’s no different at the top end, where GPU-accelerated AI systems are tested on their ability to run AI workloads.
According to the latest MLPerf results, seven companies submitted around a dozen commercially available systems based on NVIDIA A100 Tensor Core GPUs for testing against the industry benchmarks. Dell, Fujitsu, Gigabyte, Inspur, Lenovo, Nettrix, and Supermicro all delivered strong results.
NVIDIA and its partners were the only submitters to run all eight workloads in the latest round of benchmarks, delivering up to 3.5x more performance than last year.
In practical terms, that means these GPU-accelerated systems train AI models faster than competing systems.
Formed in May 2018, MLPerf is an industry benchmarking group backed by industry leaders such as Alibaba, Arm, Baidu, Google, Intel, and NVIDIA.
The benchmarks are based on popular AI workloads and scenarios, covering computer vision, natural-language processing, recommendation systems, and reinforcement learning. The training benchmarks measure the time it takes to train a new AI model to a target quality.
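MLPerf’s actual harness is far more involved, but a minimal sketch conveys the time-to-train idea: run training until the model hits a quality target, and report the wall-clock time. The `time_to_train` helper and the toy logistic-regression model below are illustrative assumptions, not MLPerf code.

```python
import time
import numpy as np

def time_to_train(train_step, evaluate, target_accuracy, max_epochs=100):
    """Wall-clock time until the model reaches target accuracy.

    train_step and evaluate are caller-supplied callables; this mirrors
    the *idea* of a time-to-train metric, not MLPerf's real harness.
    """
    start = time.perf_counter()
    for epoch in range(max_epochs):
        train_step()
        if evaluate() >= target_accuracy:
            return time.perf_counter() - start, epoch + 1
    raise RuntimeError("target accuracy not reached")

# Toy workload: logistic regression on synthetic, linearly separable data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = np.zeros(2)

def train_step(lr=0.1):
    global w
    preds = 1 / (1 + np.exp(-X @ w))       # sigmoid predictions
    w -= lr * X.T @ (preds - y) / len(y)   # one gradient-descent step

def evaluate():
    return float((((X @ w) > 0) == y.astype(bool)).mean())

elapsed, epochs = time_to_train(train_step, evaluate, target_accuracy=0.99)
print(f"Reached 99% accuracy in {epochs} epoch(s) ({elapsed:.4f}s)")
```

The real benchmarks apply the same principle to full-scale models such as image classifiers and language models, which is why faster hardware translates directly into better scores.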
MLPerf’s ratings give users an indication of the performance of AI systems, helping them make more informed buying decisions.