The numbers are insane: one powerful AI supercomputer delivering 4 exaflops of AI performance, driven by 6,159 NVIDIA A100 Tensor Core GPUs.
That’s Perlmutter, the new fastest AI supercomputer, named after astrophysicist Saul Perlmutter. It outpaces the 3.3-exaflop Summit at the US Department of Energy’s Oak Ridge National Laboratory.
And that’s just the first phase. Performance will increase further when the second phase comes online later this year at the system’s home, Lawrence Berkeley National Laboratory.
Dedicated today by the National Energy Research Scientific Computing Center (NERSC), Perlmutter will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources and much more.
“AI for science is a growth area at the US Department of Energy, where proofs of concept are moving into production use cases in areas like particle physics, materials science and bioenergy,” said Wahid Bhimji, acting lead of NERSC’s data and analytics services group.
“People are exploring larger and larger neural-network models and there’s a demand for access to more powerful resources, so Perlmutter with its A100 GPUs, all-flash file system and streaming data capabilities is well timed to meet this need for AI,” he added.
Perlmutter will feature the NVIDIA HPC SDK (Software Development Kit), which includes NVIDIA compilers, libraries and software tools essential to maximizing HPC developer productivity, application performance and portability.
More than 7,000 researchers will work on over two dozen applications on the new system to advance science in astrophysics, climate science and more.
“Perlmutter’s ability to fuse AI and high performance computing will lead to breakthroughs in a broad range of fields from materials science and quantum physics to climate projections, biological research and more,” said Jensen Huang, Founder and CEO of NVIDIA.