Tokyo Institute of Technology plans to create Japan’s fastest AI supercomputer, which will deliver more than twice the performance of its predecessor and place it among the world’s 10 fastest systems.
Called Tsubame 3.0, the system will use Pascal-based NVIDIA P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double-precision performance.
Tsubame 3.0 will excel in AI computation, with more than 47 PFLOPS of AI horsepower. Run alongside Tsubame 2.5, it is expected to deliver a combined 64.3 PFLOPS, making it Japan’s highest-performing AI supercomputer.
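The combined figure implies how much Tsubame 2.5 contributes; a quick sketch of that arithmetic, using the article's stated numbers (the ~17.3 PFLOPS for Tsubame 2.5 is inferred by subtraction, not stated in the article):

```python
# Figures from the article (PFLOPS of AI performance).
tsubame3_ai_pflops = 47.0    # "more than 47 PFLOPS" for Tsubame 3.0
combined_ai_pflops = 64.3    # expected total when run with Tsubame 2.5

# Tsubame 2.5's implied contribution (an inference, not a stated spec).
tsubame25_ai_pflops = combined_ai_pflops - tsubame3_ai_pflops
print(f"Tsubame 2.5 contribution: ~{tsubame25_ai_pflops:.1f} PFLOPS")
```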
“NVIDIA’s broad AI ecosystem, including thousands of deep learning and inference applications, will enable Tokyo Tech to begin training Tsubame 3.0 immediately to help us more quickly solve some of the world’s once unsolvable problems,” said Professor Satoshi Matsuoka of Tokyo Tech.