Think artificial intelligence (AI) and the advent of powerful thinking machines, and images of Arnold Schwarzenegger in The Terminator come to mind.
NVIDIA has pulled yet another trick out of its always-filled hat of technology goodies with the launch of Volta, the world’s most powerful GPU computing architecture. At his keynote address at GTC in San Jose, NVIDIA CEO Jensen Huang dubbed it “the next level of computer projects”.
Volta is designed to drive the next wave of advances in artificial intelligence (AI) and high performance computing.
The first Volta-based processor is the NVIDIA Tesla V100 data centre GPU, which brings extraordinary speed and scalability for AI inferencing and training, as well as for accelerating HPC and graphics workloads.
Facebook is developing new artificial intelligence (AI) systems to help manage the vast amount of information — such as text, images and videos — generated daily so people can better understand the world and communicate more effectively, even as the volume of information increases.
It has worked with NVIDIA on Caffe2, a new AI deep learning framework that allows developers and researchers to create large-scale distributed training scenarios and build machine learning applications for edge devices.
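To give a sense of what working with Caffe2 looks like, here is a minimal sketch of its Python API, defining and running a tiny fully connected network. The blob names and layer sizes are illustrative only, not anything from Facebook's production systems:

    import numpy as np
    from caffe2.python import brew, model_helper, workspace

    # Feed a toy batch of 16 random 128-dimensional inputs into the workspace.
    workspace.FeedBlob("data", np.random.randn(16, 128).astype(np.float32))

    # Define a tiny two-op network: a fully connected layer followed by ReLU.
    model = model_helper.ModelHelper(name="toy_net")
    fc1 = brew.fc(model, "data", "fc1", dim_in=128, dim_out=64)
    brew.relu(model, fc1, "pred")

    # Initialise the parameters, then run one forward pass.
    workspace.RunNetOnce(model.param_init_net)
    workspace.CreateNet(model.net)
    workspace.RunNet(model.net)
    print(workspace.FetchBlob("pred").shape)  # -> (16, 64)

Because the network is defined as a graph rather than as framework-bound code, the same definition can also be exported to run on edge devices, which is part of what makes Caffe2 suited to the mobile deployments described above.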
Providing AI-powered services on mobile is a complex data processing task that must happen within the blink of an eye. Increasingly, the processing of lightning-fast AI services requires GPU-accelerated computing, such as that offered by Facebook’s Big Basin servers, as well as highly optimised deep learning software that can leverage the full capability of the accelerated hardware.
First, it was Singapore Management University (SMU). Now two other Singapore universities — Singapore University of Technology and Design (SUTD) and Nanyang Technological University (NTU) — have also deployed the NVIDIA DGX-1 deep learning supercomputer for their research projects on artificial intelligence (AI).
SUTD will use the DGX-1 at the SUTD Brain Lab to further research into machine reasoning and distributed learning. Under a memorandum of understanding signed earlier this month, NVIDIA and SUTD will also set up the NVIDIA-SUTD AI Lab to leverage the power of GPU-accelerated neural networks for researching new theories and algorithms for AI. The agreement also provides internship opportunities for selected students at the lab.
“Computational power is a game changer for AI research, especially in the areas of big data analytics, robotics, machine reasoning and distributed intelligence. The DGX-1 will enable us to perform significantly more experiments in the same period of time, quickening the discovery of new theories and the design of new applications,” said Professors Shaowei Lin and Georgios Piliouras of Engineering Systems and Design, SUTD.
RIKEN, Japan’s largest comprehensive research institution, will have a new supercomputer for deep learning research in April. Built by Fujitsu using 24 NVIDIA DGX-1 AI systems, the new machine will accelerate the application of artificial intelligence (AI) to solve complex challenges in healthcare, manufacturing and public safety.
Conventional high performance computing architectures are proving too costly and inefficient for meeting the needs of AI researchers. That’s why research institutions such as RIKEN are looking for GPU-based solutions that reduce cost and power consumption while increasing performance. Each DGX-1 combines the power of eight NVIDIA Tesla P100 GPUs with an integrated software stack optimised for deep learning frameworks, delivering the performance of 250 conventional x86 servers.
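As a rough, hedged illustration of what that GPU density means in practice — this is a generic data-parallel pattern, not NVIDIA's own software stack — a framework such as PyTorch can split each training batch across all eight GPUs, with the model and data below standing in as placeholders:

    import torch
    import torch.nn as nn

    # Toy model standing in for a real network; sizes are placeholders.
    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

    # DataParallel replicates the model on every visible GPU (eight on a
    # DGX-1), splits each batch between them and gathers the results.
    model = nn.DataParallel(model).cuda()

    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(1024, 512, device="cuda")         # toy inputs
    y = torch.randint(0, 10, (1024,), device="cuda")  # toy labels

    # One standard training step; the multi-GPU split is transparent.
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()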
“We believe that the NVIDIA DGX-1-based system will accelerate real-world implementation of the latest AI technologies as well as research into next-generation AI algorithms. Fujitsu is leveraging its extensive experience in high-performance computing development and AI research to support R&D that utilises this system, contributing to the creation of a future in which AI is used to find solutions to a variety of social issues,” said Arimichi Kunisawa, Head of the Technical Computing Solution Unit at Fujitsu.
Singapore is renowned as a food paradise. And with so many mouth-watering dishes to pick from, sometimes even locals have difficulty identifying a specific dish.
Singapore Management University (SMU) is working on a food artificial intelligence (AI) application that calls on a supercomputer to help recognise local dishes, with the aim of encouraging smart food consumption and a healthy lifestyle.
The project, developed as part of Singapore’s Smart Nation initiative, requires the analysis of a large number of food photos.
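The underlying task is image classification at scale. The project's actual model is not public, but a hedged sketch of dish recognition with an off-the-shelf pretrained CNN might look like the following, where the photo file, and the idea of retraining the final layer on labelled dish photos, are illustrative assumptions:

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Standard ImageNet preprocessing for the pretrained backbone.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # A stock ResNet-50; a dish classifier would replace and retrain the
    # final layer on labelled food photos (not shown here).
    model = models.resnet50(pretrained=True)
    model.eval()

    img = preprocess(Image.open("laksa.jpg")).unsqueeze(0)  # hypothetical photo
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)
    print(probs.argmax(dim=1).item())  # index of the most likely class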
Australia’s federal research agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), has become the first organisation in Asia-Pacific to deploy NVIDIA DGX-1 deep learning supercomputers.
Installed in CSIRO’s Canberra data centre, the two supercomputers will expand the capability of Australian scientists and broaden the science impact possibilities for the nation.
The NVIDIA DGX-1 is the world’s first deep learning supercomputer to meet the computing demands of artificial intelligence. It enables researchers and data scientists to easily harness the power of GPU-accelerated computing to create a new class of computers that learn, see and perceive the world as humans do.
At his opening keynote address at GTC in San Jose, Jen-Hsun Huang, CEO of NVIDIA, made a slew of announcements, including the world’s first deep learning supercomputer built to meet the unlimited computing demands of artificial intelligence (AI).
As the first system designed specifically for deep learning, the NVIDIA DGX-1 comes fully integrated with hardware, deep learning software and development tools for quick, easy deployment. It is a turnkey system that contains a new generation of GPU accelerators, delivering the equivalent throughput of 250 x86 servers.
The DGX-1 deep learning system enables researchers and data scientists to easily harness the power of GPU-accelerated computing to create a new class of intelligent machines that learn, see and perceive the world as humans do. It delivers unprecedented levels of computing power to drive next-generation AI applications, allowing researchers to dramatically reduce the time to train larger, more sophisticated deep neural networks.