H2O.ai has announced that its Driverless AI automated machine learning platform and H2O4GPU open source GPU-accelerated machine learning package are now both fully optimised for the latest-generation NVIDIA Volta architecture GPUs — the NVIDIA Tesla V100 — and CUDA 9 software.
NVIDIA’s Volta architecture is leaving quite an impression. According to an NVIDIA press release issued at SC17, the Volta-based NVIDIA Tesla V100 GPU is available through every major computer maker and has been chosen by every major cloud provider to deliver artificial intelligence (AI) and high performance computing.
The cloud infrastructure services market is continuing to grow strongly, up 47 percent year on year in Q2 to reach US$14 billion, according to Canalys. Growth was driven by demand for primary cloud infrastructure services, such as on-demand computing and storage, across all customer segments and industries.
However, future growth is expected to be fueled by customers using the artificial intelligence (AI) platforms cloud service providers are building to develop new applications, processes, services, and user experiences.
Amazon Web Services (AWS) maintained its leadership position, growing 42 percent on an annual basis and accounting for more than 30 percent of total spend. Its growth rate was lower than those of its main rivals, Microsoft (up 97 percent) and Google (up 92 percent), but higher than fourth-placed IBM’s (up 23 percent). Overall, the top four cloud service providers represented 55 percent of the cloud infrastructure services market, which includes IaaS and PaaS.
NVIDIA has pulled yet another trick out of its always-filled hat of technology goodies with the launch of Volta, the world’s most powerful GPU computing architecture. At his keynote address at GTC in San Jose, NVIDIA CEO Jensen Huang dubbed it “the next level of computer projects”.
Volta is designed to drive the next wave of advancement in artificial intelligence (AI) and high performance computing.
The first Volta-based processor is the NVIDIA Tesla V100 data centre GPU, which brings extraordinary speed and scalability for AI inference and training, as well as for accelerating HPC and graphics workloads.