NVIDIA has updated its GPU-accelerated deep learning software, doubling deep learning training performance.
With the new software, data scientists and researchers can supercharge their deep learning projects and product development work by creating more accurate neural networks through faster model training and more sophisticated model design.
The NVIDIA DIGITS Deep Learning GPU Training System version 2 (DIGITS 2) and NVIDIA CUDA Deep Neural Network library version 3 (cuDNN 3) provide significant performance enhancements and new capabilities.
For data scientists, DIGITS 2 now delivers automatic scaling of neural network training across multiple high-performance GPUs. This can double the speed of deep neural network training for image classification compared to a single GPU.
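Scaling training across GPUs of this kind is typically done with data parallelism: each device processes a slice of the training batch, and the per-device gradients are averaged into a single update. The sketch below illustrates why that arithmetic matches single-device training, using a toy linear model and numpy; it is an illustration of the general technique, not DIGITS or Caffe code, and the model, shapes, and "two devices" are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a 4-weight linear model and a synthetic batch of 32 samples.
w = np.zeros(4)
x = rng.normal(size=(32, 4))
y = x @ np.array([1.0, -2.0, 0.5, 3.0])

def gradient(w, xb, yb):
    # Mean-squared-error gradient of the linear model on one batch shard.
    return 2.0 * xb.T @ (xb @ w - yb) / len(xb)

# Data-parallel step: split the batch across two simulated "GPUs",
# compute a gradient on each shard, then average the results.
shards = np.array_split(np.arange(32), 2)
grads = [gradient(w, x[idx], y[idx]) for idx in shards]
multi_gpu_grad = np.mean(grads, axis=0)

# With equal-sized shards, the averaged gradient equals the gradient
# computed over the whole batch on one device -- the update is the same,
# but the heavy per-sample work was split in two.
single_gpu_grad = gradient(w, x, y)
print(np.allclose(multi_gpu_grad, single_gpu_grad))  # → True
```

Because the combined update is mathematically equivalent, the benefit of adding GPUs shows up purely as reduced wall-clock time per batch, which is where the roughly two-times speed-up on two GPUs comes from.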
For deep learning researchers, cuDNN 3 features optimised data storage in GPU memory for the training of larger, more sophisticated neural networks. cuDNN 3 also provides higher performance than cuDNN 2, enabling researchers to train neural networks up to two times faster on a single GPU.
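One way a library can fit larger networks into the same GPU memory is to store tensors at reduced precision; cuDNN 3's optimised data storage reportedly includes support for 16-bit floating-point formats. The numpy sketch below shows the memory arithmetic only (the layer shape is a made-up example, and this is not cuDNN code): halving the bytes per value halves the footprint of every stored activation and weight.

```python
import numpy as np

# Hypothetical activation tensor: batch of 32, 64 feature maps, 112x112 each.
shape = (32, 64, 112, 112)

fp32 = np.zeros(shape, dtype=np.float32)  # 4 bytes per value
fp16 = np.zeros(shape, dtype=np.float16)  # 2 bytes per value

print(fp32.nbytes // (1024 * 1024))  # → 98 (MB at 32-bit precision)
print(fp16.nbytes // (1024 * 1024))  # → 49 (MB at 16-bit precision)
```

For a fixed memory budget, that factor-of-two saving per tensor is what lets researchers train larger or more sophisticated networks on the same card.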
The new cuDNN 3 library is expected to be integrated into forthcoming versions of the deep learning frameworks Caffe, Minerva, Theano and Torch, which are widely used to train deep neural networks.
The DIGITS 2 Preview release is available today as a free download for NVIDIA registered developers. To learn more or download, visit the DIGITS website. The cuDNN 3 library is expected to be available in major deep learning frameworks in the coming months. To learn more visit the cuDNN website.