Tag: deep learning

NVIDIA AI Conference keynotes: Transforming industries, creating AI infrastructure

The first NVIDIA AI Conference in Sydney on September 4 will kick off with two keynote addresses. Marc Hamilton, Vice President of Solutions Architecture and Engineering at NVIDIA, will speak on Transforming Industries With AI. Jason Humphrey, Head of Retail Risk at ANZ Bank, will then speak on Creating the Infrastructure to Undertake Deep Learning.

Making autonomous vehicles more perceptive

Training neural networks to read human behaviour for safe self-driving.

Perceptive Automata, a startup that began at Harvard University, is leveraging deep learning to build human intuition into autonomous vehicles. The technology observes the body language of pedestrians and reacts accordingly, enabling safer self-driving.

NVIDIA expands DLI offerings

Booz Allen Hamilton

The need for deep learning skills is growing as more companies and industries adopt the technology. Launched a little more than a year ago, NVIDIA’s Deep Learning Institute (DLI) has already trained tens of thousands of students, developers and data scientists.

And the company is expanding its DLI offerings with:

  • New partnerships: Teaming up with Booz Allen Hamilton and deeplearning.ai to train thousands of students, developers and government specialists in artificial intelligence (AI).
  • New University Ambassador Program: Instructors worldwide can teach students critical job skills and practical applications of AI at no cost.
  • New courses: Adding courses that teach domain-specific applications of deep learning for finance, natural language processing, robotics, video analytics, and self-driving cars.

Singapore’s AI agenda gets double boost!

NVIDIA Fellow Dr David Kirk delivers the keynote address at the NVIDIA AI Conference.

Singapore’s aim to be an artificial intelligence (AI) hub has been boosted with two initiatives — the setting up of a shared AI platform for researchers and the awarding of scholarships to develop AI talents.

At the NVIDIA AI Conference in Singapore yesterday, NVIDIA and Singapore’s National Supercomputing Centre (NSCC) agreed to establish a platform to bolster AI capabilities among Singapore’s academic, research and industry stakeholders, in support of AI Singapore (AISG), a national programme set up in May to drive AI adoption, research and innovation in Singapore.

Called AI.Platform@NSCC, it will provide AI training, technical expertise and computing services to AISG, which brings together all Singapore-based research and tertiary institutions, including the National University of Singapore (NUS), Nanyang Technological University (NTU), Singapore University of Technology and Design (SUTD), Singapore Management University (SMU), as well as research institutions in the Agency for Science, Technology and Research (A*STAR).

Deeper into AI

The keynote address at Google I/O yesterday showed that Google is much more than just a search company. It is becoming an artificial intelligence (AI) company, and it is specifically using deep learning to help in many areas of everyday life.

Here are a few examples, as shared on Google’s blog post:

Google Assistant can help answer your questions and find information—but it can also help you get all kinds of useful things done. Today we’re adding a few more:

  • Schedule new calendar appointments and create reminders. Starting today on Google Home, you can schedule appointments and soon you’ll also be able to add reminders. Since it’s the same Google Assistant across devices, you’ll be able to get a reminder at home or on the go.
  • Make your home smarter. We now have 70+ smart home partners supporting the Google Assistant across Google Home and Android phones, including August locks, TP-Link, Honeywell, Logitech, and LG.

Finally, the Big Bang for AI!

I am AI opening video at GTC 2017 keynote.

Artificial intelligence (AI) is not new. In fact, it has had many false starts over the past 60 years, and the term went into hibernation for long stretches.

Research into AI began at Dartmouth College way back in 1956, and the field was repeatedly hailed as the next frontier in the 1980s, when mainframe computers ruled and supercomputers were a ginormous investment that very few could afford.

Despite the research put in over the years, the technology never quite took off and fell flat in many instances.

NVIDIA to train 100,000 deep learning developers this year

Greg Estes of NVIDIA (left) addressing the global media at a press conference at GTC.

Interest in deep learning is growing so strongly that NVIDIA expects to train 100,000 developers this year — that’s 10 times more than last year — through its Deep Learning Institute (DLI).

According to research firm IDC, 80 percent of all applications will have an artificial intelligence (AI) component by 2020.

Greg Estes, Vice President of Developer Programs at NVIDIA, noted that there is a hunger for deep learning training. He cited the example of a DLI training at the Indian Institute of Technology (IIT), where people arrived at 7.30am to try to sign up for a fully subscribed course.

Caffe2, anyone?

Facebook is developing new artificial intelligence (AI) systems to help manage the vast amounts of information — such as text, images and videos — generated daily, so people can better understand the world and communicate more effectively even as the volume of information grows.

It has worked with NVIDIA on Caffe2, a new AI deep learning framework that allows developers and researchers to create large-scale distributed training scenarios and build machine learning applications for edge devices.

Providing AI-powered services on mobile is a complex data processing task that must happen within the blink of an eye. Increasingly, the processing of lightning-fast AI services requires GPU-accelerated computing, such as that offered by Facebook’s Big Basin servers, as well as highly optimised deep learning software that can leverage the full capability of the accelerated hardware.

NVIDIA springs Titan X surprise

Just when we thought NVIDIA was done with the Pascal range of GPUs after the benchmark release of the GeForce GTX 1060 earlier this week, NVIDIA CEO Jen-Hsun Huang pulled off a major surprise by announcing the new NVIDIA Titan X at an artificial intelligence meeting at Stanford University.

The new NVIDIA Titan X, based on the new Pascal GPU architecture, is the biggest GPU ever built, with a record-breaking 3,584 CUDA cores.

Here are the numbers that matter:

  • 11 TFLOPS FP32
  • 44 TOPS INT8 (new deep learning inferencing instruction)
  • 12B transistors
  • 3,584 CUDA cores at 1.53GHz (versus 3,072 cores at 1.08GHz in previous TITAN X)
  • Up to 60 percent faster performance than previous TITAN X
  • High performance engineering for maximum overclocking
  • 12 GB of GDDR5X memory (480 GB/s)
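The headline figures in the list above follow from the core count and clock. A quick back-of-the-envelope check, assuming one fused multiply-add (two floating-point ops) per CUDA core per cycle for FP32, eight integer ops per core per cycle for the new INT8 dot-product instruction, and the commonly quoted 384-bit bus at 10 Gbps per pin for the GDDR5X memory:

```python
# Back-of-the-envelope check of the Titan X spec numbers.
cores = 3584          # CUDA cores
clock_ghz = 1.53      # boost clock

# FP32: one FMA (2 FLOPs) per core per cycle.
fp32_tflops = cores * clock_ghz * 2 / 1000
print(f"FP32: {fp32_tflops:.1f} TFLOPS")   # ~11.0, matching the quoted 11 TFLOPS

# INT8: the dp4a instruction does 4 multiplies + 4 adds = 8 ops per cycle.
int8_tops = cores * clock_ghz * 8 / 1000
print(f"INT8: {int8_tops:.1f} TOPS")       # ~43.9, matching the quoted 44 TOPS

# Memory bandwidth: 384-bit bus at 10 Gbps per pin.
bandwidth_gbs = 10 * 384 / 8
print(f"Bandwidth: {bandwidth_gbs:.0f} GB/s")  # 480 GB/s
```

The same arithmetic applied to the previous Titan X (3,072 cores at 1.08 GHz, about 6.6 TFLOPS FP32) also accounts for the claimed performance gap of up to 60 percent.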

NVIDIA adds AI and supercomputing prowess to driverless cars

The new NVIDIA DRIVE PX 2 is set to give driverless cars a major boost.

Touted as the world’s most powerful engine for in-vehicle artificial intelligence, it allows the automotive industry to use artificial intelligence (AI) to tackle the complexities inherent in autonomous driving. NVIDIA DRIVE PX 2 uses deep learning on NVIDIA’s advanced GPUs to deliver 360-degree situational awareness around the car, determine precisely where the car is, and compute a safe, comfortable trajectory.

“Drivers deal with an infinitely complex world. Modern artificial intelligence and GPU breakthroughs enable us to finally tackle the daunting challenges of self-driving cars,” said Jen-Hsun Huang, Co-founder and CEO of NVIDIA. “NVIDIA’s GPU is central to advances in deep learning and supercomputing. We are leveraging these to create the brain of future autonomous vehicles that will be continuously alert, and eventually achieve superhuman levels of situational awareness. Autonomous cars will bring increased safety, new convenient mobility services and even beautiful urban designs – providing a powerful force for a better future.”

NVIDIA software update doubles performance for deep learning training

NVIDIA has updated its GPU-accelerated deep learning software, doubling deep learning training performance.

With the new software, data scientists and researchers can supercharge their deep learning projects and product development work by creating more accurate neural networks through faster model training and more sophisticated model design.

The NVIDIA DIGITS Deep Learning GPU Training System version 2 (DIGITS 2) and the NVIDIA CUDA Deep Neural Network library version 3 (cuDNN 3) provide significant performance enhancements and new capabilities.