The first NVIDIA AI Conference in Sydney on September 4 will kick off with two keynote addresses. Marc Hamilton, Vice President of Solutions Architecture and Engineering at NVIDIA, will speak on Transforming Industries With AI. Jason Humphrey, Head of Retail Risk at ANZ Bank, will then speak on Creating the Infrastructure to Undertake Deep Learning.
Perceptive Automata, a startup that had its beginnings at Harvard University, is leveraging deep learning to give autonomous vehicles human intuition. The technology observes the body language of pedestrians and reacts accordingly, enabling safer self-driving.
A global line-up of artificial intelligence (AI) experts will be heading to Sydney to speak at the NVIDIA AI Conference. Researchers and developers will also get training from the NVIDIA Deep Learning Institute (DLI) during the event.
High-performance server maker AMAX has launched the DL-E48A, a reconfigurable single/dual root high-density GPU platform designed for artificial intelligence (AI) training and inference.
On June 13, Intel had the GPU world in a flurry when it tweeted “Intel’s first GPU coming in 2020”. The media were quick to post stories of this incoming new GPU, which would add interesting competition to a market dominated by NVIDIA with AMD a distant second.
Amazon has announced that the AWS DeepLens video camera for running deep learning models is now on sale.
NVIDIA researchers have demonstrated how robots can be trained to observe and repeat human actions — a “first of its kind” capability powered by deep learning.
With its ability to crunch massive data and make predictions, it was only a matter of time before artificial intelligence (AI) would be applied to gambling — if it had not been already.
Where’s a taxi when you need one? That’s the bane of passengers from around the world, except possibly in Taipei where taxis somehow seem to be just where you need them.
Arm is taking its recently announced Project Trillium a step further through a collaboration with NVIDIA. The partners will bring the open-source NVIDIA Deep Learning Accelerator (NVDLA) architecture into the Project Trillium platform for machine learning.
Adobe and NVIDIA have formed a strategic partnership to rapidly enhance their industry-leading artificial intelligence (AI) and deep learning technologies.
NVIDIA CEO Jensen Huang dubbed it the “world’s biggest GPU”. And he certainly wasn’t kidding, as the NVIDIA DGX-2 is a massive 350-pounder that delivers an amazing two petaflops of computational power.
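The two-petaflop figure lines up with the DGX-2’s complement of 16 Tesla V100 GPUs; the roughly 125 TFLOPS of FP16 Tensor Core throughput per GPU used below is our assumed spec figure, not stated in the announcement:

```python
# Sanity check on the "two petaflops" claim for the DGX-2.
# Assumption: 16 Tesla V100 GPUs at ~125 TFLOPS FP16 Tensor Core throughput each.
num_gpus = 16
tflops_per_gpu = 125  # assumed per-GPU Tensor Core spec
total_pflops = num_gpus * tflops_per_gpu / 1000
print(total_pflops)  # 2.0 petaflops
```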
Robotics is no longer just a hobby but serious stuff. NVIDIA’s Deep Learning Institute is now working with online learning provider Udacity to develop a programme that will immerse students in the field of robotics, giving them career-ready skills.
The need for deep learning skills is increasing as more and more companies and industries hop on the bandwagon. Launched a little more than a year ago, NVIDIA’s Deep Learning Institute (DLI) has already trained tens of thousands of students, developers and data scientists.
And the company is expanding its DLI offerings with:
- New partnerships: Teaming up with Booz Allen Hamilton and deeplearning.ai to train thousands of students, developers and government specialists in artificial intelligence (AI).
- New University Ambassador Program: Instructors worldwide can teach students critical job skills and practical applications of AI at no cost.
- New courses: More courses are added to teach domain-specific applications of deep learning for finance, natural language processing, robotics, video analytics, and self-driving cars.
Singapore’s aim to be an artificial intelligence (AI) hub has been boosted with two initiatives — the setting up of a shared AI platform for researchers and the awarding of scholarships to develop AI talent.
At the NVIDIA AI Conference in Singapore yesterday, NVIDIA and Singapore’s National Supercomputing Centre (NSCC) agreed to establish a platform to bolster AI capabilities among Singapore’s academic, research and industry stakeholders, in support of AI Singapore (AISG), a national programme set up in May to drive AI adoption, research and innovation in Singapore.
Called AI.Platform@NSCC, it will provide AI training, technical expertise and computing services to AISG, which brings together all Singapore-based research and tertiary institutions, including the National University of Singapore (NUS), Nanyang Technological University (NTU), Singapore University of Technology and Design (SUTD), Singapore Management University (SMU), as well as research institutions in the Agency for Science, Technology and Research (A*STAR).
The keynote address at Google I/O yesterday showed that Google is much more than just a search company. It is becoming an artificial intelligence (AI) company, specifically using deep learning to help in many areas of everyday life.
Here are some as shared on Google’s blog post:
Google Assistant can help answer your questions and find information—but it can also help you get all kinds of useful things done. Today we’re adding a few more:
- Schedule new calendar appointments and create reminders. Starting today on Google Home, you can schedule appointments and soon you’ll also be able to add reminders. Since it’s the same Google Assistant across devices, you’ll be able to get a reminder at home or on the go.
- Make your home smarter. We now have 70+ smart home partners supporting the Google Assistant across Google Home and Android phones, including August locks, TP-Link, Honeywell, Logitech, and LG.
Hundreds of thousands of computers in 150 countries have been hit by the WannaCry ransomware. While users are scampering around trying to fix their computers, the top of mind question is whether this could have […]
Artificial intelligence (AI) is not new. In fact, it has had so many false starts over the past 60 years that the term went into hibernation for a long time.
Research into AI began way back at Dartmouth College in 1956, and AI was repeatedly hailed as the next frontier in the 1980s, when mainframe computers ruled and supercomputers were a ginormous investment that very few could afford.
Despite the research put in over the years, the technology never quite took off and fell flat in many instances.
Interest in deep learning is growing so strongly that NVIDIA expects to train 100,000 developers this year — that’s 10 times more than last year — through its Deep Learning Institute (DLI).
According to research firm IDC, 80 percent of all applications will have an artificial intelligence (AI) component by 2020.
Greg Estes, Vice President of Developer Programs at NVIDIA, noted that there is a hunger for deep learning training. He cited the example of a DLI training session at the Indian Institute of Technology (IIT), where people arrived at 7.30am to try to sign up for a fully subscribed course.
Facebook is developing new artificial intelligence (AI) systems to help manage the vast amount of information — such as text, images and videos — generated daily so people can better understand the world and communicate more effectively, even as the volume of information increases.
It has worked with NVIDIA on Caffe2, a new AI deep learning framework that allows developers and researchers to create large-scale distributed training scenarios and build machine learning applications for edge devices.
Providing AI-powered services on mobile is a complex data processing task that must happen within the blink of an eye. Increasingly, the processing of lightning-fast AI services requires GPU-accelerated computing, such as that offered by Facebook’s Big Basin servers, as well as highly optimised deep learning software that can leverage the full capability of the accelerated hardware.
Just when we thought NVIDIA was done with the Pascal range of GPUs with the benchmark release of the GeForce GTX 1060 early this week, NVIDIA CEO Jen-Hsun Huang pulled off a major surprise with the announcement of the new NVIDIA Titan X at an artificial intelligence meeting at Stanford University.
The new NVIDIA Titan X, based on the new Pascal GPU architecture, is the biggest GPU ever built with a record-breaking 3,584 CUDA cores.
Here are the numbers that matter:
- 11 TFLOPS FP32
- 44 TOPS INT8 (new deep learning inferencing instruction)
- 12B transistors
- 3,584 CUDA cores at 1.53GHz (versus 3,072 cores at 1.08GHz in previous TITAN X)
- Up to 60 percent faster performance than previous TITAN X
- High performance engineering for maximum overclocking
- 12 GB of GDDR5X memory (480 GB/s)
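Two of the headline numbers above can be cross-checked with simple arithmetic. A minimal sketch, assuming the usual 2 FLOPs per core per cycle (fused multiply-add) for the FP32 figure, and a 384-bit GDDR5X bus at 10 Gb/s per pin for the bandwidth figure — the bus width and pin rate are our assumptions, not stated in the spec list:

```python
# Back-of-the-envelope check of the Titan X (Pascal) numbers above.

# Peak FP32: cores x clock x 2 FLOPs per core per cycle (FMA).
cores = 3584
clock_hz = 1.53e9
peak_fp32_tflops = cores * clock_hz * 2 / 1e12
print(round(peak_fp32_tflops, 1))  # ~11.0 TFLOPS, matching the quoted figure

# Memory bandwidth: assumed 384-bit bus at 10 Gb/s per pin.
bus_width_bits = 384
gbps_per_pin = 10
bandwidth_gb_s = bus_width_bits * gbps_per_pin / 8
print(bandwidth_gb_s)  # 480.0 GB/s, matching the quoted figure
```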
The new NVIDIA DRIVE PX 2 is set to give driverless cars a major boost.
Touted as the world’s most powerful engine for in-vehicle artificial intelligence, it allows the automotive industry to use artificial intelligence (AI) to tackle the complexities inherent in autonomous driving. The NVIDIA DRIVE PX 2 utilises deep learning on NVIDIA’s advanced GPUs for 360-degree situational awareness around the car, to determine precisely where the car is and to compute a safe, comfortable trajectory.
“Drivers deal with an infinitely complex world. Modern artificial intelligence and GPU breakthroughs enable us to finally tackle the daunting challenges of self-driving cars,” said Jen-Hsun Huang, Co-founder and CEO of NVIDIA. “NVIDIA’s GPU is central to advances in deep learning and supercomputing. We are leveraging these to create the brain of future autonomous vehicles that will be continuously alert, and eventually achieve superhuman levels of situational awareness. Autonomous cars will bring increased safety, new convenient mobility services and even beautiful urban designs – providing a powerful force for a better future.”
NVIDIA is teaming up with China high performance computing firm Sugon and the Institute of Computing Technology, of the Chinese Academy of Sciences to jointly operate a deep learning laboratory. The laboratory will promote deep learning […]
NVIDIA has updated its GPU-accelerated deep learning software that will double deep learning training performance.
With the new software, data scientists and researchers can supercharge their deep learning projects and product development work by creating more accurate neural networks through faster model training and more sophisticated model design.
The NVIDIA DIGITS Deep Learning GPU Training System version 2 (DIGITS 2) and NVIDIA CUDA Deep Neural Network library version 3 (cuDNN 3) provide significant performance enhancements and new capabilities.