At the inaugural Audi Summit in Spain, Audi revealed that its flagship 2018 A8 features a multitude of high-tech wizardry powered by NVIDIA.
“The car of the future will make its occupants’ life easier with the help of artificial intelligence (AI),” declared Rupert Stadler, Chairman of the Board of Audi, as he introduced such A8 features as Audi AI Traffic Jam Pilot, Remote Park Pilot, Natural Voice Control and Swarm Intelligence.
The A8 is packed with NVIDIA-powered systems, including revolutionary new user interfaces, a new infotainment system, a new virtual cockpit, and new rear-seat entertainment options.
NVIDIA is investing in Deep Instinct, an Israel-based startup that uses deep learning to thwart cyber attacks.
Deep Instinct uses a GPU-based neural network and CUDA to achieve 99 percent detection rates, compared with about 80 percent detection from conventional cybersecurity software. Its software can automatically detect and defeat the most advanced cyber attacks.
“Deep Instinct is an emerging leader in applying GPU-powered AI through deep learning to address cybersecurity, a field ripe for disruption as enterprise customers migrate away from traditional solutions. We’re excited to work together with Deep Instinct to advance this important field,” said Jeff Herbst, Vice President of Business Development of NVIDIA.
NVIDIA and Baidu have teamed up to bring artificial intelligence (AI) technology to cloud computing, self-driving vehicles and AI home assistants.
Baidu will deploy NVIDIA HGX architecture with Tesla Volta V100 and Tesla P4 GPU accelerators for AI training and inference in its data centres. Combined with Baidu’s PaddlePaddle deep learning framework and NVIDIA’s TensorRT deep learning inference software, researchers and companies can harness state-of-the-art technology to develop products and services with real-time understanding of images, speech, text and video.
To accelerate AI development, the companies will work together to optimise Baidu’s open-source PaddlePaddle deep learning framework on NVIDIA’s Volta GPU architecture.
With all the rage around artificial intelligence (AI) powering driverless cars, the same technology can also be used to keep drivers safe. It can act like a guardian angel and look out for danger.
NVIDIA’s AI Co-Pilot technology uses sensor data from the cameras and microphone inside and outside a car to track the environment around the driver. When the AI notices a problem — such as the driver looking away from an approaching pedestrian — it could sound an alert.
Hundreds of thousands of computers in 150 countries have been hit by the WannaCry ransomware. While users scramble to fix their machines, the question on everyone’s mind is whether this could have been avoided, and whether artificial intelligence could have predicted and prevented such an attack.
At the GPU Technology Conference (GTC) in San Jose, California last week, Israeli firm Deep Instinct won the Most Disruptive Startup category in NVIDIA’s Inception Awards. The firm is the first to use AI to predict and prevent malware attacks.
According to David Eli, Chief Technology Officer (CTO) of Deep Instinct, more than a million new malware threats are released daily, but most antivirus software focuses on known threats.
His firm’s graphics processing unit (GPU)-accelerated deep learning software detects malware in real time. Trained on hundreds of millions of files, the neural network learns to detect more threats and then uses its experience to predict new attacks.
“Winning this prize is the ultimate recognition from the deep learning industry because deep learning and NVIDIA are synonymous,” said Eli.
This is the inaugural year of NVIDIA’s Inception Awards, which recognise startups in three categories – Hottest Emerging, Most Disruptive and Social Innovation. Winners received significant cash prizes and graphics processing unit (GPU) hardware to further accelerate their activities.
Deep Instinct emerged from stealth mode in November 2015; its patent-pending application of deep learning to cybersecurity delivers cutting-edge capabilities: unmatched detection accuracy and real-time prevention.
Leveraging the capabilities of deep learning, Deep Instinct provides instinctive protection on any device, platform, and operating system. Zero-day and APT attacks are detected and blocked immediately, before any harm can come to the enterprise’s endpoints, servers, and mobile devices.
“Deep Instinct relies on end-to-end deep learning for all its advanced malware detection and prevention capabilities. The deep neural network is trained on hundreds of millions of malicious and legitimate files. To handle such large-scale training, Deep Instinct developed its proprietary deep learning infrastructure directly on NVIDIA’s GPU machines,” said Eli.
“The powerful capabilities of NVIDIA GPUs enable us to perform our training at a substantially faster speed compared to CPUs: while training the Deep Instinct brain on NVIDIA’s GPUs takes a little over a single day of training, the same task on CPUs would take more than three months,” he added.
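Taking Eli’s figures at face value, the quoted training times imply a rough speedup factor. This is a minimal sketch, assuming “a little over a single day” rounds to one day and “more than three months” rounds to 90 days (both assumptions, not figures from Deep Instinct):

```python
# Rough sanity check of the quoted GPU-vs-CPU training speedup.
gpu_days = 1.0   # training time on NVIDIA GPUs ("a little over a single day")
cpu_days = 90.0  # training time on CPUs ("more than three months", ~90 days)

speedup = cpu_days / gpu_days
print(f"Implied GPU-over-CPU training speedup: ~{speedup:.0f}x")
```

Under those assumptions the claim works out to roughly a 90x speedup, which is in the range commonly reported for large-scale deep learning training on GPUs versus CPUs.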
“We are thrilled to be recognised by NVIDIA for what we believe is a groundbreaking application of GPUs. Being able to leverage powerful technological capabilities to apply deep learning to cybersecurity empowers enterprises with unprecedented, real-time protection from the next unexpected attack,” said Guy Caspi, Chief Executive Officer of Deep Instinct.
As a sign of its coming of age, the GPU Technology Conference (GTC), held annually in San Jose, California since 2009, is no longer a niche event but one that draws the who’s who of the technology industry.
NVIDIA’s shift of focus from being a visual computing company to an AI company has certainly played a big part in the expansion of the conference. It has attracted around 50 sponsors and 150 exhibitors on top of more than 7,000 participants.
However, it’s not the number of sponsors and exhibitors but rather the quality that is worthy of attention. The line-up of technology firms includes luminaries such as Adobe, Alibaba, Amazon, Autodesk, Cisco, Cray, Dell EMC, DreamWorks Animation, IBM Watson, Lenovo, Microsoft, Samsung Electronics, Verizon Labs, VMware, and Yahoo Research.
With AI being such a prime mover of autonomous vehicles, it is not surprising that leading names in the automotive industry were also present — BMW Group, Chevron, ExxonMobil, Ford Motor Company, General Motors, Honda Research Institute, and Mercedes-Benz R&D North America.
Amid the various booths showcasing VR technologies was one by NASA Ames Research Center, which showed a VR demonstration of Mars.
Artificial intelligence (AI) is not new. In fact, it has had many false starts over the past 60 years, and the term went into hibernation for long stretches.
Research into AI began at Dartmouth College way back in 1956, and the field was repeatedly touted as the next frontier in the 1980s, when mainframe computers ruled and supercomputers were a ginormous investment that very few could afford.
Despite the research put in over the years, the technology never quite took off and fell flat in many instances.
I am attending the GPU Technology Conference (GTC) in San Jose, California this week. It’s a massive conference with more than 7,000 participants from all around the world.
After decades of covering and attending conferences, I have noticed an evolution of sorts. Here are five things I’ve observed and really liked about this GTC.
Rich content: Artificial intelligence is transforming our lives in many areas, such as robotics, intelligent video analytics and driverless cars. IDC has reported that 80 percent of all applications will be using AI by 2020.
Tons of experiential booths: The who’s who of technology featured virtual reality applications across multiple industries, not just gaming.
Smoothness of registration process: It took under a minute for those who had pre-registered to enter their names on a notebook computer and have their tags printed immediately.
Power points everywhere: This is not the Microsoft presentation software but plugs that help to charge devices. In today’s age, we’ve become dependent on our smartphones, notebook computers and other battery-powered devices — all of which require power. Having the power points available across the facilities is an excellent and thoughtful move. And having points that incorporate USB slots and LAN connections? Wow!
Candies, candies, candies: Admit it. There are times that we’ve yielded to the sleeping spirit while at conferences, especially after meals. Having a candy bar helps to provide the energy boost to keep participants awake.
With all these pluses, there is one area for improvement. There were many driverless cars on display. If only we could have gone for a ride in one.
Anyway, kudos to NVIDIA and the event organisers. I love GTC 2017!
Facebook is developing new artificial intelligence (AI) systems to help manage the vast amount of information — such as text, images and videos — generated daily so people can better understand the world and communicate more effectively, even as the volume of information increases.
It has worked with NVIDIA on Caffe2, a new AI deep learning framework that allows developers and researchers to create large-scale distributed training scenarios and build machine learning applications for edge devices.
Providing AI-powered services on mobile is a complex data processing task that must happen within the blink of an eye. Increasingly, the processing of lightning-fast AI services requires GPU-accelerated computing, such as that offered by Facebook’s Big Basin servers, as well as highly optimised deep learning software that can leverage the full capability of the accelerated hardware.
Many cars were on display at CES last week, but perhaps the most significant announcement was the collaboration between Mercedes-Benz and NVIDIA to bring an NVIDIA AI-powered car to market.
NVIDIA founder and CEO Jen-Hsun Huang (right) and Mercedes-Benz Vice President of Digital Vehicle and Mobility Sajjad Khan (left) talked about this new development at the Mercedes Benz Inspiration talk.
“When our teams came together there was instant chemistry, and we share a common vision about how AI can change your driving experience and make it more enjoyable. Mercedes-Benz and NVIDIA share a common vision of the AI car. At this point it is clear AI will revolutionise the future of automobiles,” said Huang, who pointed out that the collaboration began three years ago.
NVIDIA has unveiled at CES a new Shield TV media streamer, which like its predecessors will not be available in the Asia-Pacific region. However, a separate version of Shield, with custom software optimised for China, will be available later this year.
The new device is an Android open-platform media streamer that is claimed to deliver unmatched experiences in streaming, gaming and AI.
Sporting a sleek new design and now shipping with both a remote and a game controller, the new Shield delivers a rich visual experience with support for 4K HDR and three times the performance of other streamers.
Singapore is renowned as a food paradise. And with so many mouth-watering dishes to pick from, sometimes even locals have difficulty identifying a specific dish.
Singapore Management University (SMU) is working on a food-recognition artificial intelligence (AI) application that calls on a supercomputer to help identify local dishes, with the aim of encouraging smart food consumption and a healthy lifestyle.
The project, developed as part of Singapore’s Smart Nation initiative, requires the analysis of a large number of food photos.
NVIDIA’s new DGX SATURNV supercomputer is ranked the world’s most efficient — and 28th fastest overall — on the latest Top500 list of supercomputers.
Powered by new Tesla P100 GPUs, it delivers 9.46 gigaflops/watt — a 42 percent improvement from the 6.67 gigaflops/watt delivered by the most efficient machine on the Top500 list released last June.
Compared with a supercomputer of similar performance, the Camphor 2 system, which is powered by Xeon Phi Knights Landing, SATURNV is 2.3x more energy efficient. That efficiency is key to building machines capable of reaching exascale speeds — that’s 1 quintillion, or 1 billion billion, floating-point operations per second. Such a machine could help design efficient new combustion engines, model clean-burning fusion reactors, and achieve new breakthroughs in medical research.
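The quoted 42 percent figure follows directly from the two efficiency numbers. A quick check, using only the values stated above:

```python
# Verify the quoted efficiency improvement from the two gigaflops/watt figures.
saturnv = 9.46    # gigaflops/watt, NVIDIA DGX SATURNV (quoted)
prev_best = 6.67  # gigaflops/watt, most efficient machine on last June's list (quoted)

improvement = (saturnv - prev_best) / prev_best * 100
print(f"Efficiency improvement: ~{improvement:.0f}%")  # ~42%
```

The arithmetic gives roughly 41.8 percent, which rounds to the 42 percent improvement cited in the Top500 comparison.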
Just when we thought NVIDIA was done with the Pascal range of GPUs with the benchmark release of the GeForce GTX 1060 earlier this week, NVIDIA CEO Jen-Hsun Huang pulled off a major surprise with the announcement of the new NVIDIA Titan X at an artificial intelligence meeting at Stanford University.
The new NVIDIA Titan X, based on the new Pascal GPU architecture, is the biggest GPU ever built with a record-breaking 3,584 CUDA cores.
Here are the numbers that matter:
11 TFLOPS FP32
44 TOPS INT8 (new deep learning inferencing instruction)
3,584 CUDA cores at 1.53GHz (versus 3,072 cores at 1.08GHz in previous TITAN X)
Up to 60 percent faster performance than previous TITAN X
High performance engineering for maximum overclocking
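The headline 11 TFLOPS figure follows from the core count and clock speed listed above, assuming the standard convention of 2 FLOPs per CUDA core per cycle (one fused multiply-add). A quick back-of-the-envelope check:

```python
# Peak FP32 estimate from the published specs, assuming 2 FLOPs per
# CUDA core per cycle (one fused multiply-add per clock).
new_cores, new_clock_ghz = 3584, 1.53  # new TITAN X (quoted)
old_cores, old_clock_ghz = 3072, 1.08  # previous TITAN X (quoted)
flops_per_cycle = 2

new_tflops = new_cores * new_clock_ghz * flops_per_cycle / 1000
old_tflops = old_cores * old_clock_ghz * flops_per_cycle / 1000
print(f"New TITAN X: ~{new_tflops:.1f} TFLOPS FP32")  # ~11.0
print(f"Old TITAN X: ~{old_tflops:.1f} TFLOPS FP32")  # ~6.6
print(f"Theoretical ratio: ~{new_tflops / old_tflops:.2f}x")
```

The theoretical peak comes out at about 11 TFLOPS, matching the published figure, and the roughly 1.65x theoretical ratio is consistent with the “up to 60 percent faster” claim, which presumably reflects measured performance rather than peak throughput.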
Virtual reality (VR) was the talk of the town at Computex in Taipei a couple of weeks ago.
At the NVIDIA Experience Centre in Grand Hyatt Taipei, a never-ending queue of people waited for the opportunity to check out VR demos powered by the newly-launched NVIDIA GeForce GTX 1080 GPUs.
In the halls — both at TWTC and Nangang — many exhibitors were out in force with their own flavours of VR. At one booth, a visitor put on a harness to try virtual parachuting while in several others, they checked out virtual Grand Prix racing and other demos.
After weeks, if not months, of rumours and false predictions, the announcement was finally made: NVIDIA revealed the much-awaited Pascal-based NVIDIA GeForce GTX 1080.
What the rumours got right was the new name of the card. What they missed was the launch date, which NVIDIA kept close to its chest until CEO Jen-Hsun Huang made the announcement at a specially-gathered press event in Austin on Friday evening (Saturday morning, Singapore time).
According to Huang, NVIDIA spent billions in research and development of Pascal and the new GPU.
NVIDIA has named Raymond Teh as Vice President of Sales and Marketing for the Asia Pacific region.
An IT veteran with more than 30 years of experience, Teh will lead the company’s APAC field sales efforts. He succeeds Francis Yu, who joined NVIDIA 12 years ago and is retiring later this year.
Teh most recently served as vice president, Asia-Pacific, for Vodafone Global Enterprise. He previously held senior regional management roles at BT, GXS, SAP, and i2 Technologies. He holds a BSc in computer science and a Master’s in Statistics from the University of New South Wales, Australia.
NVIDIA has introduced the NVIDIA Tesla P100 GPU, an advanced hyperscale data centre accelerator enabling a new class of servers that can deliver the performance of hundreds of CPU server nodes.
Today’s data centres process large numbers of transactional workloads, such as web services. But they are inefficient at next-generation artificial intelligence and scientific applications, which require ultra-efficient, lightning-fast server nodes.
Based on the new NVIDIA Pascal GPU architecture, the Tesla P100 provides the performance and efficiency needed to power the computationally demanding applications. It delivers over a 12x increase in neural network training performance compared with a previous-generation NVIDIA Maxwell-based solution.
At his opening keynote address at GTC in San Jose, Jen-Hsun Huang, CEO of NVIDIA, made a slew of announcements, including the world’s first deep learning supercomputer to meet the unlimited computing demands of artificial intelligence (AI).
As the first system designed specifically for deep learning, the NVIDIA DGX-1 comes fully integrated with hardware, deep learning software and development tools for quick, easy deployment. It is a turnkey system that contains a new generation of GPU accelerators, delivering the equivalent throughput of 250 x86 servers.
The DGX-1 deep learning system enables researchers and data scientists to easily harness the power of GPU-accelerated computing to create a new class of intelligent machines that learn, see and perceive the world as humans do. It delivers unprecedented levels of computing power to drive next-generation AI applications, allowing researchers to dramatically reduce the time to train larger, more sophisticated deep neural networks.
Gamers in northern Thailand, specifically Chiangmai, will get to enjoy a premium gaming experience with the opening of Xenith iCafe in the city.
All its 100 PCs are equipped with NVIDIA GeForce GTX 960 GPUs, which deliver advanced performance, power efficiency, and realistic gameplay based on the latest NVIDIA Maxwell technology. They also come with high quality Razer gaming gear – mouse, keyboard and headset – all tailored to give gamers the best experience.
Situated near Chiangmai University, the modern and trendy Xenith iCafe features two gaming zones – a comfort zone and a VIP zone – to cater to gamers’ needs as well as to host gaming events. This is a new trend in the iCafe market, where owners can better balance cost, performance and functionality while delivering the gaming experience their customers demand.
The new NVIDIA DRIVE PX 2 is set to give driverless cars a major boost.
Touted as the world’s most powerful engine for in-vehicle artificial intelligence (AI), it allows the automotive industry to use AI to tackle the complexities inherent in autonomous driving. NVIDIA DRIVE PX 2 utilises deep learning on NVIDIA’s advanced GPUs for 360-degree situational awareness around the car, to determine precisely where the car is and to compute a safe, comfortable trajectory.
“Drivers deal with an infinitely complex world. Modern artificial intelligence and GPU breakthroughs enable us to finally tackle the daunting challenges of self-driving cars,” said Jen-Hsun Huang, Co-founder and CEO of NVIDIA. “NVIDIA’s GPU is central to advances in deep learning and supercomputing. We are leveraging these to create the brain of future autonomous vehicles that will be continuously alert, and eventually achieve superhuman levels of situational awareness. Autonomous cars will bring increased safety, new convenient mobility services and even beautiful urban designs – providing a powerful force for a better future.”
NVIDIA is paving the way to virtual reality (VR) gaming experiences with the launch of its new VR-ready programme at CES.
Under the programme, PC and notebook makers and add-in card providers will deliver GeForce GTX VR Ready systems and graphics cards that deliver an immersive VR gaming experience. The programme minimises confusion regarding which equipment is necessary to play the range of VR games and applications increasingly coming to market.
Delivering a great VR experience demands seven times the graphics processing power of traditional 3D games and applications – driving framerates above 90 frames per second (fps) for two simultaneous images (one for each eye).
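A rough sense of where that extra demand comes from can be had by comparing raw pixel throughput. This is a minimal sketch; the per-eye resolution is an assumption typical of first-generation headsets (e.g. 1080 x 1200 per eye), not a figure from the article:

```python
# Back-of-the-envelope pixel-rate comparison: 90 fps stereo VR
# versus a traditional 3D game rendering 1080p at 30 fps.
vr_pixels_per_sec = 2 * 1080 * 1200 * 90   # two eyes, assumed 1080x1200 each, 90 fps
flat_pixels_per_sec = 1920 * 1080 * 30     # single 1080p image at 30 fps

ratio = vr_pixels_per_sec / flat_pixels_per_sec
print(f"Raw pixel-rate ratio: ~{ratio:.2f}x")
```

Raw pixel throughput alone accounts for roughly a 3.75x increase under these assumptions; the rest of the quoted 7x figure is generally attributed to the much tighter latency budget and extra rendering overhead (distortion correction, higher-quality anti-aliasing) that comfortable VR requires.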
As the year heads to a close, the anticipation for virtual reality (VR) is gathering momentum. And NVIDIA has helped to raise the tempo by bringing VR to Southeast Asia in a closed door event for analysts, the media and enthusiasts.
Held on December 11 at Crowne Plaza Changi Airport in Singapore, the event gave participants the opportunity to try out two of the hottest VR products — Oculus Rift and HTC Vive — powered by NVIDIA technologies.
Oculus Rift let users immerse themselves in three three-dimensional games (details are under embargo from the content owners), taking gaming to another level.
Leadtek Research has released the new NVS 810 professional graphics card, which is designed to provide high definition multi-screen output for large exhibits, outdoor advertisement walls and other digital image output applications.
The NVS 810 uses a dual-core Maxwell GPU coupled with 4GB of DDR3 memory. This single-slot card fully integrates two GM107 units for a total of 1,024 CUDA cores, as well as eight Mini DisplayPort 1.2 output ports, giving a single system the capacity to support 32-screen output.
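The 32-screen figure follows from the port count, assuming a system fitted with four of these single-slot cards (an assumption about the system configuration, not a spec quoted above):

```python
# Display capacity of a multi-card NVS 810 system, assuming four
# single-slot cards installed (hypothetical configuration).
ports_per_card = 8   # Mini DisplayPort 1.2 outputs per NVS 810 (quoted)
cards_installed = 4  # assumed number of cards in the system

total_displays = ports_per_card * cards_installed
print(f"Displays supported: {total_displays}")  # 32
```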
It leverages the powerful functions of the NVIDIA DesignWorks package, including the outstanding Warp & Blend and Mosaic multi-display technologies, offering advanced image management capabilities.
NVIDIA has released the 1.0 version of two powerful VR software development kits (SDKs) — NVIDIA GameWorks VR and NVIDIA DesignWorks VR — to help developers deliver VR games and applications.
Immersive VR requires seven times the graphics processing power of traditional 3D apps and games.
When used in conjunction with the company’s industry-leading GeForce and Quadro GPUs, these SDKs provide developers the tools to create VR experiences, increase performance, reduce latency, improve hardware compatibility, and accelerate 360-degree video broadcasts.
For the first time, accelerated systems — GPU-powered systems — accounted for more than 100 of the world’s 500 most powerful supercomputers. Together they deliver a total of 143 petaflops, over one-third of the list’s total FLOPS.
NVIDIA Tesla GPU-based supercomputers comprise 70 of these systems – including 23 of the 24 new systems on the list – reflecting compound annual growth of nearly 50 percent over the past five years.
There are three primary reasons accelerators are increasingly being adopted for high performance computing:
Moore’s Law continues to slow, forcing the industry to find new ways to deliver computational power more efficiently.
Hundreds of applications – including the vast majority of those most commonly used – are now GPU accelerated.
Even modest investments in accelerators can now result in significant increases in throughput, maximising efficiency for supercomputing sites and hyperscale datacentres.
At PAX Australia held over the weekend in Melbourne, NVIDIA introduced the combat-ready Battlebox PC, which is designed especially for hardcore gamers.
A Battlebox PC is a beast of a gaming machine, with a powerful combination of NVIDIA GeForce GTX 980 Ti GPU horsepower, two-way NVIDIA SLI and the best components. It is also VR Ready and supports 4K gaming and DX12 for better visual effects and rendering techniques.