Entelechy Asia turns five today. So much has changed since we launched in November 2012.
The need for deep learning skills is increasing as more and more companies and industries hop on the bandwagon. Launched a little more than a year ago, NVIDIA’s Deep Learning Institute (DLI) has already trained tens of thousands of students, developers and data scientists.
And the company is expanding its DLI offerings with:
- New partnerships: Teaming up with Booz Allen Hamilton and deeplearning.ai to train thousands of students, developers and government specialists in artificial intelligence (AI).
- New University Ambassador Program: Instructors worldwide can teach students critical job skills and practical applications of AI at no cost.
- New courses: More courses have been added to teach domain-specific applications of deep learning for finance, natural language processing, robotics, video analytics, and self-driving cars.
With artificial intelligence (AI) being a hot topic this year, NVIDIA is organising its first AI-focused regional conference in Singapore on October 23 and 24.
The event will be held in two parts: the first day features a Deep Learning Institute (DLI) workshop where participants will receive hands-on training in deep learning, while the second day is filled with keynote addresses, a panel discussion and three tracks. It is targeted at data scientists and senior decision makers in the field of AI in both the public and private sectors.
“Singapore is aiming to be the world’s first smart nation and AI is playing a critical role. NVIDIA is well positioned to help drive the government’s Smart Nation initiative with the development of solutions based on AI. Our GPUs are making headlines across the world by enabling many breakthroughs in various industries using deep learning,” said Raymond Teh, Vice President of APAC sales and marketing at NVIDIA.
NVIDIA has teamed up with BINUS University and Kinetica to establish the first artificial intelligence (AI) research and development (R&D) centre in Indonesia.
Located at the university’s Anggrek Campus, the centre will support BINUS University’s aim to be the premier R&D hub for AI in Indonesia. Leveraging the power of NVIDIA’s GPUs, it will be a showcase of the commercial potential of GPU-accelerated deep learning applications.
“Today, we stand at the beginning of the AI computing era, ignited by a new computing model, GPU deep learning. This new model — where deep neural networks are trained to recognise patterns from massive amounts of data — has proven to be ‘unreasonably’ effective at solving some of the most complex problems in computer science. In this era, software writes itself and machines learn. Soon, hundreds of billions of devices will be infused with intelligence. AI will revolutionise every industry. NVIDIA provides the products and solutions to power this revolution,” said Raymond Teh, Vice President of APAC Sales and Marketing of NVIDIA.
NVIDIA is among a group of investors led by Chinese social media company Sina investing more than US$20 million in Chinese startup TuSimple.
Formed in 2015, TuSimple has more than 100 employees in R&D centres in Beijing and San Diego developing technology for autonomous long-distance freight delivery. It uses NVIDIA GPUs, NVIDIA DRIVE PX 2, Jetson TX2, CUDA, TensorRT, and cuDNN to develop its autonomous driving solution.
In June, the company successfully completed a 200-mile Level 4 test drive from San Diego to Yuma, Arizona, using NVIDIA GPUs and cameras as the primary sensor.
“I’m amazed at the quality of the papers presented. The project teams’ line of thinking and breakthrough concepts are refreshing,” exclaimed a leading artificial intelligence (AI) scientist at the International Conference on Machine Learning (ICML) in Sydney.
International Convention Centre Sydney was a massive hive of activity as 3,000 of the world’s top researchers, developers and students in AI gathered for ICML. The participants moved rapidly from one workshop to another and took great interest in the exhibition booths of top deep learning proponents such as NVIDIA, Google and Facebook.
With so many bright young talents, the event proved to be a good fishing ground for vendors, who held recruitment interviews at their booths as well as posted openings on the board.
At the inaugural Audi Summit in Spain, Audi revealed that its flagship 2018 A8 features a multitude of high-tech wizardry powered by NVIDIA.
“The car of the future will make its occupants’ life easier with the help of artificial intelligence (AI),” declared Rupert Stadler, Chairman of the Board of Audi, as he introduced such A8 features as Audi AI Traffic Jam Pilot, Remote Park Pilot, Natural Voice Control and Swarm Intelligence.
The A8 is packed with NVIDIA powered systems, including revolutionary new user interfaces, a new infotainment system, a new virtual cockpit, and new rear seat entertainment options.
NVIDIA is investing in Deep Instinct, an Israeli-based startup that uses deep learning to thwart cyber attacks.
Deep Instinct uses a GPU-based neural network and CUDA to achieve 99 percent detection rates, compared with about 80 percent detection from conventional cyber security software. Its software can automatically detect and defeat the most advanced cyber attacks.
“Deep Instinct is an emerging leader in applying GPU-powered AI through deep learning to address cybersecurity, a field ripe for disruption as enterprise customers migrate away from traditional solutions. We’re excited to work together with Deep Instinct to advance this important field,” said Jeff Herbst, Vice President of Business Development of NVIDIA.
NVIDIA and Baidu have teamed up to bring artificial intelligence (AI) technology to cloud computing, self-driving vehicles and AI home assistants.
Baidu will deploy NVIDIA HGX architecture with Tesla Volta V100 and Tesla P4 GPU accelerators for AI training and inference in its data centres. Combined with Baidu’s PaddlePaddle deep learning framework and NVIDIA’s TensorRT deep learning inference software, researchers and companies can harness state-of-the-art technology to develop products and services with real-time understanding of images, speech, text and video.
To accelerate AI development, the companies will work together to optimise Baidu’s open-source PaddlePaddle deep learning framework on NVIDIA’s Volta GPU architecture.
NVIDIA is among six technology companies to receive a total of US$258 million in funding from the US Department of Energy’s Exascale Computing Project (ECP).
The funding is to accelerate the development of next-generation supercomputers with the delivery of at least two exascale computing systems, one of which is targeted by 2021.
Such systems would be about 50 times more powerful than the US’ fastest supercomputer, Titan, located at Oak Ridge National Laboratory.
SoftBank Group has taken a US$4 billion stake in NVIDIA, according to Bloomberg.
This dovetails nicely with SoftBank founder Masayoshi Son’s aim to become the biggest investor in technology over the next decade.
NVIDIA’s stock tripled last year and is surging again this year. The chipmaker is banking on driving the artificial intelligence (AI) trend with its powerful graphics processing units (GPUs).
For all the rage about artificial intelligence (AI) powering driverless cars, the same technology can also be used to keep drivers safe. It can act like a guardian angel and look out for danger (watch video above).
NVIDIA’s AI Co-Pilot technology uses sensor data from the cameras and microphone inside and outside a car to track the environment around the driver. When the AI notices a problem — such as the driver looking away from an approaching pedestrian — it could sound an alert.
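As a rough illustration, the alert decision described above might be sketched as follows. The function name, inputs and thresholds are hypothetical placeholders, not NVIDIA’s actual Co-Pilot API; in the real system, these signals would come from deep-learning models running on camera and microphone data.

```python
def should_alert(gaze_bearing_deg, pedestrian_bearing_deg,
                 pedestrian_distance_m, attention_cone_deg=30.0,
                 danger_distance_m=25.0):
    """Alert when a nearby pedestrian falls outside the driver's gaze cone.

    All angles are bearings relative to straight ahead; the thresholds are
    illustrative guesses, not published NVIDIA parameters.
    """
    looking_away = abs(gaze_bearing_deg - pedestrian_bearing_deg) > attention_cone_deg
    too_close = pedestrian_distance_m < danger_distance_m
    return looking_away and too_close

# Driver glancing left while a pedestrian approaches from ahead-right:
print(should_alert(-40.0, 10.0, 15.0))  # True
```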
Hundreds of thousands of computers in 150 countries have been hit by the WannaCry ransomware. While users are scrambling to fix their computers, the top-of-mind question is whether this could have been avoided, and whether artificial intelligence could have predicted and prevented such an attack.
At GPU Technology Conference (GTC) in San Jose, California last week, Israeli firm Deep Instinct won the Most Disruptive Startup category in NVIDIA’s Inception Award. The firm is the first to use AI to predict and prevent malware attacks.
According to David Eli, Chief Technology Officer (CTO) of Deep Instinct, more than a million new malware threats are released daily, but most antivirus software focuses on known threats.
His firm’s graphics processing unit (GPU)-accelerated deep learning software detects malware in real time. Trained on hundreds of millions of files, the neural network learns to detect more threats and then uses its experience to predict new attacks.
“Winning this prize is the ultimate recognition from the deep learning industry because deep learning and NVIDIA are synonymous,” said Eli.
This is the inaugural year of NVIDIA’s Inception Awards, which recognise startups in three categories – Hottest Emerging, Most Disruptive and Social Innovation. Winners received significant cash prizes and graphics processing unit (GPU) hardware to further accelerate their activities.
Emerging from stealth mode in November 2015, Deep Instinct applies its patent-pending deep learning technology to cybersecurity, delivering unmatched detection accuracy and real-time prevention.
Leveraging the capabilities associated with deep learning, Deep Instinct provides instinctive protection on any device, platform, and operating system. Zero-day and APT attacks are immediately detected and blocked before any harm can happen to the enterprise’s endpoints, servers, and mobile devices.
“Deep Instinct relies on end-to-end deep learning for all its advanced malware detection and prevention capabilities. The deep neural network is trained on hundreds of millions of malicious and legitimate files. To handle such large-scale training, Deep Instinct developed its proprietary deep learning infrastructure directly on NVIDIA’s GPU machines,” said Eli.
“The powerful capabilities of NVIDIA GPUs enable us to perform our training at a substantially faster speed compared to CPUs: while training the Deep Instinct brain on NVIDIA’s GPUs takes a little over a single day of training, the same task on CPUs would take more than three months,” he added.
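The speed-up Eli quotes can be put into rough numbers, treating “a little over a single day” as one day and “more than three months” as about 90 days (both figures approximations of the article’s wording):

```python
# Rough speed-up implied by the training times quoted above.
gpu_days = 1.0        # "a little over a single day" on NVIDIA GPUs
cpu_days = 3 * 30.0   # "more than three months" on CPUs, taken as ~90 days

speedup = cpu_days / gpu_days
print(f"Implied GPU-over-CPU speed-up: ~{speedup:.0f}x")  # ~90x
```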
“We are thrilled to be recognised by NVIDIA for what we believe is a groundbreaking application of GPUs. Being able to leverage powerful technological capabilities to apply deep learning to cybersecurity empowers enterprises with unprecedented, real-time protection from the next unexpected attack,” said Guy Caspi, Chief Executive Officer of Deep Instinct.
As a sign of its coming of age, the GPU Technology Conference (GTC), held annually in San Jose, California, since 2009, is no longer a niche event but one that draws the who’s who of the technology industry.
NVIDIA’s shift of focus from being a visual computing company to an AI company has certainly played a big part in the expansion of the conference. It has attracted around 50 sponsors and 150 exhibitors on top of more than 7,000 participants.
However, it’s not the number of sponsors and exhibitors but rather the quality that is worthy of attention. The line-up of technology firms includes luminaries such as Adobe, Alibaba, Amazon, Autodesk, Cisco, Cray, Dell EMC, DreamWorks Animation, IBM Watson, Lenovo, Microsoft, Samsung Electronics, Verizon Labs, VMware, and Yahoo Research.
With AI being such a prime mover of autonomous vehicles, it is not surprising that leading names in the automotive industry were also present — BMW Group, Chevron, ExxonMobil, Ford Motor Company, General Motors, Honda Research Institute, and Mercedes-Benz R&D North America.
Amid the various booths showcasing VR technologies was one by NASA Ames Research Center, which showed a VR demonstration of Mars.
Artificial intelligence (AI) is not new. In fact, it has had many false starts over the past 60 years, and the term went into hibernation for a long time.
Research into AI began at Dartmouth College back in 1956, and the field was repeatedly hailed as the next frontier in the 1980s, when mainframe computers ruled and supercomputers were a ginormous investment that very few could afford.
Despite the research put in over the years, the technology never quite took off and fell flat in many instances.
By Edward Lim
I am attending the GPU Technology Conference (GTC) in San Jose, California this week. It’s a massive conference with more than 7,000 participants from all around the world.
After decades of covering and attending conferences, I have noticed an evolution of sorts. Here are five things I’ve observed and really liked about this GTC.
- Rich content: Artificial intelligence is transforming our lives in many ways such as robotics, intelligent video analytics and driverless cars. IDC has reported that 80 percent of all applications will be using AI by 2020.
- Tons of experiential booths: The who’s who of technology featured virtual reality applications across multiple industries, not just gaming.
- Smoothness of registration process: Pre-registered attendees simply entered their names on a notebook computer and their tags were printed immediately, all in under a minute.
- Power points everywhere: This is not the Microsoft presentation software but plugs that help to charge devices. In today’s age, we’ve become dependent on our smartphones, notebook computers and other battery-powered devices — all of which require power. Having the power points available across the facilities is an excellent and thoughtful move. And having points that incorporate USB slots and LAN connections? Wow!
- Candies, candies, candies: Admit it. There are times that we’ve yielded to the sleeping spirit while at conferences, especially after meals. Having a candy bar helps to provide the energy boost to keep participants awake.
With all these pluses, there is one area for improvement. There were many driverless cars on display; if only we could have gone for a ride in one.
Anyway, kudos to NVIDIA and the event organisers. I love GTC 2017!
Facebook is developing new artificial intelligence (AI) systems to help manage the vast amount of information — such as text, images and videos — generated daily so people can better understand the world and communicate more effectively, even as the volume of information increases.
It has worked with NVIDIA on Caffe2, a new AI deep learning framework that allows developers and researchers to create large-scale distributed training scenarios and build machine learning applications for edge devices.
Providing AI-powered services on mobile is a complex data processing task that must happen within the blink of an eye. Increasingly, the processing of lightning-fast AI services requires GPU-accelerated computing, such as that offered by Facebook’s Big Basin servers, as well as highly optimised deep learning software that can leverage the full capability of the accelerated hardware.
Many cars were on display at CES last week but perhaps one of the most significant announcements is the collaboration between Mercedes-Benz and NVIDIA to bring an NVIDIA AI-powered car to market.
NVIDIA founder and CEO Jen-Hsun Huang (right) and Mercedes-Benz Vice President of Digital Vehicle and Mobility Sajjad Khan (left) talked about this new development at the Mercedes Benz Inspiration talk.
“When our teams came together there was instant chemistry, and we share a common vision about how AI can change your driving experience, and make it more enjoyable. Mercedes-Benz and NVIDIA share a common vision of the AI car. At this point it is clear AI will revolutionise the future of automobiles,” said Huang, who pointed out that the collaboration began three years ago.
NVIDIA has unveiled at CES a new Shield TV media streamer, which like its predecessors will not be available in the Asia-Pacific region. However, a separate version of Shield, with custom software optimised for China, will be available later this year.
The new device is an Android open-platform media streamer that is claimed to deliver unmatched experiences in streaming, gaming and AI.
Sporting a sleek, new design and now shipping with both a remote and a game controller, the new Shield delivers a rich visual experience with support for 4K HDR and three times the performance of other streamers.
Singapore is renowned as a food paradise. And with so many mouth-watering dishes to pick from, sometimes even locals have difficulty identifying a specific dish.
Singapore Management University (SMU) is working on a food artificial intelligence (AI) application that calls on a supercomputer to recognise local dishes, promoting smart food consumption and a healthy lifestyle.
The project, developed as part of Singapore’s Smart Nation initiative, requires the analysis of a large number of food photos.
NVIDIA’s new DGX SATURNV supercomputer is ranked the world’s most efficient — and 28th fastest overall — on the latest Top500 list of supercomputers.
Powered by new Tesla P100 GPUs, it delivers 9.46 gigaflops/watt — a 42 percent improvement from the 6.67 gigaflops/watt delivered by the most efficient machine on the Top500 list released last June.
Compared with a supercomputer of similar performance, the Camphor 2 system powered by Xeon Phi Knights Landing, SATURNV is 2.3x more energy efficient. That efficiency is key to building machines capable of reaching exascale speeds — that’s 1 quintillion, or 1 billion billion, floating-point operations per second. Such a machine could help design efficient new combustion engines, model clean-burning fusion reactors, and achieve new breakthroughs in medical research.
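As a quick sanity check on the efficiency figures quoted here, using the article’s own numbers:

```python
# Sanity-checking the quoted gigaflops/watt figures.
saturnv_gflops_per_watt = 9.46    # NVIDIA DGX SATURNV
prev_best_gflops_per_watt = 6.67  # most efficient machine last June

improvement_pct = (saturnv_gflops_per_watt / prev_best_gflops_per_watt - 1) * 100
print(f"Efficiency improvement: {improvement_pct:.0f}%")  # ~42%

# Exascale means 1 quintillion floating-point operations per second:
exaflops = 1e18
```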
Just when we thought NVIDIA was done with the Pascal range of GPUs with the benchmark release of the GeForce GTX 1060 early this week, NVIDIA CEO Jen-Hsun Huang pulled off a major surprise with the announcement of the new NVIDIA Titan X at an artificial intelligence meeting at Stanford University.
The new NVIDIA Titan X, based on the new Pascal GPU architecture, is the biggest GPU ever built with a record-breaking 3,584 CUDA cores.
Here are the numbers that matter:
- 11 TFLOPS FP32
- 44 TOPS INT8 (new deep learning inferencing instruction)
- 12B transistors
- 3,584 CUDA cores at 1.53GHz (versus 3,072 cores at 1.08GHz in previous TITAN X)
- Up to 60 percent faster performance than previous TITAN X
- High performance engineering for maximum overclocking
- 12 GB of GDDR5X memory (480 GB/s)
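For context, the 11 TFLOPS FP32 figure in the list above follows directly from the core count and clock, assuming the usual two floating-point operations per CUDA core per cycle (one fused multiply-add):

```python
cores = 3584            # CUDA cores in the new Titan X
boost_clock_ghz = 1.53  # clock speed in GHz

# Two FLOPs per core per cycle (fused multiply-add):
peak_tflops_fp32 = cores * boost_clock_ghz * 2 / 1000
print(f"Peak FP32 throughput: {peak_tflops_fp32:.1f} TFLOPS")  # ~11.0
```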
Virtual reality (VR) was the talk of the town at Computex in Taipei a couple of weeks ago.
At the NVIDIA Experience Centre in Grand Hyatt Taipei, a never-ending queue of people waited for the opportunity to check out VR demos powered by the newly-launched NVIDIA GeForce GTX 1080 GPUs.
In the halls — both at TWTC and Nangang — many exhibitors were out in force with their own flavours of VR. At one booth, a visitor put on a harness to try virtual parachuting while in several others, they checked out virtual Grand Prix racing and other demos.
After weeks, if not months, of rumours and false predictions, the announcement was finally made: NVIDIA revealed the much-awaited Pascal-based NVIDIA GeForce GTX 1080.
What the rumours got correct was the new name of the card. What they missed was the launch date, which NVIDIA kept close to its chest until CEO Jen-Hsun Huang made the announcement at a specially-gathered press event in Austin on Friday evening (Saturday morning Singapore time).
According to Huang, NVIDIA spent billions in research and development of Pascal and the new GPU.
NVIDIA has named Raymond Teh as Vice President of Sales and Marketing for the Asia Pacific region.
An IT veteran with more than 30 years of experience, Teh will lead the company’s APAC field sales efforts. He succeeds Francis Yu, who joined NVIDIA 12 years ago and is retiring later this year.
Teh most recently served as vice president, Asia-Pacific, for Vodafone Global Enterprise. He previously held senior regional management roles at BT, GXS, SAP, and i2 Technologies. He holds a BSc in computer science and a Masters in Statistics from the University of New South Wales, Australia.
NVIDIA has introduced the NVIDIA Tesla P100 GPU, an advanced hyperscale data centre accelerator that enables a new class of servers delivering the performance of hundreds of CPU server nodes.
Today’s data centres process large numbers of transactional workloads, such as web services. But they are inefficient at next-generation artificial intelligence and scientific applications, which require ultra-efficient, lightning-fast server nodes.
Based on the new NVIDIA Pascal GPU architecture, the Tesla P100 provides the performance and efficiency needed to power the computationally demanding applications. It delivers over a 12x increase in neural network training performance compared with a previous-generation NVIDIA Maxwell-based solution.
At his opening keynote address at GTC in San Jose, Jen-Hsun Huang, CEO of NVIDIA, made a slew of announcements, including the world’s first deep learning supercomputer to meet the unlimited computing demands of artificial intelligence (AI).
As the first system designed specifically for deep learning, the NVIDIA DGX-1 comes fully integrated with hardware, deep learning software and development tools for quick, easy deployment. It is a turnkey system that contains a new generation of GPU accelerators, delivering the equivalent throughput of 250 x86 servers.
The DGX-1 deep learning system enables researchers and data scientists to easily harness the power of GPU-accelerated computing to create a new class of intelligent machines that learn, see and perceive the world as humans do. It delivers unprecedented levels of computing power to drive next-generation AI applications, allowing researchers to dramatically reduce the time to train larger, more sophisticated deep neural networks.
Gamers in northern Thailand, specifically Chiangmai, will get to enjoy premium gaming experience with the opening of Xenith iCafe in the city.
All its 100 PCs are equipped with NVIDIA GeForce GTX 960 GPUs, which deliver advanced performance, power efficiency, and realistic gameplay based on the latest NVIDIA Maxwell technology. They also come with high quality Razer gaming gear – mouse, keyboard and headset – all tailored to give gamers the best experience.
Situated near Chiangmai University, the modern and trendy Xenith iCafe features two gaming zones – a comfort zone and a VIP zone – to cater to gamers’ needs as well as to host gaming events. This is a new trend in the iCafe market where owners can better balance cost, performance and functionality while delivering the gaming experience that their customers demand.
The new NVIDIA DRIVE PX 2 is set to give driverless cars a major boost.
Touted as the world’s most powerful engine for in-vehicle artificial intelligence, it allows the automotive industry to use artificial intelligence (AI) to tackle the complexities inherent in autonomous driving. NVIDIA DRIVE PX 2 utilises deep learning on NVIDIA’s advanced GPUs for 360-degree situational awareness around the car, to determine precisely where the car is and to compute a safe, comfortable trajectory.
“Drivers deal with an infinitely complex world. Modern artificial intelligence and GPU breakthroughs enable us to finally tackle the daunting challenges of self-driving cars,” said Jen-Hsun Huang, Co-founder and CEO of NVIDIA. “NVIDIA’s GPU is central to advances in deep learning and supercomputing. We are leveraging these to create the brain of future autonomous vehicles that will be continuously alert, and eventually achieve superhuman levels of situational awareness. Autonomous cars will bring increased safety, new convenient mobility services and even beautiful urban designs – providing a powerful force for a better future.”
NVIDIA is paving the way to virtual reality (VR) gaming experiences with the launch of its new VR-ready programme at CES.
Under the programme, PC and notebook makers and add-in card providers will deliver GeForce GTX VR Ready systems and graphics cards that deliver an immersive VR gaming experience. The programme minimises confusion regarding which equipment is necessary to play the range of VR games and applications increasingly coming to market.
Delivering a great VR experience demands seven times the graphics processing power of traditional 3D games and applications – driving framerates above 90 frames per second (fps) for two simultaneous images (one for each eye).
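That 90 fps target translates into a tight rendering budget. A quick calculation, assuming both eye views must be produced within every frame:

```python
fps_target = 90
frame_budget_ms = 1000 / fps_target  # ~11.1 ms to render both eye views
per_eye_ms = frame_budget_ms / 2     # ~5.6 ms per eye if rendered sequentially

print(f"{frame_budget_ms:.1f} ms per frame, {per_eye_ms:.1f} ms per eye")
```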