Singapore’s AI agenda gets double boost!

NVIDIA Fellow Dr David Kirk delivers the keynote address at the NVIDIA AI Conference.

Singapore’s aim to be an artificial intelligence (AI) hub has been boosted with two initiatives — the setting up of a shared AI platform for researchers and the awarding of scholarships to develop AI talents.

At the NVIDIA AI Conference in Singapore yesterday, NVIDIA and Singapore’s National Supercomputing Centre (NSCC) agreed to establish a platform to bolster AI capabilities among the country’s academic, research and industry stakeholders, in support of AI Singapore (AISG), a national programme set up in May to drive AI adoption, research and innovation in Singapore.

Called AI.Platform@NSCC, it will provide AI training, technical expertise and computing services to AISG, which brings together all Singapore-based research and tertiary institutions, including the National University of Singapore (NUS), Nanyang Technological University (NTU), the Singapore University of Technology and Design (SUTD) and Singapore Management University (SMU), as well as research institutions under the Agency for Science, Technology and Research (A*STAR).

Continue reading “Singapore’s AI agenda gets double boost!”

Tantalising line-up of speakers at NVIDIA AI Conference

More than 1,000 participants attending the NVIDIA AI Conference in Singapore next week are in for a treat as the organisers are bringing in a tantalising line-up of speakers.

The two keynote speakers are Dr David B Kirk, NVIDIA Fellow and inventor of more than 60 patents and patent applications relating to graphics design; and Dr Wanli Min, AI scientist at Alibaba Cloud, who will speak on “A Revolutionary Road to Data Intelligence”.

Besides the two keynotes, the programme features special guest-of-honour Chng Kai Fong, Managing Director of Singapore’s Economic Development Board, and a panel discussion on AI for the Future of the Singapore Economy.

Continue reading “Tantalising line-up of speakers at NVIDIA AI Conference”

NVIDIA to hold first AI-focused conference in Singapore in October

With artificial intelligence (AI) being a hot topic this year, NVIDIA is organising its first AI-focused regional conference in Singapore on October 23 and 24.

The event will be held in two parts: the first day focuses on a Deep Learning Institute (DLI) workshop, where participants will receive hands-on training in deep learning, while the second day is filled with keynote addresses, a panel discussion and three tracks. It is targeted at data scientists and senior decision makers in the field of AI in both the public and private sectors.

“Singapore is aiming to be the world’s first smart nation and AI is playing a critical role. NVIDIA is well positioned to help drive the government’s Smart Nation initiative with the development of solutions based on AI. Our GPUs are making headlines across the world by enabling many breakthroughs in various industries using deep learning,” said Raymond Teh, Vice President of APAC sales and marketing at NVIDIA.

Continue reading “NVIDIA to hold first AI-focused conference in Singapore in October”

ICML: Gathering of the brightest in AI

“I’m amazed at the quality of the papers presented. The project teams’ line of thinking and breakthrough concepts are refreshing,” exclaimed a leading artificial intelligence (AI) scientist at the International Conference on Machine Learning (ICML) in Sydney.

The International Convention Centre Sydney was a massive hive of activity as 3,000 of the world’s top researchers, developers and students in AI gathered for ICML. The participants moved rapidly from one workshop to another and took great interest in the exhibition booths of top deep learning proponents such as NVIDIA, Google and Facebook.

With so many bright young talents around, the event proved to be a good fishing ground for vendors, who held recruitment interviews at their booths and posted job openings on the board.

Continue reading “ICML: Gathering of the brightest in AI”

NVIDIA Tesla V100 surprise for world’s top AI researchers

Fifteen top AI research institutions in the NVIDIA AI Labs programme were each presented with a Volta-based NVIDIA Tesla V100 GPU accelerator.

They were participating in the Computer Vision and Pattern Recognition (CVPR) conference in Honolulu.

“AI is the most powerful technology force that we have ever known. I’ve seen everything. I’ve seen the coming and going of the client-server revolution. I’ve seen the coming and going of the PC revolution. Absolutely nothing compares,” said Jensen Huang, CEO of NVIDIA.

Continue reading “NVIDIA Tesla V100 surprise for world’s top AI researchers”

AI takes centrestage at ICML in Sydney

NVIDIA is bringing its wealth of artificial intelligence (AI) solutions and expertise to the International Conference on Machine Learning (ICML) in Sydney.

Held at Sydney International Convention Centre from August 6 to 11, the event is expected to attract up to 3,000 participants, primarily faculty, researchers and PhD students in machine learning, data science, data mining, AI, statistics, and related fields.

The NVIDIA booth (Level 2, The Gallery, Booth #4) will feature many firsts in Australia, including: a 4K style transfer demo, in which a deep neural network extracts an artistic style from a source painting and synthesises it with the content of a separate video; a self-driving car demo built on the DRIVE PX 2 AI car computing platform; the DeepStream SDK, which simplifies the development of high-performance video analytics applications powered by deep learning; and NVIDIA Isaac, an AI-based software platform that lets developers train virtual robots using detailed and highly realistic test scenarios.
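
For readers curious how the 4K style transfer demo works, the following is a minimal sketch of the underlying objective, written in PyTorch purely as an assumed illustration (the demo’s actual implementation is not described here): the style image is summarised by Gram matrices of CNN activations, and the generated frame is optimised to match those statistics while staying close to the content frame’s activations.

```python
# Minimal sketch of the style-transfer objective: match the style image's
# Gram-matrix statistics while staying close to the content frame's activations.
# PyTorch is assumed here purely for illustration.
import torch
import torch.nn.functional as F

def gram_matrix(feats):
    # feats: (channels, height * width) activations from one CNN layer
    return feats @ feats.t() / feats.numel()

def style_transfer_loss(gen_feats, style_feats, content_feats, style_weight=1e4):
    style_loss = F.mse_loss(gram_matrix(gen_feats), gram_matrix(style_feats))
    content_loss = F.mse_loss(gen_feats, content_feats)
    return style_weight * style_loss + content_loss

# Toy usage with random "activations"; in practice these come from a network
# such as VGG evaluated on the style painting, the video frame, and the output.
gen = torch.randn(64, 32 * 32, requires_grad=True)
style = torch.randn(64, 32 * 32)
content = torch.randn(64, 32 * 32)
style_transfer_loss(gen, style, content).backward()
```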

Continue reading “AI takes centrestage at ICML in Sydney”

NVIDIA and Baidu team up on AI

NVIDIA and Baidu have teamed up to bring artificial intelligence (AI) technology to cloud computing, self-driving vehicles and AI home assistants.

Baidu will deploy NVIDIA HGX architecture with Tesla Volta V100 and Tesla P4 GPU accelerators for AI training and inference in its data centres. Combined with Baidu’s PaddlePaddle deep learning framework and NVIDIA’s TensorRT deep learning inference software, researchers and companies can harness state-of-the-art technology to develop products and services with real-time understanding of images, speech, text and video.

To accelerate AI development, the companies will work together to optimise Baidu’s open-source PaddlePaddle deep learning framework on NVIDIA’s Volta GPU architecture.

Continue reading “NVIDIA and Baidu team up on AI”

NVIDIA receives DOE funding for HPC research

NVIDIA is among six technology companies to receive a total of US$258 million in funding from the US Department of Energy’s Exascale Computing Project (ECP).

The funding is intended to accelerate the development of next-generation supercomputers, with at least two exascale computing systems to be delivered, one of them targeted for 2021.

Such systems would be about 50 times more powerful than the US’ fastest supercomputer, Titan, located at Oak Ridge National Laboratory.

Continue reading “NVIDIA receives DOE funding for HPC research”

Taiwan: Home of GeForce!

At the keynote of the NVIDIA AI Forum, NVIDIA CEO and Founder Jensen Huang declared, “Taiwan is the home of NVIDIA’s GeForce system”.

Video gaming is a US$100 billion industry and “GeForce PC gaming is the number one platform, nearly 200 million GeForce installed base,” declared Huang.

He announced the new NVIDIA Max-Q platform which lets gaming notebook makers produce faster, slimmer and quieter machines.

Continue reading “Taiwan: Home of GeForce!”

Voila, Volta!

NVIDIA CEO Jensen Huang announcing Tesla V100.

NVIDIA has pulled yet another trick out of its always-filled hat of technology goodies with the launch of Volta, the world’s most powerful GPU computing architecture. At his keynote address at GTC in San Jose, NVIDIA CEO Jensen Huang dubbed it “the next level of computer projects”.

Volta was created to drive the next wave of advancement in artificial intelligence (AI) and high performance computing.

The first Volta-based processor is the NVIDIA Tesla V100 data centre GPU, which brings extraordinary speed and scalability for AI inferencing and training, as well as for accelerating HPC and graphics workloads.

Continue reading “Voila, Volta!”

Rise of accelerated computing in data centres

Can’t say this was unexpected: NVIDIA has rebutted Google’s claim that its custom ASIC Tensor Processing Unit (TPU) is up to 30 times faster than CPUs and NVIDIA’s K80 GPU for inferencing workloads.

NVIDIA pointed out that Google’s TPU paper draws a clear conclusion – without accelerated computing, the scale-out of AI is simply not practical.

The role of data centres has changed considerably in today’s economy. Instead of just serving web pages, advertising and video content, data centres are now recognising voices, detecting images in video streams and connecting users with information they need when they need it.

Continue reading “Rise of accelerated computing in data centres”

Singapore universities deploy deep learning supercomputers

First, it was Singapore Management University (SMU). Now two other Singapore universities — Singapore University of Technology and Design (SUTD) and Nanyang Technological University (NTU) — have also deployed the NVIDIA DGX-1 deep learning supercomputer for their research projects on artificial intelligence (AI).

SUTD will use the DGX-1 at the SUTD Brain Lab to further research into machine reasoning and distributed learning. Under a memorandum of understanding signed earlier this month, NVIDIA and SUTD will also set up the NVIDIA-SUTD AI Lab to leverage the power of GPU-accelerated neural networks for researching new theories and algorithms for AI. The agreement also provides for internship opportunities to selected students of the lab.

“Computational power is a game changer for AI research, especially in the areas of big data analytics, robotics, machine reasoning and distributed intelligence. The DGX-1 will enable us to perform significantly more experiments in the same period of time, quickening the discovery of new theories and the design of new applications,” said Professors Shaowei Lin and Georgios Piliouras, Engineering Systems and Design, SUTD.

Continue reading “Singapore universities deploy deep learning supercomputers”

RIKEN turns to NVIDIA supercomputer for deep learning research

RIKEN, Japan’s largest comprehensive research institution, will have a new supercomputer for deep learning research in April. Built by Fujitsu using 24 NVIDIA DGX-1 AI systems, the new machine will accelerate the application of artificial intelligence (AI) to solve complex challenges in healthcare, manufacturing and public safety.

Conventional high performance computing architectures are proving too costly and inefficient for meeting the needs of AI researchers. That’s why research institutions such as RIKEN are looking for GPU-based solutions that reduce cost and power consumption while increasing performance. Each DGX-1 combines the power of eight NVIDIA Tesla P100 GPUs with an integrated software stack optimised for deep learning frameworks, delivering the performance of 250 conventional x86 servers.
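
As a rough illustration of how a deep learning framework can spread a training step across the eight GPUs inside a DGX-1 class machine, here is a minimal data-parallel sketch using PyTorch’s DataParallel wrapper; this is an assumption made for brevity, not a description of the DGX-1 software stack or RIKEN’s actual workloads.

```python
# Minimal data-parallel training step; PyTorch's DataParallel splits each batch
# across all visible GPUs and gathers the results (illustrative only).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)          # replicate the model on every GPU
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
inputs = torch.randn(256, 1024, device=device)      # one mini-batch of fake data
targets = torch.randint(0, 10, (256,), device=device)

loss = nn.functional.cross_entropy(model(inputs), targets)
loss.backward()
optimiser.step()
```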


“We believe that the NVIDIA DGX-1-based system will accelerate real-world implementation of the latest AI technologies as well as research into next-generation AI algorithms. Fujitsu is leveraging its extensive experience in high-performance computing development and AI research to support R&D that utilises this system, contributing to the creation of a future in which AI is used to find solutions to a variety of social issues,” said Arimichi Kunisawa, Head of the Technical Computing Solution Unit at Fujitsu.

Gunning for supercomputing supremacy in Japan


Tokyo Institute of Technology plans to create Japan’s fastest AI supercomputer, which will deliver more than twice the performance of its predecessor and slide into the world’s top 10 fastest systems.

Called Tsubame 3.0, it will use Pascal-based NVIDIA P100 GPUs that are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double precision performance.

Tsubame 3.0 will excel in AI computation with more than 47 PFLOPS of AI horsepower. When operated with Tsubame 2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest performing AI supercomputer.

Continue reading “Gunning for supercomputing supremacy in Japan”

SMU uses NVIDIA DGX-1 supercomputer for food recognition project

Singapore is renowned as a food paradise. And with so many mouth-watering dishes to pick from, sometimes even locals have difficulty identifying a specific dish.

Singapore Management University (SMU) is working on a food artificial intelligence (AI) application that calls on a supercomputer to help recognise local dishes, with the aim of promoting smart food consumption and a healthy lifestyle.

The project, developed as part of Singapore’s Smart Nation initiative, requires the analysis of a large number of food photos.
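
The article does not describe SMU’s model, but a common recipe for this kind of food-recognition task is to fine-tune a pretrained image classifier on labelled photos of local dishes. The sketch below shows that idea with PyTorch and torchvision; the dataset path, class count and model choice are all hypothetical.

```python
# Hypothetical fine-tuning recipe for dish recognition (not SMU's actual code).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_DISHES = 100   # hypothetical number of local dishes to recognise

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
train_set = datasets.ImageFolder("food_photos/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights="IMAGENET1K_V1")        # pretrained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_DISHES)  # new classification head
model = model.to(device)

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
for images, labels in loader:                           # one epoch
    optimiser.zero_grad()
    loss = nn.functional.cross_entropy(model(images.to(device)), labels.to(device))
    loss.backward()
    optimiser.step()
```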

Continue reading “SMU uses NVIDIA DGX-1 supercomputer for food recognition project”

NVIDIA DGX SATURNV ranked most efficient supercomputer

NVIDIA’s new DGX SATURNV supercomputer is ranked the world’s most efficient — and 28th fastest overall — on the latest Top500 list of supercomputers.

Powered by new Tesla P100 GPUs, it delivers 9.46 gigaflops/watt — a 42 percent improvement from the 6.67 gigaflops/watt delivered by the most efficient machine on the Top500 list released last June.

Compared with a supercomputer of similar performance, the Camphor 2 system, which is powered by Xeon Phi Knights Landing processors, SATURNV is 2.3x more energy efficient.

That efficiency is key to building machines capable of reaching exascale speeds — that’s 1 quintillion, or 1 billion billion, floating-point operations per second. Such a machine could help design efficient new combustion engines, model clean-burning fusion reactors, and achieve new breakthroughs in medical research.
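
As a quick sanity check of the figures quoted above, and a hint at why efficiency matters so much for exascale, the arithmetic works out as follows; the power estimate is a back-of-the-envelope illustration based only on the numbers in this post, not an official projection.

```python
# Back-of-the-envelope check of the efficiency figures quoted in the article.
saturnv = 9.46        # gigaflops per watt (DGX SATURNV)
prev_best = 6.67      # most efficient system on the previous Top500 list
print(f"Improvement: {(saturnv / prev_best - 1) * 100:.0f}%")        # ~42%

exaflops = 1e18       # exascale = 1 quintillion floating-point ops per second
watts = exaflops / (saturnv * 1e9)
print(f"Exascale at SATURNV efficiency: ~{watts / 1e6:.0f} MW")       # ~106 MW
```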

Continue reading “NVIDIA DGX SATURNV ranked most efficient supercomputer”

China’s world’s fastest supercomputer built without US chips

China has continued its lead in the race for the world’s fastest supercomputer with the Sunway TaihuLight, whose Linpack mark of 93 petaflops outperforms the former TOP500 champ, Tianhe-2, by a factor of three.

What’s more remarkable is that the new powerhouse is driven by a new ShenWei processor and custom interconnect, both of which were developed in China. This breaks a traditional reliance on US supercomputing technologies.

Located at the National Supercomputing Center in Wuxi, it will be used for research and engineering work in areas such as climate, weather and earth systems modeling, life science research, advanced manufacturing, and data analytics.

NVIDIA unveils world’s first deep learning supercomputer


At his opening keynote address at GTC in San Jose, Jen-Hsun Huang, CEO of NVIDIA, made a slew of announcements, including the world’s first deep learning supercomputer to meet the unlimited computing demands of artificial intelligence (AI).

As the first system designed specifically for deep learning, the NVIDIA DGX-1 comes fully integrated with hardware, deep learning software and development tools for quick, easy deployment. It is a turnkey system that contains a new generation of GPU accelerators, delivering the equivalent throughput of 250 x86 servers.

The DGX-1 deep learning system enables researchers and data scientists to easily harness the power of GPU-accelerated computing to create a new class of intelligent machines that learn, see and perceive the world as humans do. It delivers unprecedented levels of computing power to drive next-generation AI applications, allowing researchers to dramatically reduce the time to train larger, more sophisticated deep neural networks.

Continue reading “NVIDIA unveils world’s first deep learning supercomputer”

Monash University launches M3 to accelerate research

Australian Chief Scientist Alan Finkel AO and Monash Professor Ian Smith get ready to press the red button to launch M3.

Monash University is taking research to another level with the launch of M3, the third-generation supercomputer available through the MASSIVE (Multi-modal Australian ScienceS Imaging and Visualisation Environment) facility.

Powered by ultra-high-performance NVIDIA Tesla K80 GPU accelerators, M3 will provide new simulation and real-time data processing capabilities to a wide selection of Australian researchers.

“Our collaboration with NVIDIA will take Monash research to new heights. By coupling some of Australia’s best researchers with NVIDIA’s accelerated computing technology we’re going to see some incredible impact. Our scientists will produce code that runs faster, but more significantly, their focus on deep learning algorithms will produce outcomes that are smarter,” said Professor Ian Smith, Vice Provost (Research and Research Infrastructure), Monash University.

Continue reading “Monash University launches M3 to accelerate research”

NVIDIA adds AI and supercomputing prowess to driverless cars

The new NVIDIA DRIVE PX 2 is set to give driverless cars a major boost.

Touted as the world’s most powerful engine for in-vehicle artificial intelligence (AI), it allows the automotive industry to use AI to tackle the complexities inherent in autonomous driving. The NVIDIA DRIVE PX 2 utilises deep learning on NVIDIA’s advanced GPUs for 360-degree situational awareness around the car, to determine precisely where the car is and to compute a safe, comfortable trajectory.

“Drivers deal with an infinitely complex world. Modern artificial intelligence and GPU breakthroughs enable us to finally tackle the daunting challenges of self-driving cars,” said Jen-Hsun Huang, Co-founder and CEO of NVIDIA. “NVIDIA’s GPU is central to advances in deep learning and supercomputing. We are leveraging these to create the brain of future autonomous vehicles that will be continuously alert, and eventually achieve superhuman levels of situational awareness. Autonomous cars will bring increased safety, new convenient mobility services and even beautiful urban designs – providing a powerful force for a better future.”

Continue reading “NVIDIA adds AI and supercomputing prowess to driverless cars”

Accelerated systems account for more than 20% of TOP500 supercomputers

Accelerated systems, or GPU-powered systems, for the first time accounted for more than 100 entries on the list of the world’s 500 most powerful supercomputers. That’s a total of 143 petaflops, over one-third of the list’s total FLOPS.

NVIDIA Tesla GPU-based supercomputers comprise 70 of these systems – including 23 of the 24 new systems on the list – reflecting compound annual growth of nearly 50 percent over the past five years.

There are three primary reasons why accelerators are increasingly being adopted for high performance computing.

  1. Moore’s Law continues to slow, forcing the industry to find new ways to deliver computational power more efficiently.
  2. Hundreds of applications – including the vast majority of those most commonly used – are now GPU accelerated.
  3. Even modest investments in accelerators can now result in significant increases in throughput, maximising efficiency for supercomputing sites and hyperscale datacentres.

Continue reading “Accelerated systems account for more than 20% of TOP500 supercomputers”

NVIDIA Jetson TX1 powers machine learning

As the first embedded computer designed to process deep neural networks, the new NVIDIA Jetson TX1 is set to enable a new wave of smart devices. Drones will evolve beyond flying by remote control to navigating through a forest for search and rescue. Security surveillance systems will be able to identify suspicious activities, not just scan crowds. Robots will be able to perform tasks customised to individuals’ habits.

That’s what the credit-card sized module can do. It can harness the power of machine learning to enable a new generation of smart, autonomous machines that can learn.

Deep neural networks are computer software that can learn to recognise objects or interpret information. This new approach to programming computers is called machine learning, and it can be used to perform complex tasks such as recognising images, processing conversational speech, or analysing a room full of furniture and finding a path to navigate across it. Machine learning is a groundbreaking technology that will give autonomous devices a giant leap in capability.
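
To make the contrast with hand-written rules concrete, here is a toy example of that “learning instead of programming” idea: a tiny network discovers the XOR rule from four labelled examples. It is written in PyTorch purely as an illustration and is unrelated to any specific Jetson TX1 workload.

```python
# A tiny illustration of "programming by learning": instead of hand-coding the
# XOR rule, a two-layer network is trained to discover it from examples.
import torch
import torch.nn as nn

x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1), nn.Sigmoid())
optimiser = torch.optim.Adam(net.parameters(), lr=0.05)

for _ in range(2000):                      # learn the mapping from data alone
    optimiser.zero_grad()
    loss = nn.functional.binary_cross_entropy(net(x), y)
    loss.backward()
    optimiser.step()

print(net(x).detach().round().flatten())   # expected: tensor([0., 1., 1., 0.])
```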

Continue reading “NVIDIA Jetson TX1 powers machine learning”

NVIDIA supercharges Microsoft Azure


Virtualisation just got a little turbocharge with the introduction of NVIDIA GPU-enabled professional graphics applications and accelerated computing capabilities to the Microsoft Azure cloud platform.

Microsoft is the first to leverage NVIDIA GRID 2.0 virtualised graphics for its enterprise customers.

Businesses will have a range of graphics prowess — depending on their needs. They can deploy NVIDIA Quadro-grade professional graphics applications and accelerated computing on-premises, in the cloud through Azure, or via a hybrid of the two using both Windows and Linux virtual machines.

Continue reading “NVIDIA supercharges Microsoft Azure”

Giving researchers instant feedback with immersive visualisation technologies

At GTC South Asia, Monash University shared how it has leveraged GPU technology to transform the way research is done. Entelechy Asia catches up with the university’s Professor Paul Bonnington, Director of the e-Research Centre, and Dr Wojtek James Goscinski, Coordinator of the e-Research Centre, to find out more about the deployment and how NVIDIA GPUs have made a great difference in research.

NVIDIA introduces Tesla K80

NVIDIA has unveiled the Tesla K80 dual-GPU accelerator designed for a wide range of machine learning, data analytics, scientific, and high performance computing (HPC) applications.

The Tesla K80 dual-GPU is the new flagship offering of the Tesla Accelerated Computing Platform, the leading platform for accelerating data analytics and scientific computing.

It combines the world’s fastest GPU accelerators, the widely used CUDA parallel computing model, and a comprehensive ecosystem of software developers, software vendors, and datacentre system OEMs.

Continue reading “NVIDIA introduces Tesla K80”

NVIDIA clinches Computex Best Choice Awards for Tegra K1 and GRID

NVIDIA has done the double by snaring the Computex Best Choice Award for its NVIDIA GRID technology and the Golden Award for the NVIDIA Tegra K1 mobile processor.

This is the sixth year running that NVIDIA has picked up the award, marking the longest winning streak of any international Computex exhibitor. More than 475 technology products from nearly 200 vendors competed for this year’s recognition.

Tegra K1 is a 192-core super chip, built on the NVIDIA Kepler architecture — the world’s most advanced and energy-efficient GPU. Tegra K1’s 192 fully programmable CUDA cores deliver the most advanced mobile graphics and performance, and its compute capabilities open up many new applications and experiences in fields such as computer vision, advanced imaging, speech recognition and video editing.

Continue reading “NVIDIA clinches Computex Best Choice Awards for Tegra K1 and GRID”

NVIDIA unveils first mobile supercomputer for embedded systems

Dubbed the world’s first mobile supercomputer for embedded systems, the NVIDIA® Jetson TK1 platform will enable the development of a new generation of applications that employ computer vision, image processing and real-time data processing.

It provides developers with the tools to create systems and applications that can enable robots to seamlessly navigate, physicians to perform mobile ultrasound scans, drones to avoid moving objects and cars to detect pedestrians.

With unmatched performance of 326 gigaflops – nearly three times more than any similar embedded platform – the Jetson TK1 Developer Kit includes a full C/C++ toolkit based on NVIDIA CUDA architecture, the most pervasive parallel computing platform and programming model. This makes it much easier to program than the FPGA, custom ASIC and DSP processors that are commonly used in current embedded systems.
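
For a feel of what programming the GPU with CUDA looks like in practice, here is a minimal kernel sketch. The Jetson TK1 kit itself ships a C/C++ toolkit; the sketch below instead uses Numba’s CUDA support in Python (an assumed substitution, chosen to keep the examples here in one language) to show the same grid-of-threads model, so treat it as an illustration of the programming model rather than TK1-specific code.

```python
# Minimal CUDA-style kernel via Numba: many GPU threads each handle one element.
# This is an illustrative substitution for the TK1's C/C++ CUDA toolkit.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)                 # global index of this thread
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # Numba copies arrays to/from the GPU
assert np.allclose(out, a + b)
```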

Continue reading “NVIDIA unveils first mobile supercomputer for embedded systems”

New NVIDIA Tesla K40 speeds up supercomputing and big data analytics

The NVIDIA Tesla K40 GPU accelerator is arguably the world’s highest performance accelerator ever built. It is capable of delivering extreme performance to a wide range of scientific, engineering, high performance computing (HPC), and enterprise applications.

Providing double the memory and up to 40 percent higher performance than its predecessor, the Tesla K20X GPU accelerator, and 10 times higher performance than the fastest CPU, the Tesla K40 GPU is the world’s first and highest-performance accelerator optimised for big data analytics and large-scale scientific workloads.

Featuring intelligent NVIDIA GPU Boost technology, which converts power headroom into a user-controlled performance boost, the Tesla K40 GPU accelerator enables users to unlock the untapped performance of a broad range of applications.

Continue reading “New NVIDIA Tesla K40 speeds up supercomputing and big data analytics”

Beginning of the Digital Industrial Economy

Worldwide IT spending is forecast to reach US$3.8 trillion in 2014, a 3.6 percent increase from 2013, but it’s the opportunities of a digital world that have IT leaders excited, according to Gartner.

The beginning of the Digital Industrial Economy will make every budget an IT budget; every company a technology company; every business a digital leader, and every person a technology company.

“The Digital Industrial Economy will be built on the foundations of the Nexus of Forces (which includes a confluence and integration of cloud, social collaboration, mobile and information) and the Internet of Everything by combining the physical world and the virtual,” said Peter Sondergaard, Senior Vice President of Gartner and Global Head of Research.

Continue reading “Beginning of the Digital Industrial Economy”

NVIDIA acquires PGI

NVIDIA has made further inroads into high performance computing (HPC) with the acquisition of The Portland Group (PGI), a leading independent supplier of compilers and tools.

Founded in 1989, PGI has a long history of innovation in HPC compiler technology for Intel, IBM, Linux, OpenMP, GPGPU, and ARM. Following the acquisition, it will continue to operate under the PGI name and develop OpenACC, CUDA Fortran and CUDA x86 for multicore x86, and GPGPUs. PGI will also continue to serve its customers, including chip makers, research labs and HPC computing centres.