Researchers working on sequencing the novel coronavirus and the genomes of people afflicted with COVID-19 now have a helping hand — NVIDIA is offering a free 90-day licence to Parabricks, which uses GPUs to accelerate the analysis of sequence data by as much as 50 times.
Amazon Web Services (AWS) has announced general availability of G4 instances, new NVIDIA T4 GPU-powered Amazon Elastic Compute Cloud (Amazon EC2) instances designed to accelerate machine learning inference and graphics-intensive workloads.
Take that, Intel! GPU giant NVIDIA has fired back at Intel, which reported that its Xeon Scalable processors outperform NVIDIA GPUs on ResNet-50 deep learning inference.
It’s no dinosaur, but the newly announced NVIDIA Titan RTX, dubbed T-Rex, is certainly very powerful — to the tune of 130 teraflops of deep learning performance and 11 GigaRays per second of ray-tracing performance.
On June 13, Intel had the GPU world in a flurry when it tweeted “Intel’s first GPU coming in 2020”. The media were quick to post stories of this incoming new GPU, which would add interesting competition to a market dominated by NVIDIA with AMD a distant second.
Security is a growing concern among governments and organisations of all sizes. They must balance the need to provide access to the right people while keeping suspicious folks at bay. Any lapse can result in dire consequences that impact confidence in the country or company.
It’s been said that more data was generated in 2017 than in the previous 5,000 years. According to Statista, this figure will increase 10 times in less than a decade.
Cryptocurrency mining has been given a boost with the revelation that Samsung is working on a chip just for that purpose.
China’s top technology companies are betting big on the NVIDIA Volta platform.
Alibaba Cloud, Baidu, and Tencent are incorporating NVIDIA Tesla V100 GPU accelerators into their data centres and cloud-service infrastructures to accelerate AI for a broad range of enterprise and consumer applications.
At the heart of the new Volta-based systems is the NVIDIA V100 data centre GPU. Built with 21 billion transistors, it provides a 5x improvement over the preceding NVIDIA Pascal architecture P100 GPU accelerator, while delivering the equivalent performance of 100 CPUs for deep learning. That gain is roughly 4x what Moore’s law would have predicted over the same period of time.
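As a rough sanity check, the Moore’s-law comparison can be sketched numerically. Note that the one-year Pascal-to-Volta gap and the two-year doubling rate are modelling assumptions, not figures from the announcement:

```python
# Back-of-envelope check of the Moore's-law comparison.
# Assumptions (not from the announcement): ~1 year between the P100 and
# V100 launches, and Moore's law modelled as a 2x improvement every 2 years.
years_between = 1.0
moore_factor = 2 ** (years_between / 2)   # ~1.41x gain Moore's law predicts
volta_gain = 5.0                          # 5x over Pascal, per the announcement
ratio = volta_gain / moore_factor         # how far beyond the prediction
print(round(moore_factor, 2), round(ratio, 1))  # 1.41 3.5
```

Under these assumptions the V100’s gain lands roughly 3.5x beyond Moore’s prediction, in the same ballpark as the 4x figure cited.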
Inspur, Lenovo and Huawei are using the NVIDIA HGX reference architecture to offer Volta-based accelerated systems for hyperscale data centres. Using HGX as a starter “recipe,” original equipment manufacturer (OEM) and original design manufacturer (ODM) partners can work with NVIDIA to more quickly design and bring to market a wide range of qualified GPU-accelerated AI systems for hyperscale data centres, meeting the industry’s growing demand for AI cloud computing.
Google has expanded its NVIDIA GPU offerings on the Google Cloud Platform. These include:
- Performance boost with the public launch of NVIDIA P100 GPUs in beta
- NVIDIA Tesla K80 GPUs available on Google Compute Engine
- Introduction of sustained use discounts on both the Tesla K80 and P100 GPUs
According to a Google Cloud Platform blog, cloud GPUs can accelerate workloads including machine learning training and inference, geophysical data processing, simulation, seismic analysis, molecular modeling, genomics and many more high performance compute use cases.
NVIDIA is investing in Deep Instinct, an Israel-based startup that uses deep learning to thwart cyber attacks.
Deep Instinct uses a GPU-based neural network and CUDA to achieve 99 percent detection rates, compared with about 80 percent detection from conventional cyber security software. Its software can automatically detect and defeat the most advanced cyber attacks.
“Deep Instinct is an emerging leader in applying GPU-powered AI through deep learning to address cybersecurity, a field ripe for disruption as enterprise customers migrate away from traditional solutions. We’re excited to work together with Deep Instinct to advance this important field,” said Jeff Herbst, Vice President of Business Development of NVIDIA.
NVIDIA is among six technology companies to receive a total of US$258 million in funding from the US Department of Energy’s Exascale Computing Project (ECP).
The funding is to accelerate the development of next-generation supercomputers with the delivery of at least two exascale computing systems, one of which is targeted by 2021.
Such systems would be about 50 times more powerful than the US’ fastest supercomputer, Titan, located at Oak Ridge National Laboratory.
Tokyo Institute of Technology plans to create Japan’s fastest AI supercomputer, which will deliver more than twice the performance of its predecessor and slide into the world’s top 10 fastest systems.
Called Tsubame 3.0, it will use Pascal-based NVIDIA P100 GPUs that are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double precision performance.
Tsubame 3.0 will excel in AI computation with more than 47 PFLOPS of AI horsepower. Operated alongside Tsubame 2.5, it is expected to deliver a combined 64.3 PFLOPS, making it Japan’s highest-performing AI supercomputer.
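The combined figure implies a contribution from Tsubame 2.5 that can be recovered by simple subtraction. The exact 47.2 PFLOPS value used here is the widely reported figure for Tsubame 3.0, not one stated above:

```python
# Recover the implied Tsubame 2.5 contribution from the combined figure.
# Assumption: Tsubame 3.0 delivers the widely reported 47.2 PFLOPS of AI
# (half-precision) performance; the article only says "more than 47".
combined_pflops = 64.3
tsubame3_pflops = 47.2
tsubame25_pflops = combined_pflops - tsubame3_pflops
print(round(tsubame25_pflops, 1))  # 17.1
```

That leaves roughly 17.1 PFLOPS of AI performance attributable to the older Tsubame 2.5 system.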
NVIDIA has updated its GPU-accelerated deep learning software, doubling deep learning training performance.
With the new software, data scientists and researchers can supercharge their deep learning projects and product development work by creating more accurate neural networks through faster model training and more sophisticated model design.
The NVIDIA DIGITS Deep Learning GPU Training System version 2 (DIGITS 2) and NVIDIA CUDA Deep Neural Network library version 3 (cuDNN 3) provide significant performance enhancements and new capabilities.
Server vendors are leveraging the performance of NVIDIA graphics processing unit (GPU) accelerators for 64-bit ARM development systems for high performance computing (HPC).
ARM64 server processors were primarily designed for micro-servers and web servers because of their extreme energy efficiency. Coupled with GPU accelerators using the NVIDIA CUDA 6.5 parallel programming platform, they can now tackle HPC-class workloads.
GPUs provide ARM64 server vendors with the muscle to tackle HPC workloads, enabling them to build high-performance systems that maximise the ARM architecture’s power efficiency and system configurability.
NVIDIA has announced the first GPU-acceleration of Adobe Illustrator CC – enabling graphic artists to seamlessly interact with vector art at any resolution, and to smoothly pan and zoom significantly faster than previously possible. With this new feature powered by NVIDIA GPUs, Illustrator CC can now render the entire canvas up to 10 times faster than previously possible on Windows 7- or 8-based systems with compatible GPUs.
GPU acceleration in Illustrator CC specifically benefits the wide range of users working with 2D vector graphics, including graphic designers, illustrators and typographers creating media across web, print, mobile and more. While GPU acceleration is the norm in 3D content creation, those working in 2D have traditionally been restricted to driving all computing performance through the CPU. Now 2D artists and designers can experience fully interactive performance with even the most complex, high-resolution graphics.
This new Illustrator CC performance boost is based on an optimised NVIDIA technology called NV Path Rendering, implemented as an extension to OpenGL, the open graphics standard.
Xiaomi introduced a slew of products in Beijing yesterday, and one that really stood out was the Mi Pad, its first tablet, powered by the ultra-fast NVIDIA Tegra K1 mobile processor.
Sporting a 7.9-inch display with 2,048 x 1,536 resolution, this tablet comes with very long battery life — its 6,700 mAh battery is good for up to 1,300 hours of standby time or 11 hours of video streaming. It features 8MP rear and 5MP front cameras, 2GB of RAM, and 16GB or 64GB of built-in memory. If more storage is needed, there’s a microSD slot.
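The battery claims translate into average current draws that can be estimated directly from the capacity (a rough calculation that ignores battery voltage, conversion losses and non-linear discharge):

```python
# Rough average-current estimates implied by the quoted battery specs.
# Simplification: treats mAh capacity / hours as average current, ignoring
# voltage, conversion losses and non-linear discharge behaviour.
capacity_mah = 6700
standby_hours = 1300
video_hours = 11
standby_ma = capacity_mah / standby_hours  # average draw in standby
video_ma = capacity_mah / video_hours      # average draw while streaming video
print(round(standby_ma, 1), round(video_ma))  # 5.2 609
```

In other words, the quoted figures imply roughly 5 mA average draw in standby versus around 600 mA while streaming video.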
What’s amazing is under the hood. Powering the Mi Pad is the 192-core NVIDIA Tegra K1 mobile processor, which is based on the parallel processing GPU architecture found in the world’s most powerful supercomputers.
Visitors to SIGGRAPH Asia last week were the first to see and catch the demo of NVIDIA GRID VCA in Asia. Essentially, NVIDIA GRID VCA is a powerful GPU-based system that runs complex applications such as Autodesk […]
IBM and NVIDIA plan to collaborate on GPU-accelerated versions of IBM’s wide portfolio of enterprise software applications — taking GPU accelerator technology for the first time into the heart of enterprise-scale data centres.
The collaboration aims to enable IBM customers to more rapidly process, secure and analyse massive volumes of streaming data.
“Harnessing GPU technology to IBM’s enterprise software platforms will bring advanced, in-memory processing to a wider variety of new application areas,” said Sean Poulley, Vice President of Databases and Data Warehousing at IBM. “We are looking at a new generation of higher-performance solutions to help data center customers overcome their most challenging computing problems.”
The NVIDIA Tesla K40 GPU accelerator is arguably the highest-performance accelerator built to date. It is capable of delivering extreme performance to a wide range of scientific, engineering, high performance computing (HPC), and enterprise applications.
Providing double the memory and up to 40 percent higher performance than its predecessor, the Tesla K20X GPU accelerator, and 10 times higher performance than the fastest CPU, the Tesla K40 GPU is the world’s first and highest-performance accelerator optimised for big data analytics and large-scale scientific workloads.
Featuring intelligent NVIDIA GPU Boost technology, which converts power headroom into a user-controlled performance boost, the Tesla K40 GPU accelerator enables users to unlock the untapped performance of a broad range of applications.
AMD fired the first salvo after a long period of silence to reignite the GPU battle. But, NVIDIA has hit back immediately and regained the crown with the GeForce GTX 780 Ti. Besides sheer performance, NVIDIA’s armoury also includes G-SYNC, GeForce Experience and ShadowPlay.
The new card delivers smooth frame rates at extreme resolutions for the latest and hottest PC games, including Assassin’s Creed IV: Black Flag, Call of Duty: Ghosts and Batman: Arkham Origins. It does all this with the cool, quiet operation that is critical to providing an immersive gaming experience.
Powering the GPU is the NVIDIA Kepler architecture, which provides an advanced, low-thermal-density design that translates into better cooling, quieter acoustics and record-breaking performance.
Visual computing is no longer just about gaming but has now permeated everyday lives, Dr Simon See, Director and Chief Solution Architect of NVIDIA, told a gathering of 450 start-ups, investors and R&D providers in digital media.
He pointed out that GPUs now help power the 3D web, location-based visualisation applications, creative content creation, computer vision, user interfaces, image recognition, HD video processing, and virtual worlds.
In his talk, Simon also discussed the vibrant ecosystem of developers that have adopted the CUDA GPU computing platform, how early stage companies can leverage the GPU for visual and other computing applications, and what global programmes NVIDIA offers to nurture and inspire innovation and business opportunities throughout these ecosystems.
NVIDIA has launched the NVIDIA GeoInt Accelerator, the world’s first GPU-accelerated geospatial intelligence platform, enabling security analysts to extract actionable insights faster and more accurately than ever before from vast quantities of raw data, images and video.
The platform provides defence and homeland security analysts with tools that enable faster processing of high-resolution satellite imagery, facial recognition in surveillance video, combat mission planning using geographic information system (GIS) data, and object recognition in video collected by drones.
It offers a complete solution consisting of an NVIDIA Tesla GPU accelerated system, software applications for geospatial intelligence analysis, and advanced application development libraries.
By Edward Lim, Managing Consultant, CIZA Concept
Established in 2006 as a research institute at the National University of Singapore (NUS), the NUS Risk Management Institute (RMI) is dedicated to financial risk management. Its establishment was supported by the Monetary Authority of Singapore (MAS) under its program on Risk Management and Financial Innovation.
In 2009, RMI embarked on the non-profit Credit Research Initiative (CRI) in response to the financial crisis, with the intent to spur research and development in the critical area of credit rating. More than a typical research project, CRI set out to demonstrate the operational feasibility of its research and become a trusted source of credit information.
CRI currently covers more than 35,000 companies in 106 economies in Asia-Pacific, North America, Europe, Latin America, Africa, and the Middle East.
By Edward Lim, Managing Consultant, CIZA Concept
Founded by Rich Ho in Singapore in 2004, Richmanclub Studios is a motion picture production company. Its first official production was the short film, “The Alien Invasion” in 2004, which has been shown around the world. The film was the first Singaporean short film to be nominated for the “Chinese Oscar”, The Golden Horse Awards 2004 for Best International Digital Short Film.
The studio has also won the Special Technical Achievement Award (Hive Film Festival), Audience Favorite (Substation First Take) and the Asia-Pacific wide Gold Award-Digital Art (ACMSIGGRAPH ComGraph).
In 2011, Richmanclub Studios launched two additional departments – Richopus Music to offer music production services and IVI VFX to provide post-production services.
Barely a month after it introduced the first GeForce 700 series GPUs — the NVIDIA GeForce GTX 780 and the NVIDIA GeForce GTX 770 — NVIDIA has added another new card to its lineup. Priced at US$249, the new NVIDIA GeForce GTX 760 GPU is designed to deliver extreme frame rates for this year’s hottest PC games, including Call of Duty: Ghosts, Watch Dogs, and Battlefield 4.
According to NVIDIA, this would be its last GeForce 700 series card to be introduced this year. The new lineup now consists of:
- GeForce GTX TITAN/GeForce GTX 690
- GeForce GTX 780
- GeForce GTX 770
- GeForce GTX 760
- GeForce GTX 660
- GeForce GTX 650 Ti BOOST
- GeForce GTX 650 Ti
- GeForce GTX 650
Powered by an NVIDIA Kepler architecture-based GPU with an incredible 2.3 teraflops of processing horsepower, the GeForce GTX 760 is more powerful than the next-generation game consoles expected by the end of the year.
NVIDIA has unleashed the full graphics potential of enterprise desktop virtualisation with the availability of NVIDIA GRID vGPU integrated into Citrix XenDesktop 7.
NVIDIA GRID vGPU technology addresses a challenge that has grown in recent years with the rise of employees using their own notebooks and portable devices for work. These workers have increasingly relied on desktop virtualisation technologies for anytime access to computing resources, but until now this was generally limited to standard enterprise applications. Performance and compatibility constraints had made it impractical to virtualise demanding applications such as building information management (BIM), product-lifecycle management (PLM) and video and photo editing tools.
Two decades ago, hardware-based graphics replaced software emulation. Desktop virtualisation solutions stood alone as the only modern computing form without dedicated graphics hardware. As a result, an already busy virtualised CPU limited performance and software emulation hampered application compatibility.
NVIDIA has reported revenue for Q1 of fiscal 2014, ended April 28, 2013, of US$954.7 million, down 13.7 percent from US$1.11 billion in Q4 of fiscal 2013.
GAAP earnings per diluted share were US$0.13, down 53.6 percent from US$0.28 in the prior quarter. Non-GAAP earnings per diluted share were US$0.18, down 48.6 percent from US$0.35 in the prior quarter.
As previously announced, NVIDIA plans to return in excess of $1 billion this fiscal year to shareholders in the form of share repurchases and quarterly dividend payments. During Q1, NVIDIA returned US$146.3 million to shareholders by repurchasing US$100 million of shares, and paying US$46.3 million of dividends, or US$0.075 per share.
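The capital-return figures above are internally consistent, and the per-share dividend implies a share count; the count shown here is derived purely from those numbers, not stated by NVIDIA:

```python
# Consistency check of the quoted capital-return figures, plus the share
# count implied by the dividend (derived, not stated in the article).
buybacks_musd = 100.0
dividends_musd = 46.3
total_musd = buybacks_musd + dividends_musd   # should match the US$146.3M returned
implied_shares_m = dividends_musd / 0.075     # millions of shares outstanding
print(round(total_musd, 1), round(implied_shares_m))  # 146.3 617
```

The US$0.075-per-share dividend on US$46.3 million of payouts implies roughly 617 million shares outstanding at the time.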
What’s the sweet spot in pricing for gamers looking for a graphics card to play this year’s hottest games? While those who are more financially endowed will go for the highest end cards, such as the NVIDIA GeForce GTX 680 or 690, NVIDIA believes that most mainstream gamers are prepared to fork out around US$169.
Putting this belief into action, the company has just launched the NVIDIA GeForce GTX 650 Ti BOOST GPU, which is based on the NVIDIA Kepler architecture and equipped with 768 NVIDIA CUDA cores. This new product is available in two flavours – the 2GB version for US$169 and the 1GB configuration for US$149.
This introduction has led to a revision of pricing for other NVIDIA cards:
- the entry-level GeForce GTX 650 now goes for US$109
- the GeForce GTX 650 Ti is now priced at US$129
NVIDIA has introduced the industry’s first visual computing appliance that enables businesses to deliver ultra-fast GPU performance to any Windows, Linux or Mac client on their network.
The NVIDIA GRID Visual Computing Appliance (VCA) is a powerful GPU-based system that runs complex applications such as those from Adobe Systems, Autodesk and Dassault Systèmes, and sends their graphics output over the network to be displayed on a client computer. This remote GPU acceleration gives users the same rich graphics experience they would get from an expensive, dedicated PC under their desk.
NVIDIA GRID VCA provides enormous flexibility to small and medium-size businesses with limited IT infrastructures. Their employees can, through the simple click of an icon, create a virtual machine called a workspace. These workspaces – which are, effectively, dedicated, high-performance GPU-based systems – can be added, deleted or reallocated as needed.