By KL Lim
At 4am this morning (Singapore time), while most folks in this part of the world were sleeping, NVIDIA Founder and CEO Jensen Huang kicked off his keynote address at GTC 2024 in San Jose.
The highly anticipated keynote was not held at the San Jose McEnery Convention Center this time round. Returning onsite after the pandemic years when there was no in-person event, Huang’s keynote this year was moved to the SAP Center, which can accommodate more than 11,000 participants.
“Accelerated computing has reached the tipping point — general purpose computing has run out of steam,” said Huang. “We need another way of doing computing… Accelerated computing is a dramatic speedup over general-purpose computing, in every single industry.”
His first announcement was the NVIDIA Blackwell platform designed to power the next generation of accelerated computing.
The new architecture was named after mathematician David Harold Blackwell of the University of California, Berkeley, who specialised in game theory and statistics and was the first Black scholar inducted into the National Academy of Sciences.
Per chip, Blackwell delivers 2.5x the FP8 training performance of its predecessor, Hopper, and 5x the inference performance with FP4. It features a fifth-generation NVLink interconnect that is twice as fast as Hopper’s and scales up to 576 GPUs.
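As a rough, generic illustration of why those lower-precision formats matter, the sketch below uses a simple scaled-integer grid rather than NVIDIA’s actual FP8/FP4 tensor formats or any Blackwell API: fewer bits per value mean less memory traffic and more operations per cycle, at the cost of some rounding error.

```python
# Generic illustration only -- not NVIDIA's implementation of FP8/FP4.
# Fewer bits per weight shrink memory footprint but introduce rounding error.
import numpy as np

def quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric, per-tensor quantization onto a signed grid of `bits` bits."""
    levels = 2 ** (bits - 1) - 1              # e.g. 7 positive levels for 4 bits
    scale = np.abs(weights).max() / levels    # map the largest weight to the grid edge
    q = np.round(weights / scale)             # snap each weight to the nearest grid point
    return np.clip(q, -levels, levels) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1_000_000).astype(np.float32)

for bits in (8, 4):
    w_q = quantize(w, bits)
    err = np.abs(w - w_q).mean()
    print(f"{bits}-bit grid: {bits/32:.0%} of FP32 memory, mean abs error {err:.4f}")
```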
“Generative AI is the defining technology of our time. Blackwell is the engine to power this new industrial revolution. Working with the most dynamic companies in the world, we will realise the promise of AI for every industry,” said Huang.
Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI are among many expected to adopt Blackwell.

Also announced was the new NVIDIA GB200 Grace Blackwell Superchip, which connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.
NVIDIA’s next-generation AI supercomputer will be powered by NVIDIA GB200 Grace Blackwell Superchips for superscale generative AI training and inference workloads. Featuring a new, highly efficient, liquid-cooled rack-scale architecture, the new DGX SuperPOD delivers 11.5 exaflops of AI supercomputing at FP4 precision and 240 terabytes of fast memory.
In his keynote, Huang also revealed new data centre products and demonstrated the revolutionary capabilities of GenAI, highlighting advances in semiconductors, software, services, simulation, and medical technology.
Dubbed the “Woodstock of AI”, GTC is now in its 15th year. With AI grabbing the world’s attention, it has become the industry’s most important AI conference, featuring 900 sessions and 300 exhibitors.
It was indeed worthwhile staying up and sacrificing sleep just to catch the keynote and get a glimpse of what is to come.
Watch the recording of Huang’s GTC keynote and register to attend sessions at GTC, which runs through March 21.
