When NVIDIA founder and CEO Jensen Huang takes the stage for a keynote at a major computer industry event, there is little doubt that he will announce a number of innovations and enhancements from his industry-leading GPU company. That is exactly what he did this week to kick off Computex 2025 in Taipei, Taiwan.
Anyone who has attended a major event with Huang keynoting is likely familiar with him unveiling a slew of innovations to advance AI. Huang opened the conference by describing how AI is revolutionizing the world, and then how NVIDIA is enabling that revolution.
Huang’s passion for the benefits AI can deliver is evident in the new products NVIDIA and its partners are rapidly developing.
“AI is now infrastructure,” Huang said. “And this infrastructure, just like the internet, just like electricity, needs factories. These factories are essentially what we build today.”
He added that these factories are “not the data centers of the past,” but factories where “you apply energy to it, and it produces something incredibly valuable.” Most of the news centered on products for building bigger, faster and more scalable AI factories.
Introducing NVLink Fusion
One of the biggest challenges in scaling AI is keeping data flowing between GPUs and systems. Traditional networks cannot move data reliably or fast enough to keep up with the connectivity demands. During his keynote, Huang described the challenges of scaling AI and why they are fundamentally a networking problem.
“The way you scale is not just to make the chips faster,” he said. “There’s only a limit to how fast you can make chips and how big you can make chips. In the case of [NVIDIA] Blackwell, we even connected two chips together to make it possible.”
NVIDIA NVLink Fusion aims to address these limitations, he said. NVLink connects a rack of servers over a single backbone and enables customers and partners to build their own custom rack-scale designs. The ability for system designers to pair third-party CPUs and accelerators with NVIDIA products creates new possibilities for how enterprises deploy AI infrastructure.
According to Huang, NVLink creates “an easy path to scale out AI factories to millions of GPUs, using any ASIC, NVIDIA’s rack-scale systems and the NVIDIA end-to-end networking platform.” It delivers up to 800 Gbps of throughput.
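To put that bandwidth figure in perspective, here is a quick illustrative calculation (my own back-of-envelope math, not from NVIDIA) of how long an ideal 800 Gbps link would take to move a large block of model weights:

```python
def transfer_seconds(payload_gb: float, link_gbps: float) -> float:
    """Ideal transfer time: payload in gigabytes, link rate in gigabits per second.

    Ignores protocol overhead, congestion and serialization effects.
    """
    return payload_gb * 8 / link_gbps

# Hypothetical example: shipping 500 GB of model weights over one 800 Gbps link.
print(f"{transfer_seconds(500, 800):.1f} s")  # prints "5.0 s"
```

In other words, at that rate a half-terabyte payload moves in about five seconds under ideal conditions, which is the scale of interconnect speed needed to keep millions of GPUs fed.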
Powered by Blackwell
Computing power is the fuel of AI innovation, and the engine driving NVIDIA’s AI ecosystem is its Blackwell architecture. Huang said Blackwell delivers a single architecture spanning cloud AI, enterprise AI, personal AI and edge AI.
Among the products powered by Blackwell is DGX Spark, described by Huang as being “for anyone who would like to have their own AI supercomputer.” DGX Spark is a smaller, more versatile version of the company’s DGX-1, which debuted in 2016. DGX Spark will be available from several computer manufacturers, including Dell, HP, ASUS, Gigabyte, MSI and Lenovo, and comes equipped with NVIDIA’s GB10 Grace Blackwell Superchip.
DGX Spark delivers up to 1 petaflop of AI compute and 128 GB of unified memory. “This is going to be your own personal DGX supercomputer,” Huang said. “This computer is the most performance you can possibly get out of a wall socket.”
Designed for the most demanding AI workloads, DGX Station is powered by the NVIDIA Grace Blackwell Ultra Desktop Superchip, which delivers up to 20 petaflops of AI performance and 784 GB of unified system memory. Huang said that is “enough capacity and performance to run a 1 trillion parameter AI model.”
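As a rough sanity check on that trillion-parameter claim, the weights-only memory footprint can be estimated with a simple sketch (my own illustration, with precision choices assumed, not stated in the keynote):

```python
def model_memory_gb(params: float, bytes_per_param: float) -> float:
    """Weights-only memory estimate in GB; ignores KV cache, activations and runtime overhead."""
    return params * bytes_per_param / 1e9

ONE_TRILLION = 1e12
for precision, nbytes in [("FP16", 2.0), ("FP8", 1.0), ("FP4", 0.5)]:
    print(f"{precision}: {model_memory_gb(ONE_TRILLION, nbytes):,.0f} GB")
# FP16: 2,000 GB / FP8: 1,000 GB / FP4: 500 GB
```

Only at 4-bit precision do the weights alone (about 500 GB) fit within DGX Station’s 784 GB of unified memory, so the claim presumably assumes a heavily quantized model; actual feasibility also depends on cache and activation memory.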
New Servers and Data Platform
NVIDIA also announced the new RTX PRO line of enterprise and Omniverse servers for agentic AI. Part of NVIDIA’s new Enterprise AI Factory design, the RTX PRO servers are “a foundation for partners to build and operate on-premises AI factories,” according to a company press release. The servers are available now.
Because the modern AI compute platform is different, it requires a different kind of storage platform. Huang said several NVIDIA partners are “building intelligent storage infrastructure” with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and the company’s AI Data Platform reference design.
Accelerating Development of Humanoid Robots
Robotics is another AI focus area for NVIDIA. In his keynote, Huang introduced Isaac GR00T N1.5, the first update to the company’s “open, generalized, fully customizable foundation model for humanoid reasoning and skills.” He also unveiled the Isaac GR00T-Dreams blueprint for generating synthetic motion data, known as neural trajectories, that physical AI developers can use to train robots on new behaviors, including how to adapt to changing environments.
Huang used his high-profile keynote to showcase how NVIDIA is keeping a heavy foot on the technology accelerator. Even for a company as forward-looking as NVIDIA, letting up would be unwise, because the rest of the marketplace is always trying to out-innovate it.