AMD will be the first to market with a brand-new Ultra Ethernet-based networking card, and Oracle will be the first cloud service provider to deploy it.
The announcement came at the recent Advancing AI event, where AMD launched its newest Instinct MI350 series GPUs and introduced the MI400X, which will be delivered next year. Overlooked in that news blitz is the availability of the Pensando Pollara 400GbE network interface card, which marks the industry's first NIC compliant with the Ultra Ethernet Consortium's (UEC) 1.0 specification.
AMD announced Pollara in 2024, but it is only just starting to ship it. And just as the 400Gb Pollara begins shipping, AMD also announced a next-generation 800Gb card dubbed Vulcano, which is also UEC-compliant. AMD's announcement came just days after the UEC published its 1.0 specification for Ultra Ethernet technology, designed for hyperscale AI and HPC data centers.
The UEC was launched in 2023 under the Linux Foundation. Members include major tech-industry players such as AMD, Intel, Broadcom, Arista, Cisco, Google, Microsoft, Meta, Nvidia, and HPE. The specification covers GPU and accelerator interconnects as well as support for data center fabrics and scalable AI clusters.
AMD's Pensando Pollara 400GbE NICs are designed for massive scale-out environments containing thousands of AI processors. Pollara is built on customizable hardware that supports a fully programmable Remote Direct Memory Access (RDMA) transport and hardware-based congestion control.
Pollara supports GPU-to-GPU communication with intelligent routing technologies to reduce latency, making it comparable to Nvidia's NVLink c2c. In addition to being UEC-ready, Pollara 400 offers RoCEv2 compatibility and interoperability with other NICs.
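For a sense of what an RDMA transport means on the software side, the sketch below shows the standard setup that RoCEv2-capable NICs of this class accelerate: open an RDMA device, create a protection domain, and register a buffer the NIC can read and write directly, bypassing the host CPU on the data path. This is a generic illustration using the Linux libibverbs API, not AMD's own SDK; the choice of the first device and the 1 MiB buffer size are arbitrary assumptions for the example.

```c
/* Minimal RDMA verbs setup sketch (generic libibverbs, not an AMD-specific API).
   Build with: gcc rdma_sketch.c -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void) {
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    /* Open the first RDMA-capable device (e.g., a RoCEv2 NIC). */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "failed to open device\n"); return 1; }

    /* A protection domain scopes which queue pairs may access which memory. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the NIC can DMA into and out of it directly. */
    size_t len = 1 << 20;              /* 1 MiB, illustrative only */
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    if (!mr) { fprintf(stderr, "memory registration failed\n"); return 1; }

    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* Tear down in reverse order. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

From here, a real application would create queue pairs and post send/receive or RDMA write work requests; the point of cards like Pollara is that this transport, along with congestion control, runs in programmable NIC hardware rather than on the host.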
At the Advancing AI event, AMD CEO Lisa Su introduced the company's next-generation, scale-out AI NIC, Vulcano. Vulcano is fully UEC 1.0 compliant. It supports PCIe and dual interfaces to connect directly to both CPUs and GPUs, and it delivers 800 Gb/s of line-rate throughput to scale for the largest systems.
When combined with Helios, AMD's new custom AI rack design, every GPU in the rack is connected via the high-speed, low-latency UALink, tunneled over standard Ethernet. The result is a custom AI system comparable to Nvidia's NVL72, where 72 GPUs are made to look like a single processor to the system.
Oracle is the first to line up behind Pollara and Helios, and it likely won't be the last. Oracle lags the cloud leaders AWS and Microsoft and holds only about 3% of the public cloud market.