Why Ethernet matters for AI networking

I have worked on the Ethernet protocol throughout my career. I have witnessed it in its infancy and have observed, and in some cases helped shape, its growth. It has both a simplicity and an openness to it that the world has embraced. Ethernet is now being called upon to help meet the voracious new networking demands of AI, and I am proud to say that Nokia is answering that call.

AI’s growing appetite: more compute, power, cooling … and a new network

We all agree that artificial intelligence is reshaping industries, daily life, and society at large. But that transformation comes at a price. For providers of AI services, the price is much more compute, much more power, much more cooling, and—crucially—a different approach to networking.

Whether you’re running a private AI cloud for an enterprise or offering GPU-as-a-Service (GPUaaS), the nature of AI training and inference forces you to rethink data-center networking. Modern AI workloads run on hundreds, thousands, even tens of thousands of GPUs that must act as a single, tightly coupled “brain” or cluster. This happens in the back-end “scale-out” part of the network, where zero packet loss and ultra-low latency are non-negotiable.

From InfiniBand to Ethernet: Why the shift happened

For years, InfiniBand was the go-to technology for AI and high-performance computing (HPC). Its protocol provides end-to-end delivery guarantees, offering the lossless fabric that early AI clusters needed.

But today the industry is shifting toward Ethernet. Why? Because Ethernet has been proven for decades in the world’s largest, most demanding networks, and it brings a host of advantages that align perfectly with the evolving AI landscape.

  • Broad ecosystem – Switches, NICs, test gear, SFPs, and open-source management platforms are all readily available.
  • Fast-paced evolution – New link speeds, optics, cabling, and protocol enhancements appear regularly.
  • Universal familiarity – Engineers across the globe understand Ethernet, making deployment and troubleshooting smoother.
  • Scalable with IP – Ethernet, combined with IP, can scale out to support extremely large networks.
  • Open, multivendor flexibility – You’re not locked into a single vendor; you can mix and match components that best fit your design.

These traits have turned Ethernet into a viable and often preferable alternative to InfiniBand for hyperscalers, cloud providers, and enterprises building AI-focused data centers.

How today’s Ethernet powers AI back-end networks

Today’s Ethernet-based AI back-end “scale-out” solution keeps the proven InfiniBand transport layer but encapsulates it in UDP, IP, and Ethernet—the result is RDMA over Converged Ethernet (RoCEv2).
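
To make the layering concrete, here is a minimal Python sketch (using Scapy) that assembles a RoCEv2-style frame. The MAC and IP addresses are placeholders, and a raw byte string stands in for the InfiniBand Base Transport Header and RDMA payload; a real RDMA NIC builds and parses all of this in hardware. UDP destination port 4791 is the well-known RoCEv2 port.

```python
# Minimal illustration of RoCEv2 encapsulation: InfiniBand transport riding
# inside UDP/IP/Ethernet. Requires scapy (pip install scapy).
from scapy.all import Ether, IP, UDP, Raw

ROCEV2_UDP_PORT = 4791  # IANA-assigned UDP destination port for RoCEv2

# Stand-ins for the InfiniBand Base Transport Header (BTH) and RDMA payload.
ib_transport = bytes(12)          # 12-byte BTH placeholder
rdma_payload = b"gradient chunk"  # e.g., a slice of an all-reduce message

frame = (
    Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
    / IP(src="10.0.0.1", dst="10.0.0.2", tos=0x02)  # ECT(0): ECN-capable, so switches mark instead of drop
    / UDP(sport=49152, dport=ROCEV2_UDP_PORT)
    / Raw(ib_transport + rdma_payload)
)

frame.show()  # prints the layer stack: Ether / IP / UDP / Raw
```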

To keep packet loss at bay, RoCEv2 relies on Data Center Quantized Congestion Notification (DCQCN), which blends two key techniques:

  1. Explicit Congestion Notification (ECN) – Marks packets when queues start to fill; the receiver echoes those marks back to the sender as congestion notification packets (CNPs), signaling it to throttle.

  2. Priority Flow Control (PFC) – Pauses traffic on specific priority classes when buffer thresholds are crossed.

Both mechanisms activate when leaf or spine switch queues exceed predefined limits, helping maintain the lossless environment that AI training demands.
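
As a rough illustration of how a DCQCN sender reacts to those signals, the sketch below models the rate cut on receipt of a CNP and the gradual recovery when the network stays quiet. It is simplified from the published DCQCN algorithm; the class name and constants are illustrative, not values any particular NIC uses.

```python
# Simplified sender-side sketch in the spirit of DCQCN's reaction point.
# Constants and the recovery schedule are illustrative only.

class DcqcnSender:
    def __init__(self, line_rate_gbps: float, g: float = 1 / 16):
        self.line_rate = line_rate_gbps
        self.current_rate = line_rate_gbps   # Rc: rate actually used
        self.target_rate = line_rate_gbps    # Rt: rate to recover toward
        self.alpha = 1.0                     # estimate of congestion severity
        self.g = g                           # EWMA gain for alpha updates

    def on_cnp(self) -> None:
        """Receiver saw ECN-marked packets and sent a CNP: cut the rate."""
        self.target_rate = self.current_rate
        self.current_rate *= (1 - self.alpha / 2)   # multiplicative decrease
        self.alpha = (1 - self.g) * self.alpha + self.g

    def on_timer_no_cnp(self) -> None:
        """No CNP for a full update period: decay alpha and recover rate."""
        self.alpha = (1 - self.g) * self.alpha
        # Fast recovery: move halfway back toward the target rate.
        self.current_rate = min(self.line_rate,
                                (self.current_rate + self.target_rate) / 2)


sender = DcqcnSender(line_rate_gbps=400)
sender.on_cnp()           # congestion signaled -> rate drops below 400 Gb/s
sender.on_timer_no_cnp()  # quiet period -> rate climbs back toward the target
print(round(sender.current_rate, 1))
```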

The Ultra Ethernet Consortium (UEC): Pushing Ethernet further

While RoCEv2 works well for many deployments, it struggles in massive AI clusters with complex topologies and bursty traffic. Issues such as head-of-line blocking from PFC and the lack of real-time congestion signaling can make the network feel fragile.

Enter the Ultra Ethernet Consortium (UEC). UEC is modernizing RDMA into a performant, open, and interoperable Ethernet-based full-communications stack designed specifically for AI and HPC at scale. Specification 1.0, released earlier this year, adds capabilities like advanced load-balancing, refined congestion-control algorithms, built-in security, and richer API support. The focus remains on the scale-out portion of the AI back-end, where the biggest performance gains are needed.
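
To show why advanced load balancing matters, the conceptual sketch below (not taken from the UEC specification) contrasts classic per-flow ECMP hashing, where a large RDMA flow stays pinned to one uplink, with per-packet spraying across all equal-cost paths, the kind of multipath behavior generally associated with Ultra Ethernet Transport. The path names and flow tuple are made up.

```python
# Conceptual comparison of per-flow ECMP hashing vs. per-packet spraying.
from collections import Counter

PATHS = ["spine-1", "spine-2", "spine-3", "spine-4"]

def ecmp_per_flow(flow_tuple, num_packets):
    """Classic ECMP: every packet of a flow hashes to the same uplink."""
    path = PATHS[hash(flow_tuple) % len(PATHS)]
    return Counter({path: num_packets})

def per_packet_spray(num_packets):
    """Per-packet spraying: packets are distributed across all uplinks."""
    return Counter(PATHS[i % len(PATHS)] for i in range(num_packets))

flow = ("10.0.0.1", "10.0.0.2", 49152, 4791)  # one large RDMA flow
print("per-flow ECMP:   ", dict(ecmp_per_flow(flow, 1000)))
print("per-packet spray:", dict(per_packet_spray(1000)))
```

With per-flow hashing, a single elephant flow can saturate one spine uplink while the others sit idle; spraying spreads the load evenly, at the cost of out-of-order arrivals that the transport must be designed to tolerate.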

In short, UEC is addressing the very pain points that keep us up at night when we watch AI jobs stall.

Nokia’s role in shaping the future of AI networking

I’ve watched Nokia evolve into a true Ethernet champion over the past two decades. That expertise carries through to our data center fabric solution, which has delivered proven ultra-low-latency, lossless transport for AI clusters at customers like Nscale.

What sets Nokia apart for me is its commitment to reliability, openness, and programmability. The same APIs that let us automate data-center operations also let us experiment with new congestion-control schemes—exactly the kind of flexibility the UEC specification demands.

Nokia is actively participating in the UEC and is already building features based on Specification 1.0. Recent internal tests have demonstrated successful transmission of Ultra Ethernet Transport (UET) traffic on the 7220 IXR and 7250 IXR, underscoring Nokia’s early commitment to implementing and evolving the standard.

For a deeper look at Nokia’s commitment to the Ethernet standard, our work in the UEC, and Nokia’s AI-ready portfolio, we recently recorded a Packet Pushers TechByte that explains our position in more detail.

Closing thoughts

AI is rewriting the rules of what data center networks must deliver. By embracing Ethernet’s rich ecosystem, rapid innovation, and the forward-looking work of the UEC, we can build networks that keep pace with ever-growing AI demands.

Let’s keep the conversation going. If you’re curious about how Ethernet, backed by the UEC and Nokia’s cutting-edge switches, can accelerate your AI projects, I’d love to hear about the challenges you’re facing.

Rudy Hoebeke

About Rudy Hoebeke

Rudy Hoebeke is Vice President of Product Management for Nokia’s IP Service Routing portfolio, with overall product management responsibility for the company’s IP/MPLS and multi-service routers and data center switches. He has over 20 years of experience in the communications and networking industry, in the areas of engineering, network design, technology and product strategy, and product management. Rudy holds an MSc in Electrical Engineering from the University of Brussels (VUB) and a Master’s in Business Administration from the Vlerick Leuven-Gent Business School.

Connect with Rudy on LinkedIn

 
