Powering the AI era: Why energy efficiency defines the future of data center networking

DCN racks in a server room

The artificial intelligence (AI) supercycle is reshaping the technology landscape faster than ever. As organizations race to understand how AI will impact their business, cloud companies continue to invest in data center infrastructure at unprecedented scale. As AI CapEx flows down from cloud providers to data center facilities and all the way to the semiconductor industry, the entire supply chain is being stressed to cope with increased demand. That stress reveals the weakest links in the chain, and it quickly becomes clear that one specific area is the gating factor for everything else.

The power problem

The densification of AI clusters leverages capacity increases both at the individual accelerator level, with FLOPS improving significantly across generations, and at the system level, with cluster sizes reaching many hundreds of thousands of xPUs. Even with efficiency gains on a relative basis, the total power consumption of a system will still grow.

Global data center electricity use is projected to more than double by 2030, from ~415 TWh to ~945 TWh. Peak demand could reach 130 GW by 2028, comparable to the entire power capacity of some large countries. [1] In major markets like Northern Virginia, lead times to secure grid power for new data centers can exceed three years. [2]
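As a quick sanity check on those figures, annual energy can be converted into an average continuous power draw. The short Python sketch below uses the projected values cited above; the conversion itself is simple unit arithmetic and lands at roughly 108 GW of average demand, plausibly below the ~130 GW peak estimate.

```python
# Back-of-envelope check: convert projected annual data center energy use into
# an average continuous power draw. Inputs are the projections cited above [1].
HOURS_PER_YEAR = 8_760  # 365 days x 24 hours

annual_energy_twh_2030 = 945   # projected global data center electricity use, TWh per year
peak_demand_gw_2028 = 130      # projected peak demand, GW

# TWh/year -> GWh/year, then divide by hours in a year to get average GW
average_power_gw = annual_energy_twh_2030 * 1_000 / HOURS_PER_YEAR

print(f"Average draw implied by {annual_energy_twh_2030} TWh/yr: {average_power_gw:.0f} GW")
print(f"Cited peak demand estimate: {peak_demand_gw_2028} GW")
```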

Why power efficiency matters more than ever

First, power efficiency directly enables our customers to deploy more capable networks within existing power constraints. When your infrastructure consumes less energy per unit of throughput, you can pack more performance into the same footprint. This isn't just an incremental advantage—it's a fundamental enabler of growth.

Second, using fewer components to achieve the same or better performance dramatically improves system reliability. Every component removed from a system is one less potential point of failure. In mission-critical data center environments where downtime costs can reach thousands of dollars per minute, this reliability advantage translates directly to business value.

Third, power savings flow straight to the operational expense line, and in an era of rising energy costs, these savings compound year after year. When you consider the total cost of ownership for networking equipment, factoring in purchase price, power consumption, cooling requirements and maintenance, energy-efficient solutions deliver substantially better economics over their operational lifetime.
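To put a rough number on that compounding effect, here is a minimal sketch of a lifetime energy-cost comparison. Every input (system power, PUE, electricity price, service life) is a hypothetical assumption chosen purely for illustration, not a figure from Nokia or from this article.

```python
# Illustrative only: how a lower power draw compounds into lifetime operating cost.
# All inputs are hypothetical assumptions, not vendor figures.

def lifetime_energy_cost(power_kw, pue=1.4, price_per_kwh=0.12, years=5):
    """Cost of running one system continuously, with cooling overhead folded in via PUE."""
    hours = years * 8_760
    return power_kw * pue * hours * price_per_kwh

baseline = lifetime_energy_cost(power_kw=10.0)  # hypothetical legacy system
efficient = lifetime_energy_cost(power_kw=6.0)  # hypothetical more efficient replacement

print(f"Baseline 5-year energy cost:   ${baseline:,.0f}")
print(f"Efficient 5-year energy cost:  ${efficient:,.0f}")
print(f"Savings per system over 5 yrs: ${baseline - efficient:,.0f}")
```

Scaled across the hundreds or thousands of systems in a facility, that per-system delta is what shows up on the operational expense line.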

Hardware innovation in action

These principles aren't just words on a page—they're driving concrete innovation across Nokia's portfolio. Consider the evolution from our FP4 chipset to the latest FP5 generation. We achieved a 75% reduction in power consumption while simultaneously increasing system throughput by nearly four times. This isn't an either-or proposition between performance and efficiency; it's proof that breakthrough engineering can deliver both.

As a testament to this, NL-ix, one of the leading internet exchanges (IXs) in Europe, reported after deploying one of Nokia’s FP5-equipped routers: “The new Nokia equipment is faster and also greener. It helped us bring down the power consumption per Gbit from 0.9115 Watts to 0.1065 Watts.” [3]
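Taking those per-Gbit figures at face value, the improvement works out to roughly an 88 percent reduction. The quick calculation below shows the arithmetic; the sustained-traffic figure used for the annual-savings line is a hypothetical illustration, not an NL-ix number.

```python
# Arithmetic on the per-Gbit power figures quoted by NL-ix above.
before_w_per_gbit = 0.9115
after_w_per_gbit = 0.1065

reduction_pct = (1 - after_w_per_gbit / before_w_per_gbit) * 100
print(f"Reduction in power per Gbit: {reduction_pct:.0f}%")  # ~88%

# Hypothetical illustration: annual energy saved at an assumed 10 Tbps of sustained traffic.
sustained_gbit_per_s = 10_000
saved_kwh_per_year = (before_w_per_gbit - after_w_per_gbit) * sustained_gbit_per_s * 8_760 / 1_000
print(f"At {sustained_gbit_per_s:,} Gbit/s sustained: ~{saved_kwh_per_year:,.0f} kWh saved per year")
```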

Our commitment to efficiency extends to our latest data center switching platforms. Our new family of 7220 IXR-H6 switches supports an impressive 102.4 Tbps of throughput and enables customers to deploy LPO/LRO pluggables, which further lowers overall power consumption by removing DSPs from the optics. As the industry looks at co-packaged optics (CPO) for even further power savings, Nokia will continue to implement these innovations in our portfolio.

Perhaps the most striking example comes from our 7250 IXR series, which sets the industry benchmark for power efficiency. The 7250 IXR-6e, 10e and 18e represent the most power-efficient platforms in their class. The flagship 7250 IXR-18e, winner of Light Reading's Leading Lights 2025 Award, delivers 576 line-rate 800GE ports in a single chassis, more than 460 Tbps of bandwidth, while maintaining a power envelope of 20 kilowatts. This is the same power some competitors require for their 400GE-equivalent systems.
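For context, the efficiency implied by that configuration can be derived directly from the figures above: ports times port speed, set against the stated 20 kW envelope.

```python
# Efficiency implied by the 7250 IXR-18e configuration described above.
ports = 576
port_speed_gbps = 800
chassis_power_w = 20_000  # stated 20 kW power envelope

total_capacity_gbps = ports * port_speed_gbps            # 460,800 Gbps (460.8 Tbps)
watts_per_port = chassis_power_w / ports                 # ~34.7 W per 800GE port
watts_per_gbps = chassis_power_w / total_capacity_gbps   # ~0.043 W per Gbps

print(f"Total capacity:       {total_capacity_gbps / 1_000:.1f} Tbps")
print(f"Power per 800GE port: {watts_per_port:.1f} W")
print(f"Power per Gbps:       {watts_per_gbps:.3f} W")
```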

Looking forward

At Nokia, we're committed to continuing this journey, delivering the highest-quality, most efficient networking platforms to our customers. This translates not only into better economics but also into helping them build a more sustainable future.

Learn more about our sustainability approach and our energy-efficient data center switches.

Data sources

[1] Data Center Energy Consumption: How Much Energy Did/Do/Will They Eat? | Clean Energy Forum

[2] Data centers and AI: How the energy sector can meet power demand | McKinsey

[3] The 800G story | NL-ix: https://www.nl-ix.net/about/about-our-network/the-800g-story/

Igor Giangrossi

About Igor Giangrossi

Igor Giangrossi leads Hardware Product Management for Nokia's IP Division, where he drives product strategy across all customer segments. With over 25 years of experience, he brings a comprehensive perspective shaped by roles spanning the complete product lifecycle—from network user and technical support to pre-sales and product development. Igor combines technical depth with business leadership, holding a degree in electrical engineering from Instituto Mauá de Tecnologia and completing Harvard Business School's Program for Leadership Development.

Connect with Igor on LinkedIn
