How AI is disrupting the data center industry

AI as a game-changer

Data Center Dynamics recently hosted a conversation featuring David Power, CTO at Nscale, Chadie Ghadie, global lead of advanced infrastructure solutions at Lenovo, and me, head of the data center networking business at Nokia. As with most conversations these days, we started by discussing artificial intelligence (AI) and its disruptive effects across the data center industry.

New AI challenges bring new cloud players

Super-large-scale cloud (and now AI) providers like AWS, Azure and GCP dominate the cloud landscape. Other infrastructure providers have at times tried to build their own versions of the cloud to address regionalized or industry-specific needs. But entry into this capital-intensive, high-skill, fast-moving space has been mostly impossible for anyone outside the privileged few. So as clouds shift from general compute to AI, the established leaders will continue their dominance, right?

Perhaps. These large-scale cloud and AI providers are driving much of the foundation model work atop which the rest of the industry is building, so they will continue to play a pivotal role. But the fast rise of AI clouds has exposed unmet challenges, and a new crop of players has emerged to meet the moment.

Enter neoclouds

David Power spoke about why the market has ushered in a new class of cloud providers, known as neoclouds. Interestingly, neoclouds aren’t seeking to replace their larger-scale counterparts, but rather to complement them. Nscale has been effective at partnering with the largest cloud providers, bringing a set of skills tailored to the AI explosion. Without these skills, the cloud and AI providers themselves would struggle.

Anyone following the data center market understands the need for low-cost power and sufficient connectivity. Nscale has tapped into cost-effective pools of power in the Nordics and the southern US. But its role is more than acting as a conduit between the largest cloud and AI providers and the physical world.

It takes a village

While it might seem that it is an Nvidia world and we are all just living in it, the infrastructure stack required to make AI useful is much more than graphics processing units (GPUs), auxiliary processing units (XPUs), network interface cards (NICs) and storage (Figure 1).

Figure 1: Nscale and Nokia partnered to build a full AI stack

In our conversation, David spoke to some of the challenges of navigating an environment that requires full-stack integration across a diverse set of ecosystem players. There is, of course, the need to integrate everything under an orchestration layer, which brings complex logistical challenges. Procurement, logistics and access to supply must all converge on timelines dictated by customers who are effectively paying to abstract away much of that complexity.

Of course, this isn't the neoclouds' problem to solve alone. Chadie Ghadie spoke about how Lenovo approaches reference architectures, effectively providing blueprints that its neocloud partners can draw into their full-stack designs. By doing some of that cross-component testing and producing validated designs, Lenovo and Nokia reduce the effort and friction that would otherwise create drag for customers who measure outcomes by the moment.

Optionality is the currency of the cloud

We talked a bit about strategy as well. Nscale has a compelling strategy that underpins its full-stack ambitions. By integrating all the layers, Nscale is able to optimize up and down the infrastructure. This is important because it allows the company to deliver a service that scales both in performance and in cost.

Lost in a lot of the AI conversation at the market level is a simple fact: Nobody's primary problem is "I don't have enough AI." While AI is exciting, it is still just a tool that has to be wielded effectively. Capability and cost contribute to commercial success, and Nscale's approach allows it to build infrastructure that can durably anchor companies building around AI for years to come.

One of the strategic cornerstones for any company endeavoring to do the same must be optionality.

In a market where "new" is measured in days and weeks, it is impossible to plan precisely for the future. So much money is being poured into this space that innovation is moving faster than we have ever seen before. New ideas will emerge and fail. Preserving the technological and financial freedom to leverage those ideas is critical to building something that is future-proof.

We briefly discussed the role of standards and the ecosystem in providing an open playground where technology providers can come together. As a networking company, for instance, Nokia is keenly aware that Ethernet offers advantages over proprietary alternatives precisely because it allows many solutions to interoperate. And remember, competition remains the biggest lever to control costs. But this places a burden on the various collaboration bodies to move at the pace of adoption, not the pace of consensus.

What comes next?

In an environment that is dominated by change, the future promises more change. We talked about whether we think AI is a bubble or something more durable. Spoiler alert: We think the technology is here to stay even if there is eventually an industry reckoning on the good and bad ideas. 

We also talked briefly about how inferencing might change use cases, the role of massively distributed edge data centers, how advances in the WAN might impact how clusters are designed, and more.

The thing that struck me in this conversation is that it felt practical and grounded. The industry is moving beyond convincing itself that AI is worth pursuing. We are handing the reins over from the evangelists to the practitioners. We've already gone fast. It's now time to see if we can also go far.

About Michael Bushong

Michael Bushong is Vice President of Data Center at Nokia, where he has remit over strategy and go-to-market. Mike joined Nokia having served separately as the General Manager for Juniper Networks’ Data Center and Software businesses, during which time he led efforts in data center operations, multicloud, and automation, including leading the acquisition of Apstra. Mike has also led Brocade’s Data Center and Software businesses, having driven product and strategy for data center switching, automation, SDN, and NFV. Mike was also a pioneer in the software-defined networking space, responsible for the first carrier-grade implementation of OpenFlow, and was part of the executive team at high-flying SDN player Plexxi (later acquired by HPE).
