Can AI for networks be sustainable?


The amount of energy consumed by AI is a growing concern. Dr. Alexandra Sasha Luccioni, a research scientist at Hugging Face and a prominent voice on the environmental footprint of AI, has led efforts to create tools like the AI Energy Score Leaderboard to measure and compare energy consumption of different models. She has stated that "Measuring AI's environmental impact isn't just a responsibility—it's the compass that ensures innovation that guides us toward a more sustainable future."

To quantify the scale of the problem: in Q2 2025, ChatGPT 4.0 received roughly 2.5 billion queries every day, at an energy cost of at least 0.34 Wh per query, according to CEO Sam Altman. That works out to at least ~310 GWh of electricity per year, equivalent to the annual usage of nearly 30,000 average American households. More recent estimates put ChatGPT 5.0 at 18 Wh per query.
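As a back-of-the-envelope check of this arithmetic (assuming roughly 10,500 kWh per year for an average U.S. household; all inputs are rounded public estimates, not measurements):

```python
# Back-of-the-envelope arithmetic for the ChatGPT energy figures above.
# All inputs are rounded public estimates, not measurements.

QUERIES_PER_DAY = 2.5e9          # ~2.5 billion queries/day (Q2 2025)
WH_PER_QUERY = 0.34              # lower-bound energy per query (per Altman)
HOUSEHOLD_KWH_PER_YEAR = 10_500  # assumed average U.S. household usage

daily_wh = QUERIES_PER_DAY * WH_PER_QUERY    # Wh consumed per day
annual_gwh = daily_wh * 365 / 1e9            # GWh consumed per year
household_years = (annual_gwh * 1e6) / HOUSEHOLD_KWH_PER_YEAR

print(f"~{annual_gwh:.0f} GWh/year, "
      f"about {household_years:,.0f} household-years of electricity")
```

Because the 0.34 Wh figure is a lower bound, the real total is likely higher; the point is the order of magnitude, not the precise value.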

This year, Gartner forecast that electricity demand from data centers will double by 2030. Given these kinds of numbers, it doesn’t seem overly cautious to question the sustainability of our current AI trajectory. Beyond electricity demand and the resources needed to produce it, there is also water for cooling, rare-earth shortages and the opportunity costs: even if all the energy required were provided by renewables, we would have to ask whether those generation sources wouldn’t be more usefully employed powering transportation, heating homes and generally helping us to electrify our societies.

AI for networks

Along with energy consumption, AI is also driving a rapid expansion in network capacity, both inside and outside of the data center. Hyper-connectivity is one of the critical conditions for supporting AI and expanding its impact in real-world applications. From model training to the inferencing of AI agents, networks will be critical for connecting cloud, edge and endpoint devices (the ‘cloud continuum’) to provide end users (human and machine) with the AI capabilities they need, wherever they are.

Somewhat ironically, in order to support the AI-cloud continuum, networks will also have to embrace AI, becoming autonomous in many parts of their operations. 3GPP sees AI-nativeness as foundational to future networks, which is reflected in current discussions around the design of 6G.

Designing sustainability

The sustainability issues associated with AI will thus also affect network operations. Network operators are already conscious of the energy costs of running servers, along with cooling and space requirements. Embracing AI can make network operations more energy-efficient; this is one of its key use cases in 6G. But, as Jevons’ paradox predicts, these efficiency gains may simply be absorbed by scaling up AI training and inferencing, with total energy use only growing as a result.

So the question begging to be answered is: can we design AI for networks to use less energy? As it turns out, yes, there are several things we can do. These strategies include using smaller, purpose-built models, training models more efficiently, reducing the training set to only relevant data, and employing sparse, event-driven computation that mimics the functioning of the human brain. Several brain-inspired AI architectures are, in fact, being researched in an attempt to reduce energy use: spiking neural networks (SNNs), liquid neural networks (LNNs) and tiny hierarchical reasoning models (HRMs) are some of the better known.
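To make the sparse, event-driven idea concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic building block of spiking neural networks. This is a toy illustration rather than a production SNN, and all parameter values are assumptions chosen for readability; the key point is that downstream computation is triggered only when a spike occurs, so mostly-silent inputs cost almost nothing.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a toy illustration of
# sparse, event-driven computation. Parameter values are illustrative.

def lif_run(input_current, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron; return the time steps at which it spikes."""
    v = 0.0                      # membrane potential
    spikes = []
    for t, i_t in enumerate(input_current):
        v = leak * v + i_t       # potential leaks, then integrates the input
        if v >= threshold:       # fire only when the threshold is crossed
            spikes.append(t)     # downstream work happens only on spikes
            v = 0.0              # reset after firing
    return spikes

# Sparse input: mostly zeros, so almost no events (and thus little
# downstream computation) are generated.
inp = [0.0] * 50
inp[5] = inp[6] = 0.8            # a brief input burst
spike_times = lif_run(inp)
```

In a dense neural network, every input would trigger a full forward pass; here, 50 time steps of input produce a single event to process.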

Hardware architectures are also being explored to improve power performance. IBM has been researching analog chips that have shown a 14x improvement in energy efficiency. Researchers at Tsinghua University have reported that their analog photonic chips, called ACCELs, achieve energy efficiencies millions of times higher than today’s digital GPUs.

Energy-efficient AI for networks

We’ve recently published a paper discussing many of the approaches to energy savings currently being explored by the broader industry. In it, we propose a three-step guide to pursuing more sustainable approaches to AI use in networks.

Process

  1. Don’t apply AI indiscriminately to network functions. Determine if there is a real need. This will help you decide on the trade-off between energy required vs. benefit received.
  2. Once you’ve decided AI is worthwhile, do a thorough analysis of which techniques to use and optimize your design using compression techniques, optimized training approaches, specialized hardware (where available), and optimal software architecture approaches.
  3. Measure and monitor energy consumption alongside other performance metrics (we discuss several benchmarks that can be used). The results should give you the feedback needed to iterate on your technical choices and achieve further energy optimization.
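The three steps above amount to a feedback loop: measure energy per query alongside a quality metric, then choose (and periodically re-choose) the most efficient option that still meets the real need. The sketch below illustrates that loop; the model names, energy figures and accuracy bar are purely illustrative assumptions, not benchmark results.

```python
# Hedged sketch of the measure-and-iterate step: track energy per query
# alongside a quality metric, then pick the most efficient option that
# still meets the quality bar. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class RunMetrics:
    model: str
    energy_wh: float   # measured energy for the whole evaluation run
    queries: int       # number of inferences in the run
    accuracy: float    # task-specific quality metric (0..1)

    @property
    def wh_per_query(self) -> float:
        return self.energy_wh / self.queries

def pick_most_efficient(runs, min_accuracy):
    """Among runs meeting the quality bar, pick the lowest Wh per query."""
    qualified = [r for r in runs if r.accuracy >= min_accuracy]
    return min(qualified, key=lambda r: r.wh_per_query) if qualified else None

runs = [
    RunMetrics("large-llm", energy_wh=120.0, queries=1000, accuracy=0.95),
    RunMetrics("small-slm", energy_wh=9.0, queries=1000, accuracy=0.92),
]
best = pick_most_efficient(runs, min_accuracy=0.90)
```

Here the smaller model meets the (assumed) 0.90 accuracy bar at a fraction of the energy per query, so it would be selected; if the bar were raised above 0.92, the larger model would win instead, making the energy-vs.-benefit trade-off explicit.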


Looking ahead

Our vision for AI systems in networks is that they be energy-efficient while at the same time capable of scalable continual learning.

If you are interested in reading more about our research, start with our recently published paper, “Advancing AI: Sustainable AI for networks.”


About Anne Lee

Anne Y. Lee is a Bell Labs Fellow currently working as a CTO Partner in the Bell Labs CTO organization on technology strategy and architecture, supporting the CTO and CEO by analyzing the evolution of key technologies (e.g., artificial intelligence) and the overall network architecture to identify new directions for the industry, along with their impact and requirements on the company's portfolio. Prior to this, Anne was the CTO of IMS Innovations, where she initiated and drove next-generation IP communications solutions such as WebRTC and worked on creating the vision and initiating the work for future IP communications. She has also previously worked in both the Wireless CTO and Wireline CTO organizations. Anne has over 30 years of experience at Nokia (AT&T, Lucent Technologies, Alcatel-Lucent and Nokia).

She was the original IMS technical team leader for Lucent Technologies, working closely with the standards team from the beginning (circa 1999) to pioneer the multi-access, globally interoperable IP communications system deployed worldwide today. 2019 marked the 20th anniversary of IMS, with over two billion devices deployed utilizing VoLTE, VoWiFi, fixed VoIP and RCS services. Adoption of IMS continues around the globe into 5G and beyond.

Anne is a technologist at heart and an electrical engineer and computer scientist by training.  She became a Bell Labs Fellow in 2005. 

For more information, check out Anne’s LinkedIn profile.
