The rise of the telco LLM
There are so many ways in which LLMs (large language models) are expected to change the world, and the list of use cases expands every day. In this blog, I’m going to argue for another: LLMs will make it easier to automate and monetize network operations.
Simplifying the complex
Reducing complexity is the clarion call and cherished dream of the AI revolution. We live in a technologically complex world, and we desperately need AI to better manage it. It’s no different with networks. As they’ve evolved, they have become both more capable and more byzantine. Networks are increasingly heterogeneous, and the number of configuration parameters, key performance indicators, logs, alarms and alerts has exploded.
How can large language models help address this? Consider applying the innate conversational nature of LLMs to networks. What if, in the future, engineers or technicians could manage a network simply by talking to it? Or texting with it? If such an idea sounds far-fetched, recall that we have spent the last fifty years or so transforming network interfaces from hardware to software. DIP switches are largely a thing of the past. Software is language. To the extent that networks are now software-defined, this puts them right in the wheelhouse of an LLM.
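To make the idea concrete, here is a minimal sketch, in Python, of what such a conversational loop could look like. Every name in it (the call_llm stub, the toy inventory, the command schema) is an illustrative assumption rather than a real product interface; the point is simply that a software-defined network can be driven through a translate, execute, summarize loop.

```python
# Minimal sketch of a "talk to your network" loop. All names here
# (call_llm, NETWORK_INVENTORY, the command schema) are illustrative
# placeholders, not a real vendor API.
import json

# Toy stand-in for a software-defined network's inventory/telemetry store.
NETWORK_INVENTORY = {
    "core-router-1": {"status": "up", "cpu_util": 41, "alarms": []},
    "edge-router-7": {"status": "degraded", "cpu_util": 93,
                      "alarms": ["LINK_FLAP on port 3"]},
}

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API. Here it fakes the
    translation step so the sketch runs without external services."""
    return json.dumps({"action": "get_status", "target": "edge-router-7"})

def handle_question(question: str) -> str:
    # 1. Ask the LLM to translate free-form language into a structured command.
    prompt = (
        "Translate the engineer's request into JSON with keys "
        f"'action' and 'target'. Request: {question}"
    )
    command = json.loads(call_llm(prompt))
    # 2. Execute the structured command against the (software) network.
    device = NETWORK_INVENTORY.get(command["target"], {})
    # 3. Summarize the result for the engineer (a second LLM call in practice).
    return f"{command['target']}: {device.get('status')}, alarms={device.get('alarms')}"

print(handle_question("Is anything wrong with edge router 7?"))
```

In a real deployment, the execution and summarization steps would go through the operator's OSS APIs and a production LLM endpoint, with guardrails on which actions the model is allowed to trigger.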
More than chatbots
One of the first use cases for LLMs was translating between languages. Using them to generate videos of your dog walking down the runway in Milan probably wasn't on anyone's mind at the beginning. An LLM trained on network-speak should, therefore, have little trouble translating what the network is doing or deciphering where it is having problems. Properly trained and architected, an LLM-based AI agent should be able to process and interpret network chatter, tickets and alarms. Forensically, it can be used to figure out root causes; in real time, it can spot anomalous behavior and assign tickets predictively to forestall actual outages.
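As a rough illustration of that triage step, the sketch below feeds a burst of alarms to an LLM and asks for a structured verdict. The alarm format, the triage helper and the canned response are assumptions made purely for illustration.

```python
# Hedged sketch of LLM-assisted alarm triage. The alarm format, the
# triage() helper and the LLM call are assumptions for illustration only.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM endpoint; returns a canned triage verdict."""
    return json.dumps({
        "probable_root_cause": "fiber degradation on ring 4",
        "anomaly": True,
        "recommended_team": "transport",
        "open_preventive_ticket": True,
    })

def triage(alarms: list[str]) -> dict:
    prompt = (
        "You are a network operations assistant. Given these alarms, "
        "identify the probable root cause, flag anomalous behavior, and "
        "recommend a team and whether to open a preventive ticket. "
        "Answer as JSON.\n" + "\n".join(alarms)
    )
    return json.loads(call_llm(prompt))

recent_alarms = [
    "2025-05-01T10:02Z OPTICAL_POWER_LOW node-12 ring-4",
    "2025-05-01T10:04Z PRE_FEC_BER_HIGH node-14 ring-4",
    "2025-05-01T10:05Z LINK_FLAP node-12 port-2",
]
verdict = triage(recent_alarms)
if verdict["open_preventive_ticket"]:
    print(f"Assigning predictive ticket to {verdict['recommended_team']}: "
          f"{verdict['probable_root_cause']}")
```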
The critical KPI for any network engineer is network uptime, but increasingly the job is also about meeting performance objectives. Network operators provide services at levels contractually agreed to and are financially penalized when they don't. A network operations LLM should be able to read the contractual terms of a customer's SLA, translate them into intents and communicate those intents to the network directly. It should also be able to interpret the network telemetry to determine whether the service terms are being met and to initiate actions to address issues.
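Here is one way to picture that SLA-to-intent pipeline, again as a hedged sketch: the SLA wording, the intent schema and the compliance check are invented for the example, and the contract-reading step that an LLM would perform is stubbed out so the snippet stays self-contained.

```python
# Sketch of the SLA-to-intent idea under stated assumptions: the SLA text,
# the intent schema and check_compliance() are illustrative, not a standard.

SLA_TEXT = "Customer Acme: latency <= 20 ms, availability >= 99.95% per month."

def sla_to_intent(sla_text: str) -> dict:
    """In practice an LLM would read the contract; here the result of that
    translation step is hard-coded so the sketch stays self-contained."""
    return {"customer": "Acme", "max_latency_ms": 20.0, "min_availability": 99.95}

def check_compliance(intent: dict, telemetry: dict) -> list[str]:
    """Compare live telemetry with the intent and return remediation actions."""
    actions = []
    if telemetry["latency_ms"] > intent["max_latency_ms"]:
        actions.append("reroute traffic onto a lower-latency path")
    if telemetry["availability"] < intent["min_availability"]:
        actions.append("escalate: availability target at risk")
    return actions

telemetry = {"latency_ms": 27.3, "availability": 99.97}
for action in check_compliance(sla_to_intent(SLA_TEXT), telemetry):
    print(f"[Acme] {action}")
```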
Telecom LLMs are coming
This is not simply a vision. It's happening today. In 2024, Deutsche Telekom, e& Group, Singtel and SoftBank formed the Global Telco AI Alliance (GTAA) joint venture to create telco-specific LLM solutions. They want to use LLMs to identify and solve telco-specific business issues in local languages, using industry-specific terms and their own internal business jargon, none of which can be done quickly or easily with generic LLMs.
They also have a vision for personalized services that will drive a third-party application ecosystem. They want to leverage LLMs not just to lower OPEX but to create new business models and new sources of revenue.
Recognizing this trend, the GSMA has already released open telco benchmarks to test telco LLMs during their development. These benchmarks are designed to catch problems such as the following (a toy evaluation sketch appears after the list):
- Misinterpretation of telecom standards and policies
- Errors in network optimization and automation
- Ineffective fault detection and incident resolution
- Challenges in implementing AI-powered customer experience and service management
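Purely to illustrate the shape of such an evaluation, here is a toy harness that scores a model per category. The questions, categories and grading logic are hypothetical and far simpler than the real GSMA benchmarks.

```python
# Hypothetical mini-harness in the spirit of such benchmarks; the questions,
# categories and grade() logic are made up for illustration.
from collections import defaultdict

BENCHMARK = [
    {"category": "standards", "question": "Which 3GPP release introduced 5G SA?",
     "reference": "Release 15"},
    {"category": "fault_detection",
     "question": "An LOS alarm on an optical port most likely indicates?",
     "reference": "loss of signal on the fiber"},
]

def model_answer(question: str) -> str:
    """Placeholder for the telco LLM under test."""
    return "Release 15" if "3GPP" in question else "a software bug"

def grade(answer: str, reference: str) -> bool:
    # Naive substring match; real benchmarks use far richer scoring.
    return reference.lower() in answer.lower()

scores = defaultdict(list)
for item in BENCHMARK:
    scores[item["category"]].append(grade(model_answer(item["question"]), item["reference"]))

for category, results in scores.items():
    print(f"{category}: {sum(results)}/{len(results)} correct")
```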
SK Telecom has announced a joint effort with Anthropic to “best meet the needs of telcos.” Other telcos are exploring similar paths. The floodgates are opening.
Talk to your network
Here are two of the telco-LLM use cases we are researching at Nokia Bell Labs. The first, Talk to Your Network, uses natural language capabilities to make interactive, automatic health checks and a network digital assistant possible.
Nokia Digital Assistant “Talk to Your Network”

The second uses an LLM to augment troubleshooting, including identifying anomalies and suggesting corrective actions for resolution (to learn more, read our white paper “Advancing AI: Networks and LLMs”).
LLM agents will run the show
As these example use cases demonstrate, we are in the early days of telco LLMs. The future points beyond them toward some fascinating possibilities, where LLM-based agents begin to run the network in ways that we've never seriously considered before.
In the future, as AI agents are increasingly employed in enterprise applications, smart cities and utility infrastructure, specific agents may talk directly to the network itself through network APIs or network MCP servers, arranging for near-real-time network services such as QoS, positioning, sensing and edge services. The network operator will have AI agents that service these client agents and negotiate and agree to terms based on the current state of the network. They will not only ensure that the network delivers the service but also explore least-cost alternatives, find and book edge services, or explore other revenue opportunities.
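A hedged sketch of that negotiation loop might look like the following. The class names, pricing logic and API shape are hypothetical; in practice the exchange would run over standardized network APIs or an MCP server rather than in-process calls.

```python
# Sketch of agent-to-agent service negotiation. The API shape, pricing logic
# and class names are hypothetical stand-ins for real network API exposure.
from dataclasses import dataclass

@dataclass
class QoSOffer:
    bandwidth_mbps: int
    latency_ms: float
    price_per_hour: float

class OperatorAgent:
    """Operator-side AI agent quoting terms from current network state."""
    def __init__(self, cell_load: float):
        self.cell_load = cell_load  # 0.0 (idle) .. 1.0 (congested)

    def quote(self, bandwidth_mbps: int, max_latency_ms: float) -> QoSOffer | None:
        if self.cell_load > 0.9:
            return None  # no capacity to sell right now
        # Price rises with current load: a stand-in for real yield management.
        price = bandwidth_mbps * 0.02 * (1 + self.cell_load)
        return QoSOffer(bandwidth_mbps, max_latency_ms, round(price, 2))

class ClientAgent:
    """A smart-city or enterprise agent that books network services."""
    def __init__(self, budget_per_hour: float):
        self.budget = budget_per_hour

    def negotiate(self, operator: OperatorAgent) -> str:
        offer = operator.quote(bandwidth_mbps=50, max_latency_ms=10.0)
        if offer and offer.price_per_hour <= self.budget:
            return f"Accepted: {offer}"
        return "Declined or no capacity; trying an alternative slot or provider"

print(ClientAgent(budget_per_hour=2.0).negotiate(OperatorAgent(cell_load=0.4)))
```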
This use of LLMs takes the telco beyond simply operating the network efficiently and moves it up the AI stack. Customer experience AI agents will be able to exploit emerging opportunities to monetize the value of networks as more than just connectivity, essentially making the differentiated capabilities of the network consumable by other AI agents and creating new services (and revenue) for telecommunication providers.