AI and RAN

What is AI and RAN?
AI and RAN leverages computing synergies between AI workloads and RAN workloads to achieve higher resource utilization and reduce overall TCO. It also facilitates the expansion of Cloud RAN infrastructure into multi-purpose, AI-accelerated computing.
AI and RAN vision
Device limitations, latency requirements and data sovereignty considerations will together drive some AI inference to the network edge. These AI workloads might share AI computing resources with workloads from AI for RAN and AI on RAN, resulting in higher average utilization of the processing units and therefore an accelerated return on investment (ROI) and potentially faster edge AI renewal cycles.
Agentic and robotic AI inference will also need capacity at times when human demand for network and inference capacity is low. This, together with the potential offloading of AI training workloads, will further increase average AI processing utilization.
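To make the utilization argument concrete, the sketch below blends a hypothetical daily RAN load profile with best-effort AI backfill on a shared edge accelerator. All figures, including the 10% headroom margin, are assumptions chosen for illustration, not measured operator data.

```python
# Illustrative only: hypothetical hourly load on a shared edge accelerator.
# Fraction of capacity used by RAN processing over a 24-hour day
# (quiet night, busy daytime, moderate evening):
ran_load = [0.15] * 6 + [0.55] * 12 + [0.30] * 6

# Best-effort AI work (agentic/robotic inference, offloaded training)
# backfills the headroom the RAN does not use, minus a safety margin
# kept free for RAN traffic bursts.
headroom_margin = 0.10
ai_load = [max(0.0, 1.0 - headroom_margin - r) for r in ran_load]

avg_ran_only = sum(ran_load) / len(ran_load)
avg_shared = sum(r + a for r, a in zip(ran_load, ai_load)) / len(ran_load)

print(f"Average utilization, RAN only: {avg_ran_only:.0%}")   # roughly 39%
print(f"Average utilization, RAN + AI: {avg_shared:.0%}")     # roughly 90%
```

Keeping the same hardware busy around the clock is what shortens the payback time on the accelerator investment.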
For mobile network operators, AI and RAN presents an opportunity to monetize spare AI capacity under a Platform-as-a-Service (PaaS) or GPU-as-a-Service (GPUaaS) model. It also allows parts of the AI-RAN computation to be moved from the operator's network to other providers' edge AI capacity.
The AI computing power of smartphones, wearables, IoT devices and many robotic systems will remain limited by battery, size and affordability. Consequently, many AI workloads will need to run in the cloud or at the network edge.
Additionally, immersive and real-time robotic AI applications will have stringent end-to-end latency requirements. To meet them, network latency and computing latency need to be balanced: the shorter the network latency, the more processing time remains for the AI workload, and vice versa.
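A minimal sketch of this trade-off, assuming an illustrative 20 ms end-to-end budget and a fixed device-side overhead (both numbers are assumptions, not values from any specification):

```python
# How much time is left for AI processing once network latency is paid for?
def compute_budget_ms(e2e_budget_ms: float, one_way_network_ms: float,
                      device_overhead_ms: float = 2.0) -> float:
    """Subtract round-trip network latency and device-side overhead
    from the end-to-end budget to get the time available for inference."""
    return e2e_budget_ms - 2 * one_way_network_ms - device_overhead_ms

# Assumed one-way latencies for far edge, metro edge and regional cloud:
for one_way_ms in (1.0, 4.0, 8.0):
    left = compute_budget_ms(20.0, one_way_ms)
    print(f"one-way network latency {one_way_ms:4.1f} ms -> "
          f"{left:4.1f} ms left for AI processing")
```

Under these assumed numbers, placing the AI workload at the far edge, close to the base station, leaves the largest share of the budget for inference itself.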
AI and RAN today
Now is the time to lay the engineering and business foundation that will enable future value creation once drivers such as immersive consumer services and real-time robotic use cases push AI computing to the far network edge, where the base stations are.
Nokia’s anyRAN approach for Cloud RAN and purpose-built RAN enables the extension of the processor pool from conventional CPU cores to AI-optimized processing units such as GPUs.
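The pooling idea can be illustrated with a toy admission-control sketch in which RAN workloads always take priority and AI jobs are backfilled onto the remaining accelerator capacity. The class, workload names and capacity figures below are hypothetical and do not describe anyRAN's actual interfaces or scheduling policy.

```python
# Toy admission control for a shared accelerator pool: RAN workloads are
# served first, AI jobs backfill whatever capacity is left. Names and
# numbers are hypothetical, not anyRAN APIs.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    gpu_share: float   # fraction of the pool's accelerator capacity
    is_ran: bool       # RAN workloads have strict priority

class SharedAcceleratorPool:
    def __init__(self, total_gpus: float):
        self.total = total_gpus
        self.used_ran = 0.0
        self.used_ai = 0.0

    def admit(self, w: Workload) -> bool:
        if w.is_ran:
            # RAN is latency-critical: admit whenever the pool itself has
            # room, even if best-effort AI jobs would have to be preempted.
            if self.used_ran + w.gpu_share <= self.total:
                self.used_ran += w.gpu_share
                return True
            return False
        # AI inference/training only uses capacity the RAN does not need.
        if self.used_ran + self.used_ai + w.gpu_share <= self.total:
            self.used_ai += w.gpu_share
            return True
        return False

pool = SharedAcceleratorPool(total_gpus=4.0)
print(pool.admit(Workload("du-baseband", 1.5, is_ran=True)))          # True
print(pool.admit(Workload("edge-llm-inference", 2.0, is_ran=False)))  # True
print(pool.admit(Workload("video-analytics", 1.0, is_ran=False)))     # False: pool is full
```

In a production system the RAN share would be guaranteed and latency-critical while AI jobs run as preemptible, best-effort tenants; the sketch only captures that ordering.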
SoftBank and Nokia jointly demonstrated AI and RAN at Mobile World Congress 2025, proving that RAN workloads and other AI workloads can run together on a common multi-purpose AI computing platform.