When AI becomes physical: The network as a distributed nervous system


In our previous posts, we described two critical shifts. AI is reshaping traffic patterns, making them continuously evolving, increasingly interactive, uplink-intensive, and burst-driven. At the same time, networks must evolve from static automation to governed autonomy, capable of closed-loop sensing, decision, and execution to keep pace with this new reality.

A third shift is now underway. AI applications no longer simply consume connectivity. They increasingly depend on the network to execute inference, coordinate distributed intelligence, and generate environmental awareness. In the AI era, the network does not just carry intelligence. It becomes an integral part of it.

This transformation is visible across three emerging classes of applications.

  1. AI Over the Network: Generative and Contextual Intelligence

Generative AI may appear to resemble traditional network usage, where prompts are sent and responses returned. Yet the operational characteristics are fundamentally different. AI-native traffic is highly bidirectional, latency-sensitive, and context-aware. Inference increasingly occurs outside centralized hyperscale data centers, placed at the edge to reduce round-trip delay, preserve privacy, and optimize resource utilization. AI agents communicate with other AI agents, forming dynamic service graphs that evolve in real time.

Immersive extended reality systems further illustrate this shift. Advanced XR devices rely on real-time scene understanding, spatial mapping, and AI-enhanced rendering, often combining on-device processing with edge and cloud inference. Maintaining motion-to-photon consistency and context-aware interaction introduces stringent latency and synchronization requirements that extend beyond traditional media streaming models.

These workloads introduce new requirements:

  • Distributed model placement and lifecycle management: Orchestrating where models execute based on demand, latency budgets, and resource availability.
  • Intent-driven connectivity instantiation: Dynamically establishing secure, policy-compliant paths and service endpoints that adapt in real time to model placement and application intent.
  • Real-time congestion awareness: Preserving AI response quality and preventing stutter in agentic workflows.
  • Deterministic performance: Providing a stable foundation for interactive and real-time experiences.
  • Network-exposed context: Leveraging location, device capability, and radio state to sharpen inference accuracy.

While concurrent execution of RAN and third-party AI workloads, often referred to as AI-and-RAN, is technically achievable on many hardware substrates, the reality of modern AI innovation is shaped by software gravity. Mature software stacks, tools and ecosystems have coalesced around certain accelerated platforms, and replicating this breadth has proven difficult even for well-resourced silicon vendors. For a network to function as an open platform for innovation, one that attracts developers, partners and applications, it must align with these established foundations. Without that alignment, a custom accelerator risks becoming a proprietary island rather than a generative ecosystem.

This gravitational pull extends beyond the radio domain. As AI permeates the core, edge, sensing and orchestration layers, the entire network stack must align with the dominant AI development ecosystems that are shaping global innovation. Otherwise, telecom infrastructure risks fragmenting into isolated AI domains disconnected from the broader AI economy. Alignment with widely adopted accelerated compute platforms ensures that the network evolves as part of that ecosystem, rather than alongside it.

This architectural convergence of connectivity and compute is precisely what we are demonstrating at Mobile World Congress this year. Our AI-on-RAN use cases show inference executing directly on network-integrated GPU infrastructure, proving that the network can dynamically adapt to AI workloads while supporting distributed execution models. This evolutionary shift establishes the technical foundation for deeper transformation.

  2. AI Within the Network: Physical Intelligence and Real-Time Control

The requirements escalate when AI systems begin to move machines rather than generate content. Physical AI systems, including autonomous vehicles, robotics, industrial automation, and coordinated drones, operate in tight perception, decision, and action loops. Data flows are continuous and introduce much higher uplink demands than traditional human-centric traffic, with rich sensor data often dominating control-oriented downlink flows. Latency variation directly impacts control stability, and synchronization across devices becomes essential. Packet loss is not merely a quality issue. It can become a safety issue.

To illustrate the shift from AI "Over" to AI "Within" the network, consider a swarm of mobile robots tasked with transporting a fragile aircraft component across a factory. If each robot independently uses the network to reach a vision model at the edge, AI is moving over the network. But when those robots must move in perfect, millisecond-level synchronization to avoid damaging the load, the network becomes the control loop itself. In this scenario, we expect more than just low latency from the infrastructure. We require the network to provide a shared temporal context and synchronous air-interface scheduling that allows the swarm to act as a single, coordinated organism. The network functions as the distributed backplane for these "reflexes," ensuring that an adjustment in one robot is instantly and deterministically compensated for by the others.

Achieving this level of deterministic coordination requires tightly integrated slicing, transport synchronization and cross-domain orchestration operating under unified intent control. This is not a property of radio performance alone, but of the end-to-end system architecture.

The network therefore participates directly in the control loop. It must provide:

  • Ultra-low latency with bounded jitter: Ensuring consistent timing for precise physical movement.
  • Deterministic scheduling: Using intent-based slicing to guarantee performance for critical workloads.
  • Cross-device synchronization: Coordinating multiple agents to act as a cohesive system.
  • Rapid local inference: Enabling reflex-like responses required in safety-critical environments.

We are bringing this nervous system functionality to life at MWC through physical AI scenarios supported by agentic slicing and autonomous operations. Intent-based slicing dynamically provisions differentiated resources, while Autonomous Network Fabric capabilities coordinate multi-domain responses in real time, supported by deep observability and AI-driven orchestration. This is not a bandwidth upgrade. It is a structural expansion of the network’s role in cyber-physical systems.

  3. AI From the Network: Sensing-Native Systems

An even more significant transformation occurs when the network becomes a source of intelligence. Integrated Sensing and Communications (ISAC) introduces the capability for radio infrastructure to sense aspects of the physical environment while simultaneously communicating. Combined with AI processing, this enables the network to produce environmental awareness in real time.

This transition introduces a new set of sensory requirements:

  • High-precision localization and tracking: Enabling centimeter-level accuracy for identifying the position and velocity of objects.
  • Sensing-aware network scheduling: Dynamically prioritizing radio resources to maintain sensing resolution without degrading communication.
  • Real-time environment mapping: Constructing high-resolution digital replicas of physical spaces using simultaneous localization and mapping (SLAM).
  • Integrated signal processing: Utilizing shared, accelerated compute to perform complex tasks like clutter removal and waveform analysis directly in the RAN.

We are demonstrating this architectural pivot at MWC through multiple practical scenarios. By leveraging both mid-band and higher-frequency spectrum for distinct sensing use cases, we demonstrate how communication waveforms can also be used for environmental reflection analysis, with sensing data processed in real time on shared GPU-based platforms integrated into the AI-RAN domain. These demonstrations, spanning drone detection, movement sensing, parking analytics and environmental monitoring, show that the network is no longer only hosting AI workloads. It is generating the data streams that applications consume.

ISAC redefines the boundary between communication and perception. When sensing, inference and communication share a common accelerated compute foundation, the boundary between them collapses architecturally. Sensing is no longer an adjunct capability layered onto the network; it becomes an intrinsic function of the AI-native infrastructure itself. This represents a new paradigm for infrastructure.

Architectural Convergence

AI-native networks cannot emerge from isolated enhancements at the radio layer alone. They require coordinated transformation across compute, transport, control, orchestration and monetization domains, operating as an integrated system rather than as optimized components. As AI moves over, within, and from the network, the infrastructure must converge around a unified set of capabilities that function as a single system:

  • AI-RAN (Sensing & Compute): The fundamental layer where telecom and AI workloads coexist on shared, accelerated platforms to enable sensing and high-performance radio.
  • Distributed AI Compute (Placement): A tiered execution fabric spanning RAN, edge, core, and cloud to ensure inference happens at the optimal location for latency and privacy.
  • AI-Core (Intelligence): Intent-based core automation that translates complex application demands into specific, real-time network behaviors.
  • Intelligent Transport & Data Center Fabric (Connectivity Substrate): A programmable, high-performance IP, optical, and data center networking foundation that provides deterministic latency, synchronized timing, and deep telemetry to support distributed AI execution and sensing workloads.
  • Agentic Autonomy (Logic): Goal-oriented, context-aware decision making that adapts dynamically to meet intents, moving beyond pre-programmed scripts to achieve "reflex-like" responses.
  • Cross-Domain Orchestration (Coordination): The operational fabric that translates intent into synchronized action across RAN, transport, core, and edge.
  • Governed Autonomy (Safety): The "Glass Box" framework that ensures all autonomous actions remain transparent, bounded, and reversible within carrier-grade limits.
  • Programmable Monetization (Value): A horizontal engine that exposes these performance primitives as billable assets through Network-as-Code and outcome-based contracts.
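To hint at what the intent-translation step in the AI-Core and orchestration layers involves, here is a loose sketch that maps a high-level application intent onto slice parameters. The workload names, fields, and thresholds are hypothetical placeholders, not a Nokia or 3GPP schema.

```python
def translate_intent(intent):
    """Map a declared application intent onto concrete slice parameters.
    Profiles and figures below are illustrative placeholders."""
    profiles = {
        "physical-ai":   {"latency_ms": 5,  "jitter_ms": 1,  "uplink_mbps": 500},
        "generative-ai": {"latency_ms": 50, "jitter_ms": 10, "uplink_mbps": 50},
        "sensing":       {"latency_ms": 20, "jitter_ms": 5,  "uplink_mbps": 200},
    }
    base = dict(profiles[intent["workload"]])
    # Tighten the profile if the application declares a stricter bound.
    if "max_latency_ms" in intent:
        base["latency_ms"] = min(base["latency_ms"], intent["max_latency_ms"])
    return base

cfg = translate_intent({"workload": "physical-ai", "max_latency_ms": 3})
print(cfg["latency_ms"])  # 3
```

The point of the sketch is the direction of translation: applications declare outcomes, and the orchestration and monetization layers turn those outcomes into enforceable, and billable, network behavior.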

These elements collectively form what can be understood as a distributed nervous system. Sensing provides awareness. Edge inference enables rapid response. Orchestration coordinates action across domains. Governance ensures stability and trust.

Our early embodiments of this architecture at MWC, from AI-RAN executing Layer 1 and AI workloads concurrently to L4+ autonomous operations, illustrate a coherent trajectory toward AI-native networks.

Toward AI-Native 6G

6G is not simply the next performance milestone. It is being designed as the first generation where communication, sensing, and AI-native compute are deeply integrated, treating distributed compute and sensing as first-class architectural primitives rather than incremental enhancements to radio throughput or spectrum efficiency. As applications evolve from digital assistants to physical intelligence systems, the network must provide more than throughput. It must provide perception, coordination, and governed autonomy.

Traffic has changed. Autonomy has become essential. Applications now demand networks that can sense, decide, and coordinate in real time. Delivering this transformation requires deep integration across radio, core, IP, optical, edge compute and autonomous operations to unify sensing, distributed AI execution and intent-driven control under a single architectural vision. At Nokia, we are aligning our full portfolio and research roadmap toward this AI-native architecture.

In the AI era, the network becomes foundational infrastructure for intelligence itself. It becomes the distributed nervous system of an increasingly intelligent world. At MWC, we are showing early embodiments of this nervous system across all three dimensions: over, within, and from the network.

This is the third in a series. You can read our previous posts here: Why AI-native traffic demands a new network architecture and The Glass Box Imperative: Governing AI network autonomy.

About Oğuz Sunay

Oğuz is the CTO Fellow for AI at Nokia, with 25+ years of experience at the intersection of AI, edge cloud, and wireless networking. His work bridges research and real-world deployment at scale, from architecting Intel’s data center AI stack to co-founding Ananki (acquired by Intel) and serving as co-PI on the DARPA Pronto program. He is the sole inventor on 40+ awarded patents, co-author of two technical books on 5G and edge systems, and a contributor to 3GPP and O-RAN. 

About Pallavi Mahajan

Pallavi Mahajan, Nokia’s Chief Technology and AI Officer, leads Nokia Bell Labs, Technology and AI Leadership, and Group Security to drive innovation in core technologies, strengthen AI and security capabilities, and create differentiation through open ecosystems and strategic partnerships. With deep expertise in networks, software, and AI, she has scaled multi-billion-dollar portfolios and shaped industry-defining shifts at Intel, HPE, and Juniper Networks. A holder of six patents and a passionate advocate for women in tech and grassroots sports, Pallavi champions collaboration to unlock the next wave of growth.

