Sep 16 2015

Don’t let latency make you dizzy

This blog is by Thomas Kaps at Nokia Networks. Twitter: @nokianetworks

3 ways to reduce latency to milliseconds

Gone are the days of keeping subscribers happy simply with great voice services, back when latency low enough to ensure lag-free voice communication, typically in the range of 100 milliseconds, was merely 'good to have'. The under-20 crowd rarely seems to use phones to talk anymore... but lag certainly wasn't something operators actually measured to benchmark the customer experience of data services. That has now changed drastically with the advent of real-time applications and data communication that demand instant responses from the network to create an enjoyable experience.

In fact, some data services are now even more sensitive to latency than voice. Streaming music or video on the move, unlike a simple download, tolerates very little delay, and apps built around instant collaboration and sharing of content from the cloud are just as latency-sensitive. Then think about the real-time applications that go beyond voice, video and music: the fast-approaching world of connected driving, augmented and virtual reality, online gaming and tactile interaction, all of which substantially raise the bar for latency requirements. Did you know that if end-to-end latency rises above a few milliseconds, virtual reality applications can make you feel spacey and dizzy, spoiling the experience entirely?

In addition, many advanced network concepts and features rely on the availability of very low latency network connections in the range of single-digit milliseconds. For example, fronthaul and backhaul for Centralized or Cloud RAN, as well as the inter-site and inter-cell coordination algorithms that deliver higher capacity and performance, cannot function without such speedy links.

Throughput, proximity, control - 3 ways to shrink latency

Throughput: Throughput, or 'effectively usable bandwidth', is of course key for low latency communication: the fatter the pipe, the faster the back-and-forth transmission. The good news is that throughput increase and latency reduction are now important design criteria as we progress from one technology generation to the next. LTE cuts round-trip time (RTT) by 50% compared to the previous technology generation, and 5G will reduce it further to single-digit milliseconds. Throughput can also be improved with proven techniques such as Carrier Aggregation or Coordinated Multi-Point (CoMP), alongside deploying LTE, Single RAN and Small Cells. Increasing fiber availability will also provide bandwidth galore.
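To see why a fatter pipe speeds up the back-and-forth, a rough back-of-the-envelope model treats total transfer time as one round trip plus the time to serialize the payload onto the link. This is a minimal sketch with illustrative numbers; the function name and figures are assumptions, not from the post:

```python
def transfer_time_s(payload_bytes, throughput_bps, rtt_s):
    """Rough transfer time: one round trip plus time to push the bits."""
    return rtt_s + (payload_bytes * 8) / throughput_bps

# Illustrative: a 10 MB file over a link with 50 ms RTT.
slow = transfer_time_s(10_000_000, 10_000_000, 0.050)   # 10 Mbit/s pipe
fast = transfer_time_s(10_000_000, 100_000_000, 0.050)  # 100 Mbit/s pipe
print(f"10 Mbit/s: {slow:.2f} s, 100 Mbit/s: {fast:.2f} s")
```

The sketch also shows the limit of this lever: once the payload is small (a voice frame, a game input), the RTT term dominates and more bandwidth alone no longer helps, which is where the next two levers come in.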

Proximity: The only absolute constraints on latency are distance and the speed of light. A user located in Europe accessing a server in the US will face a 50 ms round-trip time due simply to the physical distance involved, no matter how speedy the network is. The only way to improve this is to reduce the distance between devices and the content and applications they are accessing. Bringing data storage and processing closer to the user can be achieved through approaches such as Mobile Edge Computing, as demonstrated by Nokia's Liquid Applications solution. Another promising development is Software Defined Networking (SDN). When applied to mobile network functions such as gateways, SDN can bring applications and network resources closer to the end user, or even forward traffic directly to local nodes, bypassing central nodes where appropriate. Introducing flat IP and distributed cloud-based architectures can also help improve latency by reducing the number of hops and elements packets must traverse.
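The speed-of-light floor above can be checked with simple arithmetic: light in optical fiber travels at roughly two-thirds of c, about 200,000 km/s, so distance alone sets a hard lower bound on round-trip time. A minimal sketch, where the route length is an illustrative assumption and real fiber paths run longer than the great-circle distance:

```python
FIBER_KM_PER_S = 200_000  # light in optical fiber travels at ~2/3 of c

def min_rtt_ms(route_km):
    """Lower bound on round-trip time from propagation delay alone."""
    return 2 * route_km / FIBER_KM_PER_S * 1000

# London to New York is roughly 5,600 km great-circle.
print(f"{min_rtt_ms(5_600):.0f} ms")  # ~56 ms before any queuing or processing
```

No amount of bandwidth removes this floor; only moving the content closer, as the paragraph above argues, can shrink it.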

Control: Besides the physical and structural measures for low latency explained above under Proximity and Throughput, software-driven measures that provide 'dynamic network control' can also help improve latency at a session/application level. For example, software applications ensure that heterogeneous resources are applied intelligently on a case-by-case basis in real time, depending on the network situation, application needs and desired QoE. Functionalities such as application-aware RAN and network-aware applications can improve latency for specific applications. Nokia's Dynamic Experience Management (DEM) delivers the required QoE at a session level, including measured and perceived latency, through a fast loop of measurement, analytics, decisions and actions, all in real time. DEM, combined with network-level orchestration across all domains, will pave the way from optimized reaction to automated, dynamic prediction and control of latency.

By complementing this three-pronged approach to reducing latency with Nokia's world-class Network Planning and Optimization, human-perceived latency will become a thing of the past in our networks. In fact, this will give operators the opportunity to differentiate, personalize and even monetize exceptional user experience!

With 5G, we are already working towards another step change to reduce latency below 1 millisecond to ensure that latency will never make you dizzy again!

Read more in our White Paper: Reducing latency to milliseconds, published as part of our Technology Vision 2020.

Please share your thoughts on this topic by replying below – and join the Twitter discussion with @nokianetworks using #NetworksPerform #mobilebroadband #TechVision2020 #innovation #customerexperience.

About Thomas Kaps

Thomas Kaps is responsible for the latency and energy topics of Technology Vision 2020 as well as Idea and Innovation Management at Nokia. He has held many positions in telecommunications including research, systems engineering, development and sales.
