Why distribution is important in NFV

Network functions virtualization (NFV) promises service providers improvements in terms of automation, responsiveness, and operational costs. Virtualization and cloud concepts have revolutionized datacenters in the IT domain, and NFV brings them into telecoms.

The centralized data center model from the IT world doesn’t translate directly to telecommunications networks, however. This is because the services and applications that run on telecoms networks have requirements that are directly affected by the distribution of network resources. Since they can’t be completely centralized, a different model is required to realize NFV’s full potential.


NFV marks the beginning of a new era in telecommunications networking. Virtualizing network functions on top of an industry-standard server infrastructure – typically a private carrier cloud – provides a radically new technology for building networks.

Virtual network functions (VNFs) are often broken down into multiple components that run on different virtual machines, each of which can be placed in the same or different locations. Virtualization means that network functions — and even components of network functions — are no longer tied to specific physical locations. We can now distribute network functions throughout a geographic area:

  • in regional data centers
  • in metro areas
  • in neighborhoods
  • at customer premises
  • on mobile devices

How did we get here?
Classical telecommunications networks are highly distributed. In the United States, for instance, there are roughly 20,000 central office buildings, each of them hosting one or more telephone switches (Figure 1).

Decentralized, hierarchical switching center topologies were historically necessary. This was partly because of the limitations of analog voice switching technology, and partly because of traffic patterns. The population was not as mobile. Most phone calls were local and could be handled locally. Local switching centers managed most of the load, and long-distance capacity, which was scarce and expensive, was used only when really necessary.

This resulted in a network topology (Figure 2) that was able to reach down to the household level – and much later, to mobile base stations – while minimizing the number of international trunks and exchanges.

But the network architectures that served so well for nearly a century of voice communications can’t meet the challenges today’s service providers face. They need to be able to rapidly adjust to fluctuations in demand, bring new services to market in hours or days, and be prepared to accommodate new services or applications that haven’t even been thought of yet.


As the IT world virtualized, it found that a small number of warehouse-size data centers are more cost-effective than many small, widely dispersed ones. This centralization is possible because, unlike network operators, companies that build data centers do not have to build and operate local access networks. In addition, bandwidth is now cheap and plentiful, making the geographical placement of data centers extremely flexible.

Amazon Web Services, the largest cloud provider in the world, is a good example of a highly centralized architecture, serving its customers from as few as 9 locations worldwide. However, it is the nature of Amazon’s business – largely web, content, and transactional applications – that makes this architecture optimal. These types of applications and services can tolerate the latency created by data packets having to travel long distances between user equipment and servers.


In contrast to IT clouds, such as Amazon’s, distribution matters in NFV networks. Many carrier applications have needs that are ill-suited to a centralized architecture. These demands are related to network offload, low latency and jitter, availability, security, and regulations.

Network offload
In the past, voice traffic dominated the networks, but today, video and data traffic use the majority of network capacity[1]. Both benefit from a distributed architecture, but for different reasons. Streaming video content from a central source to each viewer is inefficient. Content distribution and multicasting are the main ways of overcoming this inefficiency, and both benefit from a hierarchical, distributed architecture.

Latency and jitter
An obvious issue with centralized deployments is signal latency. In telecommunication networks, data travels at close to the speed of light, which is very fast but can still be noticeable over long distances. Additional latency is caused by switches, routers, and other network equipment along the way.

A comparatively large portion of latency happens within wireline or wireless access networks. The smallest latencies occur with fiber access, while mobile networks can cause round-trip delays of up to several hundred milliseconds.
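To make the distance penalty concrete, here is a back-of-the-envelope sketch. It assumes light propagates in optical fiber at roughly two-thirds of its vacuum speed (about 200,000 km/s, or 5 µs per km); the distances chosen are illustrative.

```python
# Rough propagation-delay estimate for optical fiber.
# Assumes ~2/3 of the vacuum speed of light in glass (refractive
# index ~1.5), i.e. about 200 km per millisecond one way.

FIBER_SPEED_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """One-way fiber distance -> round-trip propagation delay in ms."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for km in (10, 100, 1000, 5000):
    print(f"{km:>5} km  ->  {round_trip_ms(km):6.2f} ms RTT")
```

Even before any switching or queuing delay, a 1,000 km path adds 10 ms of round-trip delay; equipment along the path only makes this worse.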

But latency is not the only critical factor for voice services, or any other full-duplex, real-time traffic, such as video conferencing. Jitter (delay variation) also needs to be contained. To recreate a continuous speech flow in the presence of jitter, the signal needs to be buffered at the destination. If the jitter is high, these buffers will cause an unnatural delay in two-way conversations.
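The relationship between jitter and added delay can be sketched as follows: a playout buffer must be at least as deep as the spread in packet arrival delays, and every millisecond of buffering adds directly to the conversational delay (ITU-T G.114 recommends keeping one-way delay below about 150 ms for voice). The delay samples here are purely hypothetical.

```python
# Sketch: sizing a de-jitter (playout) buffer for two-way voice.
# The buffer must absorb the worst observed delay variation; the
# delay samples below are illustrative, not measured values.

def playout_buffer_ms(packet_delays_ms):
    """Minimum buffer depth so the slowest packet still plays on time:
    the spread between the slowest and fastest observed packets."""
    return max(packet_delays_ms) - min(packet_delays_ms)

one_way_delays = [42, 45, 41, 60, 44, 48, 55, 43]  # ms, hypothetical
buffer_ms = playout_buffer_ms(one_way_delays)
# Playout deadline = fastest packet's delay + buffer = slowest delay
worst_case_ms = min(one_way_delays) + buffer_ms
print(f"jitter buffer: {buffer_ms} ms, worst-case one-way delay: {worst_case_ms} ms")
```

The closer the media path stays to the subscribers, the smaller both the delay spread and the buffer needed to hide it.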

Reliability and availability
Service reliability and availability are further reasons for distributing network resources, including disaster survivability. One type of risk is a “smoking hole” scenario – the total loss of a single site – or a larger, geographically defined disaster. The 2011 Tohoku tsunami, for example, prompted Japanese service providers to thoroughly review and change their disaster protection measures to reduce the possible impact of similar future events.

Distribution helps to restore service quickly to users affected by an incident. The distributed installations need not be small or extremely numerous, but they do need to be independent and geographically far enough apart (over 1,000 km) that a single disaster cannot affect them all.
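A site-planning check for this separation requirement can be sketched with a great-circle distance calculation. The haversine formula is standard; the site coordinates below are illustrative.

```python
# Sketch: verifying that candidate sites are >1000 km apart, so no
# single regional disaster takes out more than one of them.
# Site list is illustrative; distances use the haversine formula.

import math
from itertools import combinations

EARTH_RADIUS_KM = 6371

def great_circle_km(a, b):
    """Haversine distance between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

sites = {"Tokyo": (35.68, 139.69), "Osaka": (34.69, 135.50),
         "Sapporo": (43.06, 141.35)}
for (n1, p1), (n2, p2) in combinations(sites.items(), 2):
    d = great_circle_km(p1, p2)
    print(f"{n1}-{n2}: {d:.0f} km  independent={d > 1000}")
```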

Security and regulatory concerns
Distribution can be both a risk and an opportunity for improved security[2]. Distributed networks can be riskier because they have more locations and network connections to attack. Once attackers have infiltrated one location, they may be able to spread to others and even attack critical management and orchestration functions.

Conversely, distribution can also mitigate risk. Carefully distributed NFV applications can quarantine localized attacks, leaving the vast majority of nodes and users unaffected. With proper security measures in place, attacks can be detected automatically and infected elements isolated and restored, while remaining elements continue to operate.

Government regulation may be another reason for distribution at the national or multinational level. Beyond reliability and availability, critical infrastructure may have to be contained within national boundaries, as is the case in the EU for authentication services that store personal data.


For the reasons outlined, most real-world NFV deployments require service providers to geographically distribute virtualized functions for some services. As mentioned, most carriers also operate access networks. Both fixed and wireless access networks have limited placement flexibility.

The trend in mobile networks towards using small cell radio and sensor networks to improve coverage and capacity means that network elements are placed closer to users than ever. And latencies in mobile access networks, even LTE, dictate that some services need to be highly distributed. Other functions — call them service functions — have more flexibility, and variables beyond proximity to users will determine what level of distribution will optimize the balance between customer experience and cost.

Video services and virtual content delivery networks (vCDN)
Today, the majority of network capacity for consumer services is used for video downloads or streaming. This is also the fastest growing service and will put increasing strain on current network architectures.

Content delivery networks (CDNs) are the principal way to offload video traffic on international and some wide area networks. Instead of repeatedly streaming or downloading the same content from a central server, CDNs cache the content closer to the subscriber. The load that video-on-demand traffic places on the network can be further reduced by using specialized CDNs or by distributing the video platforms to cache the content even closer to the users than is the case today.
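The offload effect of caching closer to subscribers can be sketched with a toy calculation: the fraction of requests served from a regional cache never touches the core network. The demand and hit-ratio figures here are illustrative.

```python
# Sketch: upstream (core-network) video traffic that remains after a
# regional CDN cache absorbs part of the demand. All numbers are
# illustrative, not measured carrier data.

def core_traffic_gbps(demand_gbps: float, cache_hit_ratio: float) -> float:
    """Traffic that still has to be fetched from the central origin."""
    return demand_gbps * (1 - cache_hit_ratio)

for hit in (0.0, 0.5, 0.9):
    print(f"cache hit ratio {hit:.0%}: "
          f"{core_traffic_gbps(100, hit):5.1f} Gb/s from origin")
```

Pushing caches deeper into the hierarchy raises the effective hit ratio and shrinks the load on every upstream link.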

Virtual radio access network (vRAN)
The vRAN is one of the most latency-sensitive NFV applications identified by the ETSI NFV Industry Specification Group. Signal latencies between remote radio heads and baseband units need to be in the range of microseconds to a few milliseconds. This limits the fiber distance between them to less than about 40 km.
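The ~40 km figure falls out of the propagation speed of light in fiber. As a sketch, assuming roughly 5 µs of one-way propagation delay per km of fiber and an illustrative one-way latency budget of about 200 µs between remote radio head and baseband unit:

```python
# Sketch: fronthaul reach from a latency budget.
# Assumes ~5 microseconds of one-way propagation delay per km of
# fiber (refractive index ~1.5); the budget value is illustrative.

PROPAGATION_US_PER_KM = 5.0

def max_fiber_km(one_way_budget_us: float) -> float:
    """Longest fiber run that fits within a one-way latency budget."""
    return one_way_budget_us / PROPAGATION_US_PER_KM

print(max_fiber_km(200))  # -> 40.0
```

Tighter budgets, or extra delay from intermediate equipment, shrink the feasible radius further, which is why baseband pooling sites must sit close to the radio heads they serve.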

Virtual customer premises equipment (vCPE)
Consumer and business CPE, such as DSL routers, firewalls and set-top boxes, are the most numerous network elements. Service providers are interested in virtualizing CPE functions to avoid costly truck rolls for maintenance or upgrades.

When functions are moved out of the CPE and into the network, resilience and performance become major concerns because so many more customers can be affected by outages. In most cases, this means subscribers are grouped into regional clusters, with each cluster having access to the resources of neighboring clusters as a form of redundancy. Management of this distribution, therefore, becomes very important.


Many service providers have taken the advent of NFV as an opportunity to start rethinking their network architectures. Service providers are looking to simplify architectures and move toward a more consistent and flexible model.

For the reasons discussed above – network offload, latency, jitter, reliability, availability and security – service providers will choose a multi-tier network architecture giving them the flexibility to distribute network functions optimally (Figure 4). In reality, existing organizational structures and ownership will also influence technical architecture.

This article is excerpted from the Alcatel-Lucent strategic white paper entitled “Why distribution matters in NFV”.

Related Material

  • Maintaining Service Quality in the Cloud (TechZine article)
  • Video Shakes up the IP Edge (white paper)
  • NFV Insights Series: Business Case for Moving DNS to the Cloud (white paper)
  • NFV Insights Series: Providing Security in NFV – Challenges and Opportunities (white paper)


  1. Video Shakes up the IP Edge white paper
  2. NFV Insights Series: Providing Security in NFV – Challenges and Opportunities white paper

To contact the author or request additional information, please send an email to Andreas Lemke.

About Andreas Lemke

Andreas Lemke joined Alcatel-Lucent from the German National Institute for Integrated Publication and Information Systems (IPSI). While at Alcatel-Lucent, Andreas has held various positions in research, product and solution management, technology management and marketing with a continuous focus on the next generation of network innovation. Currently Andreas is a leading NFV industry evangelist heading up the marketing efforts for the CloudBand™ NFV platform. Andreas holds degrees in computer science from the University of Stuttgart, Germany, and a Ph.D. in Computer Science from the University of Colorado, Boulder, USA.
