May 17, 2013

Fiber Access Innovations That Pay Off

Cutting the cost to connect

With smarter traffic management in fiber access networks, service providers can accelerate adoption rates and the return on their fiber investments. Smart traffic management gives service providers greater control over individual traffic flows and their unique quality of service (QoS) requirements, putting them in a better position to connect more fiber subscribers. For example, they can offer:

  • Tiered pricing options to attract more subscribers to entry-level fiber services and create new upsell opportunities as subscribers become hooked on fiber speeds.
  • Flexible models for sharing the network between different service providers, based on agreed QoS levels and fair billing. Such models can grow the subscriber base through wholesale, or even allow providers to share the risk and investment in common network infrastructure.
  • Better quality of experience (QoE) for popular video services delivered in any form — managed or over-the-top.

Increasing take-up rates for fiber services has proven financial benefits. According to Alcatel-Lucent research, increasing the fiber-to-the-home (FTTH) take rate by just five percentage points, from 20% to 25%:

  • Reduces capital expenditures (CAPEX) per subscriber by 15%
  • Reduces payback time by 20%
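
The mechanism behind these numbers is straightforward: much of the outside-plant cost is incurred per home passed, so a higher take rate spreads it over more paying subscribers. The sketch below illustrates this with hypothetical cost figures; the constants and the function are illustrative assumptions, not values from the Alcatel-Lucent study.

```python
# Illustrative sketch (not the Alcatel-Lucent model): per-subscriber CAPEX
# falls as the take rate rises because shared network costs (feeder fiber,
# splitters, central office equipment) are spread over more subscribers.
# All cost figures below are hypothetical.

SHARED_CAPEX_PER_HOME_PASSED = 600.0   # assumed shared cost per home passed
DEDICATED_CAPEX_PER_SUB = 300.0        # assumed drop + ONT cost per subscriber

def capex_per_subscriber(take_rate: float) -> float:
    """CAPEX per connected subscriber at a given take rate (0..1)."""
    return SHARED_CAPEX_PER_HOME_PASSED / take_rate + DEDICATED_CAPEX_PER_SUB

low, high = capex_per_subscriber(0.20), capex_per_subscriber(0.25)
print(f"20% take rate: {low:.0f} per subscriber")
print(f"25% take rate: {high:.0f} per subscriber")
print(f"Reduction: {100 * (1 - high / low):.0f}%")  # ~18% with these assumptions
```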

Every network has a fiber future

Building a better business case for fiber access networks is important for every service provider. No matter what a service provider's strategy for network evolution may be, every network has a fiber future.

The smart traffic management capabilities, which will help service providers ensure their fiber investments pay off, were previously found only on high-end edge routers and aggregation switches. Now they have come to fiber access switches, because only the access node has the full view of the traffic on the last mile. The smart traffic manager is delivered on advanced gigabit passive optical network (GPON) line cards in a three-step process:

  1. Traffic classification
  2. Queuing and buffer acceptance
  3. Scheduling
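
Conceptually, the three steps form a pipeline that every frame traverses on the line card. The sketch below shows this flow in simplified Python; the function names and data structures are hypothetical, not an actual line-card API, and each stub is illustrated in more detail in the sections that follow.

```python
# Illustrative three-step pipeline (hypothetical names, not a real
# line-card API). Each arriving frame is classified, offered to a
# queue, and later scheduled for transmission.
from collections import deque

def classify(frame) -> str:
    """Step 1 (stub): inspect headers and return a queue id."""
    return frame.get("service", "best-effort")

def buffer_accept(queue, frame) -> bool:
    """Step 2 (stub): accept while the queue is below a fixed limit."""
    return len(queue) < 100

def schedule(queues):
    """Step 3 (stub): serve the first non-empty queue."""
    for q in queues.values():
        if q:
            return q
    return None

def on_frame_arrival(frame, queues):
    qid = classify(frame)                    # Step 1: pick a queue
    queues.setdefault(qid, deque())
    if buffer_accept(queues[qid], frame):    # Step 2: drop or buffer
        queues[qid].append(frame)

def on_transmit_opportunity(queues):
    q = schedule(queues)                     # Step 3: choose next queue
    return q.popleft() if q else None

queues = {}
on_frame_arrival({"service": "voice"}, queues)
print(on_transmit_opportunity(queues))       # -> {'service': 'voice'}
```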

Step 1: Traffic classification

To rate limit or control traffic for individual users, traffic flows must be identified on a per-service and a per-user basis. Until recently, there hasn't been an efficient way to bring this capability to the access network. Now, a new classification function in the network processor on GPON line cards can classify traffic according to specific or general criteria and place it in the appropriate queue.

The network processor parses the headers of the frames to determine the QoS for each packet, the flow it belongs to, and the treatment it should receive. To derive this information, the network processor automatically detects the different fields in the packet headers, extracts them, and then matches the fields against predefined patterns. It can make an exact match to a specific value, such as a Differentiated Services Code Point (DSCP) value. Or it can match against more general criteria, such as packets with a Layer 4 destination or source port in a certain numerical range.

This level of traffic classification goes well beyond the much simpler functionality typically provided. Today, most fiber access technology can extract only a certain number of fields from a very limited number of encapsulations. Then, only an exact match is attempted, not a range check, as the latter would imply a substantial degradation in packet handling performance (if it is possible at all).
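
As a concrete illustration of the two matching modes, the sketch below classifies a frame either by an exact DSCP match or by a Layer 4 port range. The rule set and field names are hypothetical examples; a real network processor performs this matching in hardware at line rate.

```python
# Illustrative classifier (hypothetical rules; real network processors do
# this in hardware). A rule can demand an exact field value (e.g., DSCP 46,
# Expedited Forwarding) or a value range (e.g., a block of L4 ports).

EXPEDITED_FORWARDING = 46  # DSCP value commonly used for voice

RULES = [
    # (description, predicate over parsed header fields, target queue)
    ("voice", lambda h: h["dscp"] == EXPEDITED_FORWARDING,   "q_voice"),
    ("video", lambda h: 16384 <= h["l4_dst_port"] <= 32767,  "q_video"),
]

def classify(headers: dict) -> str:
    """Match parsed header fields against the rules; first match wins."""
    for name, predicate, queue in RULES:
        if predicate(headers):
            return queue
    return "q_best_effort"   # default queue for unmatched traffic

# Exact match on DSCP wins first; range match catches the second packet.
print(classify({"dscp": 46, "l4_dst_port": 5004}))    # -> q_voice
print(classify({"dscp": 0,  "l4_dst_port": 20000}))   # -> q_video
```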

Step 2: Queuing and buffer acceptance

After the traffic is classified, the next step is to determine how it should be queued:

  • Is it high-priority traffic?
  • Is there enough space available to buffer the traffic?

Managing traffic on a per-service, per-user basis means buffer queues must also be managed on a per-service, per-user basis. This requires storing traffic in thousands of queues while still maintaining line-rate transmission, another major challenge that has kept smart traffic management technology from being deployed on access network equipment. Now, new technology advances mean that time-critical queue context can be stored locally on the network processor hardware. This approach is much more efficient, from both a power and a throughput perspective, than accessing this data on an external device.

Buffer acceptance techniques, which are used to determine whether queues can accommodate additional frames, are also now possible on advanced GPON line cards. Key techniques include:

  • Tail drop
  • Random early discard

Tail drop

In the tail drop technique, subsequent frames are dropped when a queue reaches a certain threshold, expressed as a number of frames or bytes. The tail drop technique is often applied to voice traffic because voice traffic queues are typically not that deep. A deep queue means that frames are experiencing a long delay, and voice cannot tolerate delays. As a result, voice queues are typically only about 10 frames deep. When voice frames arrive at a regular pace, it is rarely necessary to drop frames.

Random early discard

Random early discard is a more sophisticated buffer acceptance technique that works well in Transmission Control Protocol (TCP) environments and helps to avoid synchronization in the network. In the random early discard technique, a packet is intelligently dropped when congestion is imminent but has not yet happened.

To understand the benefits of random early discard, it helps to understand how TCP works. In TCP, the source sends packets one after another and expects an acknowledgement for each packet sent. If the acknowledgement does not arrive before the timeout threshold is reached, the source slows down packet transmission. If a single packet is not acknowledged, packet transmission is slowed down gradually. However, if a number of consecutive packets are not acknowledged, the source slows down packet transmission dramatically; that is, it enters the slow start state.

As network conditions improve, the TCP source gradually increases the speed at which it sends frames. At a certain point, the queue will once again reach its threshold. If the tail drop technique is active on that queue, a certain number of consecutive frames will be dropped from that point onward. As a result, the receive side will not receive the frames and will not send acknowledgements. The TCP source will detect the lack of response, conclude there is severe congestion in the network, and once again transition to the slow start state before increasing its transmission speed.

The random early discard technique helps to avoid these large swings in transmission speed. When a threshold is about to be reached, a single packet is proactively dropped. If the queue continues to fill, the drop probability is gradually increased. With proactive packet dropping, the TCP source receives the signal that there is congestion and is able to slow down packet transmission to avoid overflowing the buffer at the congestion point. Gradually slowing down avoids the large swings in packet transmission speed that normally occur during congestion.

When only the tail drop technique is used, TCP sources tend to oscillate between minimum and maximum transmission rates. The oscillation effect is further amplified when a high number of TCP flows are multiplexed in the same queue: all flows sharing the queue experience the same dramatic transmission speed fluctuations whenever the queue reaches its threshold. When the random early discard technique is used instead, there are typically only small fluctuations around an optimal transmission rate.

Through proper selection and configuration of these buffer acceptance control (BAC) algorithms for each queue, service providers can achieve differentiated treatment of the different traffic flows. In some cases, it is also desirable to differentiate between packets belonging to a single service flow. Placing these frames in different queues is not an option, as this would give rise to reordering issues. For example, to minimize the impact on QoE for video services during network congestion, different drop eligibility should be applied to P-frames, which contain only delta information, versus I-frames, which contain complete pictures. Typical solutions cannot distinguish between these video frame types, so video frames are indiscriminately dropped during congestion.
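
The tail drop and random early discard decisions described above can be sketched in a few lines of code. The sketch below is a simplified illustration, not a line-card implementation: the thresholds and the linear drop-probability profile are assumptions, and real RED implementations typically operate on an averaged rather than instantaneous queue depth.

```python
import random

def tail_drop_accept(queue_depth: int, limit: int = 10) -> bool:
    """Tail drop: accept frames until the queue reaches a fixed limit.
    A 10-frame limit suits shallow voice queues, as described above."""
    return queue_depth < limit

def red_accept(queue_depth: int, min_th: int = 20, max_th: int = 60,
               max_drop_prob: float = 0.1) -> bool:
    """Random early discard (simplified): below min_th, accept everything;
    between min_th and max_th, drop with linearly increasing probability;
    above max_th, drop everything. Real RED averages the queue depth."""
    if queue_depth < min_th:
        return True
    if queue_depth >= max_th:
        return False
    drop_prob = max_drop_prob * (queue_depth - min_th) / (max_th - min_th)
    return random.random() >= drop_prob
```

The occasional early drop gives a single TCP source the congestion signal before the buffer overflows, which is why sources back off smoothly instead of synchronizing on bursts of tail drops.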

Step 3: Scheduling

After the frames are in their respective queues, the next step is prioritization: deciding from which queue the next frame will be selected. The goal is to prioritize and schedule frame transmission in a way that minimizes the negative impact that congestion has on QoS and QoE. To achieve this, the technology selects frames from queues with delay-sensitive flows before it selects frames from delay-tolerant queues, such as those where high-speed Internet (HSI) data is stored. These actions help ensure that premium and delay-sensitive traffic, such as video, is given higher priority and delivered with better QoS.

Most solutions that exist today cannot distinguish between flows with different priorities in HSI traffic. As a result, flows that require low loss or low delay for high QoE are treated the same as flows that can tolerate delays, which leads to service degradation.
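
A minimal sketch of the idea, assuming strict priority between delay-sensitive queues and an HSI queue; the queue names are illustrative, and real line-card schedulers combine strict priority with weighted schemes so that low-priority queues are not starved.

```python
from collections import deque

# Queues in strict priority order: delay-sensitive traffic first,
# delay-tolerant high-speed Internet (HSI) data last. Names are illustrative.
PRIORITY_ORDER = ["q_voice", "q_video", "q_hsi"]

def schedule(queues: dict):
    """Strict-priority scheduler: always serve the highest-priority
    non-empty queue. Real schedulers add weighted scheduling to keep
    low-priority queues from starving."""
    for name in PRIORITY_ORDER:
        q = queues.get(name)
        if q:
            return q.popleft()
    return None   # all queues empty

# Example: video is served before HSI even though both are waiting.
queues = {"q_voice": deque(), "q_video": deque(["v1"]), "q_hsi": deque(["d1"])}
print(schedule(queues))   # -> v1
print(schedule(queues))   # -> d1
```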

Shared networks demand individual control

Passive optical networks (PONs) are shared networks: all subscribers are vying for bandwidth for each of their services. As demand for data services grows and the number of users grows, the ability to distinguish among traffic flows, apply different priority levels, and intelligently discard frames will become increasingly important. And the access node is the best place to embed this intelligence, as it has a unique view of last-mile traffic.

With smart traffic management capabilities enabled by hardware and software innovations, service providers can deliver the QoS and QoE that individual traffic flows need and individual subscribers expect. Service providers will be in a better position to connect more fiber subscribers, and they can build a better business case for fiber access investments, whatever their fiber future.

To contact the authors or request additional information, please send an e-mail to networks.nokia_news@nokia.com.

About Ana Pesovic

Ana heads the Fixed Networks Fiber marketing activities at Nokia. She has built up extensive international telecom experience, with positions in sales, pre-sales and R&D in Germany, Spain, Portugal, Belgium and India. Ana has a master's degree in Informatics and Computer Science from the University of Belgrade. As a member of the Board of Directors of the FTTH Council Europe, she is a strong advocate of fiber.
