Jan 12 2011

An Innovative Tool for Measuring Video Streaming QoE

Bell Labs has developed new tools for measuring the quality of experience (QoE) of video streaming services more precisely, particularly video quality. Video has greater emotional impact and renders visual complexity better than any other medium, and successful video delivery requires an understanding of these twin factors and of how best to shape the video experience to deliver them. Delivering an outstanding QoE is critical for video streaming service providers: it is a differentiating factor in attracting new users and the best way to enhance multimedia revenue streams. Better QoE also affects how consumers choose services and handsets, and in the mobile multimedia value chain it gives content providers the assurance that their content can be viewed as intended.

Rapid analysis of video traffic on the network is important because it allows content owners and distributors to quickly detect impaired performance and take corrective action as necessary. Alcatel-Lucent research shows that 12% of monthly service calls received per household dealt with video quality issues; bad QoE was the highest individual cause of churn. There is also a correlation between video user engagement and parameters such as bit-rate transitions and rebuffering events and, to a lesser extent, the number of dropped frames. Measuring and monitoring video quality KPIs is therefore critical, because user numbers and engagement drive multimedia revenues.

Furthermore, delivering HD and 3D content to the home is key to increasing average revenue per user (ARPU): HD video on demand (VoD) content is priced roughly 15% higher than SD content, and analysis of recent retail 3D movie prices shows that they generally cost $1.00 to $2.50 more than HD titles. Other new kinds of paid applications, such as mobile PVR or the mobile extension of existing IPTV services, can be successfully monetized only if QoE is close to perfection.

Defining the indefinable

For video streaming service providers, an outstanding QoE is a differentiating factor in attracting new users and significant in reducing churn, because users stay as long as their perceived QoE matches their expectations. However, perception of QoE is inherently subjective, and levels of expectation vary between users; if the perceived QoE falls short of those expectations, the end user will readily switch to another service provider. From the end user's point of view, the overall quality of experience of a video streaming service depends mainly on a few key criteria:

  • Video quality
  • Audio quality
  • Speed of service access and channel switching
  • Frequency of service interruption
  • Price and pricing model
  • After-sales service responsiveness

Current measures don’t measure up

Due to the mixture of variables involved in the overall perception of quality, it is extremely difficult for operators to measure QoE accurately. Nevertheless, one of the first criteria to tackle is video quality. Measuring video quality is not simple, as it depends on various factors such as the viewing conditions, the type of device used and the viewing environment (for example, end-user expectations may differ at home versus on the move).

The most commonly used metric for video quality is the Mean Opinion Score (MOS). It consists of soliciting user opinions to rate video quality after compression and transmission, yielding a rating between 1 (bad) and 5 (excellent) for perceived content quality. MOS was originally conceived to evaluate the audio quality of compressed speech in the telecommunications world. The MOS method requires an extensive battery of tests in a controlled environment with a large panel of users. As this is time consuming and cumbersome, there has been a trend toward objective perceptual video quality algorithms that do not require recruiting end users for tests. The goal of such objective quality assessment is to develop quantitative measures that can automatically predict perceived image quality; very often, objective methods estimate an objective MOS by predicting the subjective ratings that a panel of human viewers would give. Various objective video quality tools have been developed, and they may be classified into three categories (a short illustrative sketch follows the list):

  • Full Reference (FR) methods compute the quality difference between a ‘perfect’ version of the image/video and a ‘distorted’ version
  • No Reference (NR) methods estimate the quality of the signal without any knowledge of the ‘perfect’ version
  • Reduced Reference (RR) methods have access to partial information regarding the ‘perfect’ version to compare to the quality of the distorted signal
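
To make the full-reference idea concrete, here is a minimal Python sketch of a classic FR pixel metric, PSNR, computed between a reference frame and a distorted frame. The linear mapping from PSNR to a 1-5 MOS-like score is an invented illustration, not a standardized model; real objective metrics are calibrated against subjective test data.

    import numpy as np

    def psnr(reference, distorted):
        """Peak signal-to-noise ratio between two 8-bit frames (a classic FR metric)."""
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10(255.0 ** 2 / mse)

    def psnr_to_mos(p):
        """Illustrative, non-standard linear map from PSNR (dB) to a 1-5 MOS-like
        score, assuming ~20 dB reads as 'bad' and ~40 dB as 'excellent'. Real
        objective metrics are fitted to subjective test data instead."""
        return float(np.clip(1.0 + 4.0 * (p - 20.0) / 20.0, 1.0, 5.0))

    # Example: score a frame against a noisy version of itself
    ref = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
    noise = np.random.normal(0, 5, ref.shape).astype(np.int16)
    noisy = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    score = psnr(ref, noisy)
    print(f"PSNR: {score:.1f} dB -> estimated MOS: {psnr_to_mos(score):.2f}")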

Furthermore, objective video quality assessment algorithms can be classified as pixel-based, bit-stream-based or a combination of both. Pixel-based metrics require complete decoding of the received video sequence, whereas bit-stream metrics only parse the encoded video stream in order to estimate visual quality. Hybrid quality metrics use a combination of pixel information and bit-stream processing.

Tools based on FR methods require access to the original video, which prevents them from being deployed in most practical situations. Moreover, they require a great deal of processing power, so they serve mainly as tools for designing image- and video-processing algorithms in lab testing; they cannot be deployed to monitor a live service. RR methods are less complex in terms of processing overhead, as they use a reduced amount of information, but accessing even partial information about the original video is still not possible in most practical situations. From a real-time monitoring point of view, NR bit-stream-based video quality metrics are clearly the most interesting, since they require neither access to the original video sequence nor full decoding of the received video stream. Because NR methods do not require any reference information, they can be used in real time, and their processing cost is lower than that of full-reference approaches. However, the price of this flexibility is reduced accuracy in the quality predictions.
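
As a toy illustration of the NR bit-stream approach, the Python sketch below estimates quality from parameters that a parser can read without decoding any pixels (bitrate, resolution, frame rate). The bits-per-pixel anchors are invented for illustration; production NR metrics are trained and validated against subjective scores.

    from dataclasses import dataclass

    @dataclass
    class StreamParams:
        # Fields a bit-stream parser can extract without decoding any pixels
        bitrate_kbps: float
        width: int
        height: int
        fps: float

    def nr_bitstream_mos(p):
        """Toy NR estimate: bits per pixel per frame drives predicted quality.
        The 0.02 / 0.20 bpp anchors are illustrative assumptions, not a
        standardized or trained model."""
        bpp = (p.bitrate_kbps * 1000.0) / (p.width * p.height * p.fps)
        mos = 1.0 + 4.0 * (bpp - 0.02) / (0.20 - 0.02)
        return max(1.0, min(5.0, mos))

    # A 720p/25fps stream at 1.5 Mb/s lands around "fair" with these anchors
    print(nr_bitstream_mos(StreamParams(bitrate_kbps=1500, width=1280, height=720, fps=25.0)))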

In general, 'traditional' tools are unable to simulate the behavior of all kinds of devices because they do not include a means of simulating a player buffer. Due to this constraint, such tools cannot analyze the video quality that would be restored by a given terminal connected to an operator's network, which means the MOS scores they provide can never truly reflect reality.

New tools to measure QoE accurately

Research by Bell Labs has led to the development of new tools and algorithms to assess QoE more accurately. Chief among these is the Alcatel-Lucent video inspector tool. Mobile video arrives through a wide heterogeneity of content provider platforms, protocols, smartphones, mobile operating systems and video apps, so the first challenge for operators is to keep pace with all the new video protocols/codecs, client applications and devices. The video inspector tool is able to assess the end-user experience for a wide variety of standard and proprietary streaming solutions, and was designed to be easily extendable with new protocols/codecs by leveraging the protocol analysis facilities of the Wireshark tool and the wide decoding capabilities of the FFMPEG/VLC players. The video inspector tool currently supports the most commonly used video delivery protocols: RTP/RTSP streaming, Apple HTTP Live Streaming (HTTP adaptive streaming as used on the Apple iPhone/iPad) and HTTP Flash progressive download. The architecture is summarized in Figure 1.
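
As a small illustration of how off-the-shelf decoding tools can be leveraged for stream identification, the Python sketch below shells out to ffprobe (part of FFmpeg) to list the codecs inside a captured media file. The file name is hypothetical, standing in for a segment reassembled from a packet capture; the actual tool integrates Wireshark dissectors and decoder libraries far more deeply.

    import json
    import subprocess

    def identify_streams(media_path):
        """Ask ffprobe for the codecs inside a captured media file, rather
        than hand-writing a parser for every container format."""
        result = subprocess.run(
            ["ffprobe", "-v", "error",
             "-show_entries", "stream=codec_type,codec_name,width,height",
             "-of", "json", media_path],
            capture_output=True, text=True, check=True)
        return json.loads(result.stdout).get("streams", [])

    # "capture_segment.ts" is a hypothetical file extracted from a capture
    for s in identify_streams("capture_segment.ts"):
        print(s.get("codec_type"), s.get("codec_name"))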

The video inspector tool simulates the buffering model and end-user video rendering, enabling analysis of the relationship between low network/protocol layer issues and any degradation of the end-user experience (Figures 2 and 3 depict examples of buffer filling with buffer models of 3 secs and 15 secs for the RTSP streaming protocol).
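
The buffering behavior can be sketched with a simple playout-buffer simulation: media arrives at the measured network rate, playback starts once an initial buffer target (e.g. 3 s or 15 s of media, as in the figures) is filled, and a rebuffering event is logged whenever the buffer drains to zero. The Python sketch below is a simplified illustration of the idea, not the tool's actual model.

    def simulate_playout(arrival_kbps, media_kbps, startup_buffer_s):
        """Simulate a playout buffer over 1-second ticks.

        arrival_kbps     -- measured network throughput for each second
        media_kbps       -- encoding rate of the stream
        startup_buffer_s -- seconds of media buffered before playback starts
        Returns the tick indices at which rebuffering (stall) events occur.
        """
        buffered_s = 0.0      # seconds of media currently in the buffer
        playing = False
        stalls = []
        for t, rate in enumerate(arrival_kbps):
            buffered_s += rate / media_kbps       # media downloaded this second
            if not playing and buffered_s >= startup_buffer_s:
                playing = True                    # initial buffering complete
            if playing:
                buffered_s -= 1.0                 # one second of media consumed
                if buffered_s < 0.0:
                    buffered_s = 0.0
                    playing = False               # buffer empty: rebuffering
                    stalls.append(t)
        return stalls

    # A throughput dip stalls a 3 s buffer but not a 15 s one (at the cost
    # of a longer startup delay)
    throughput = [600] * 5 + [100] * 4 + [600] * 10
    print("3 s buffer stalls:", simulate_playout(throughput, 500, 3))
    print("15 s buffer stalls:", simulate_playout(throughput, 500, 15))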

It also offers a KPI/KQI (Key Performance Indicator/Key Quality Indicator) framework that covers all the layers, from the IP layer through multimedia transport, the buffer model and the codec layer to end-user perception, along with flexible graphs for in-depth technical assessments and troubleshooting (Figure 4). By using a lightweight analysis procedure that requires only a single capture point, the video inspector tool enables both offline and live capture, in the lab or in a real, deployed network.
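
To show how KPIs from those layers might hang together for a single session, here is a minimal Python data structure; every field name is an assumption made for illustration, not the tool's actual schema.

    from dataclasses import dataclass

    @dataclass
    class SessionKpis:
        """Illustrative per-session KPI/KQI record spanning the layers listed
        in the text; the field names are hypothetical."""
        # IP / transport layers
        packet_loss_pct: float = 0.0
        jitter_ms: float = 0.0
        # Buffer model
        startup_delay_s: float = 0.0
        rebuffering_events: int = 0
        # Codec layer
        codec: str = ""
        dropped_frames: int = 0
        # End-user perception (KQI)
        estimated_mos: float = 0.0

    session = SessionKpis(packet_loss_pct=0.4, rebuffering_events=2, estimated_mos=3.1)
    print(session)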

The main goal of the tool is to monitor video quality on an operator's network, but it does so in a way that provides a diagnostic analysis of the possible root cause of unsatisfactory video quality. It offers an objective analysis of user complaints and reduces the time required to analyze and resolve network issues, because it allows several parameters that can cause impaired or frozen frames to be checked simultaneously. The video inspector tool is one component of the Video Quality of Experience Assessment service provided by Alcatel-Lucent. Its key benefit is that it can handle, in a single tool, the plurality of existing video delivery systems (for example, YouTube videos, video streaming on the Apple iPhone, and 3GPP RTSP streaming). Further assessment methods for video impairment are under development.

Making the difference

Video quality can make or break a service deployment. The video inspector tool is a key component of a range of services that Alcatel-Lucent provides today to assess quality impairments and offer corrective action plans for both new and existing deployments. They are available as part of the company’s Multimedia Integration portfolio, which also includes end-to-end video monitoring, mobile network optimization services, and integration of content delivery solutions to provide consumers with a consistent and high-quality video experience. To contact the authors or request additional information, please send an email to networks.nokia_news@nokia.com.

About Pascal Hubbe
Pascal Hubbe is responsible for a video quality laboratory in the MMI practice of MIT at Alcatel-Lucent in Villarceaux, France. He received a Master's degree in Signal Processing from Orleans University, France, in 1990, and worked for several years on the development of smart card operating systems for GSM (SIM cards) at Solaic/Schlumberger. In 1996, he joined the Alcatel Mobile Phone company as a SIM card expert and was the lead representative for Alcatel in ETSI standardization (SMG9 group). Since 2005, he has been leading video quality expertise on mobile TV solutions for wireless networks. His interests focus on service offerings related to the video quality of experience for operators.