EBU TECH 3337

A Proposed Time-Stamped Delay Factor (TS-DF) algorithm for measuring Network Jitter on RTP Streams

Source: N/IMP
Status: Information
Geneva, January 2010
Contents

Abstract
1. Introduction
2. Network Jitter Measurement Overview
2.1 Jitter calculation in RFC 3550
2.2 RFC 4445
3. Time-Stamped Delay Factor (TS-DF)
3.1 Method of measurement
4. Summary
5. Informative References
6. Acknowledgements
A Proposed Time-Stamped Delay Factor (TS-DF) algorithm for measuring Network Jitter on RTP streams

EBU Committee: NMC — First Issued: 2010

Keywords: Network Jitter, RTP Stream, Time-Stamped Delay Factor, TS-DF

Abstract

This document defines a time-stamped delay factor (TS-DF) algorithm that can be used as a tool to measure IP network jitter and its cumulative effect for applications such as video and audio streaming. The algorithm is suitable for measuring IP network jitter in MPEG Transport Streams (MPEG TS) over IP, voice over IP, and uncompressed video (e.g. SDI) over IP. It aims to address the measurement problems that the MDI Delay Factor (RFC 4445) has when measuring variable bit rate (VBR) media streams. For constant bit rate (CBR) streams, the algorithm produces results comparable to the MDI Delay Factor. Because timestamp information is required for the calculation (typically the timestamp field of the RTP header), the algorithm only applies to streams carried over time-stamped protocols; it is not suitable for raw UDP traffic.

1. Introduction

With the wide adoption of IPTV and voice-over-IP services, an increasing amount of video and voice traffic is carried over IP networks. To enable effective use and management of these services, there has been strong demand from both network providers and service providers to understand network conditions such as delay, jitter and packet loss rate. RFC 4445 [1], published in 2006, proposes a Media Delivery Index (MDI) as a diagnostic tool that can measure both the instantaneous and longer-term behaviour of networks carrying streaming media. One of the drawbacks of the MDI jitter calculation, however, is that it assumes constant bit rate (CBR) operation, in which the inter-packet gap is constant.
In reality, more and more media, and especially video, is transmitted using variable bit rates (VBR) to save bandwidth. The aim of this document is to propose a new method of measuring network jitter which is effective for both CBR and VBR media streams. To achieve this, the RTP protocol must be used, because the measurement correlates the time-stamp field in the RTP header with the arrival time of the IP packets. Several IPTV implementation guidelines recommend or mandate the use of RTP as it offers significant advantages over raw UDP, including time-stamp and sequence control [2], [3].
2. Network Jitter Measurement Overview

One method commonly used for measuring network jitter in real-time transmission of media over IP (typically multicast, unicast or broadcast applications) is that defined in RFC 3550 [4], which is based on correlating the time-stamps found in the RTP header with the arrival times of the IP packets. A second method consists of calculating the Media Delivery Index: Delay Factor (MDI:DF) defined in RFC 4445, which has the added advantage of being protocol-independent. It is based on measuring the flow imbalance between the nominal media rate and the arrival rate of the traffic. An estimate of the cumulative jitter can be derived from this parameter. Both methods are described here, along with their limitations when applied to variable bit rate streams.

2.1 Jitter calculation in RFC 3550

The RTP protocol is often used in conjunction with the RTCP control protocol. In RTCP the receiver has the ability to send network performance reports to the transmitter. One of the reported metrics is jitter, and RFC 3550 defines an algorithm for its calculation based on the concept of the Relative Transit Time (RTT) between pairs of consecutive packets. If D(i-1, i) is the Relative Transit Time of two consecutive packets, jitter is calculated according to the following equation, defined in [4]:

J(i) = J(i-1) + (|D(i-1,i)| - J(i-1))/16

Note that in this equation the absolute value of the RTT is used. In consequence, both packets arriving late and packets arriving early (according to their RTTs) increase the jitter value. In reality, the sign of the RTT is important for buffer analysis purposes, as late arrivals and early arrivals have opposite effects. In addition to using the absolute value of D, the gain factor of 1/16 used in the equation effectively creates a low-pass filter that removes high-frequency variations of the jitter.
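The RFC 3550 estimator above can be sketched in a few lines. This is a minimal illustration, not a normative implementation: the function name is an assumption, and it takes plain numbers for arrival times and RTP timestamps, assuming both have already been converted to a common unit (in a real receiver the RTP timestamp must first be rescaled from the media clock rate).

```python
def rfc3550_jitter(arrival_times, rtp_timestamps):
    """Running interarrival-jitter estimate per RFC 3550.

    arrival_times and rtp_timestamps are parallel sequences in the
    same time unit. Returns the final value of J.
    """
    j = 0.0
    prev_transit = None
    for r, s in zip(arrival_times, rtp_timestamps):
        transit = r - s                       # transit time of this packet
        if prev_transit is not None:
            d = abs(transit - prev_transit)   # |D(i-1, i)|
            j += (d - j) / 16.0               # 1/16 gain: low-pass filter
        prev_transit = transit
    return j
```

Note how the 1/16 gain makes the estimate converge slowly: a single packet that arrives 5 units late only raises J to 5/16, which is precisely the peak-smoothing behaviour the next paragraph criticises.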
For the purposes of measuring the cumulative effect of network jitter and its buffer implications, the peak values are important. Therefore, this jitter calculation is not effective in detecting the buffer overflow or underflow conditions that a decoder might encounter, as required by network operators and service providers.

2.2 RFC 4445

RFC 4445 was published in 2006 and defines a Media Delivery Index (MDI) measurement that can be used as a diagnostic tool or as a quality indicator for monitoring a network intended to deliver applications such as streaming media, MPEG video, Voice over IP, or other information sensitive to arrival time and packet loss. The MDI measurement consists of two components, the Delay Factor (DF) and the Media Loss Rate (MLR). The Delay Factor is based on measuring the flow imbalance between the nominal media rate and the arrival rate of the traffic. The calculation of MDI:DF is based on the concept of a virtual buffer. This buffer has a fill rate, dictated by the arrival of IP packets, and a drain rate, derived from the nominal rate of the stream. This method has proved very effective for constant bit rate (CBR) streams, where the drain rate is constant and therefore known at all times. In VBR streams, the bit rate varies over time, typically from a few hundred kbit/s to 5-6 Mbit/s within a few seconds in a statistical multiplexing environment. Because of this variation, the DF calculation in VBR streams has an inherent error that can be of the order of 100 ms. Therefore, MDI:DF can potentially produce misleading results, as it varies with stream data rather than with network
conditions. Some approximations have been proposed to adapt the Delay Factor calculation described in RFC 4445 to variable bit rate streams, but they involve computing an average of the media rate to feed into the DF calculation. The error introduced by this approximation can be quite significant, which makes the approach ineffective for measuring network jitter and its cumulative effect. The proposed Time-stamped Delay Factor method described below avoids decoding the transport stream, thereby reducing complexity and also allowing the measurement to be effective for non-transport-stream payload types.

3. Time-Stamped Delay Factor (TS-DF)

Network jitter can be significant, and a receiver must compensate by inserting delay into its playout buffer so that packets held up by the network can be processed. To minimize the playout delay, it is necessary to study the properties of the jitter and to use these to derive the minimum suitable playout delay [5]. To circumvent the limitations of the two jitter measurements described above, a new method of measuring jitter has been proposed that is effective for both CBR and VBR streams. This new method is based on correlating the arrival times of network packets with the time-stamp field in the RTP header. The algorithm can be adapted to other network protocols provided that they include time-stamp data. An example of such a protocol is the Time-Stamped Transport Stream, defined in [6]. There are various reasons why protocols with time-stamp data are recommended over raw UDP or other minimalist protocols. These include the ability to readily de-jitter and re-order packets at the receiver. De-jittering variable bit rate streams with no time-stamp data available is a highly complex task which involves constant examination of PCRs and interpolation of bit rates.
3.1 Method of measurement

The Time-stamped Delay Factor measurement is based on the Relative Transit Time defined in RFC 3550. The chosen measurement period is 1 second, which for typical stream rates of 1 Mbit/s and above provides a long enough period to measure jitter. In this algorithm, the first packet at the start of the measurement period is considered to have no jitter and is used as a reference packet. For each subsequent packet, i, which arrives within the measurement period, the Relative Transit Time between this packet and the reference packet is calculated as:

D(i,0) = (R(i) - R(0)) - (S(i) - S(0))

where R(i) is the arrival time of packet i and S(i) is its time-stamp, both expressed in the same units. At the end of the measurement period, the maximum and minimum values of D are extracted from the list of D values, and the Time-stamped Delay Factor is calculated as:

TS-DF = D(Max) - D(Min)

Unlike the jitter equation in RFC 3550, this algorithm does not use the 1/16 smoothing factor and therefore gives a very accurate instantaneous result. By using a one-second calculation window, it does not take into account any long-term drift effect. Compared with MDI:DF in RFC 4445, the Time-stamped DF takes into consideration both transmission information and received information, so any variation introduced by a variable bit rate will not appear in the final jitter calculation. Simulations and real-case test results show that TS-DF gives comparable results for CBR streams.
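The TS-DF computation for one measurement window can be sketched as follows. This is a minimal illustration under the same assumption as the equations above: arrival times R and time-stamps S are parallel sequences already converted to a common time unit, and the first packet of the window is the reference. The function name is illustrative.

```python
def ts_df(arrival_times, rtp_timestamps):
    """Time-stamped Delay Factor over one measurement window.

    Computes D(i,0) = (R(i) - R(0)) - (S(i) - S(0)) for every packet
    in the window, then returns max(D) - min(D).
    """
    r0, s0 = arrival_times[0], rtp_timestamps[0]
    d = [(r - r0) - (s - s0)
         for r, s in zip(arrival_times, rtp_timestamps)]
    return max(d) - min(d)
```

Because the reference packet always yields D = 0, a single packet arriving 5 units later than its time-stamp implies produces a TS-DF of 5 — the full peak deviation, with no smoothing.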
The only exception is the case of packet loss, to which the Time-stamped Delay Factor measurement is not sensitive. The second measurement in the MDI ratio, the Media Loss Rate, is the index that specifically measures packet loss. TS-DF only takes into consideration jitter introduced by the network.

4. Summary

The new network jitter measurement tool described here, the Time-stamped Delay Factor (TS-DF), is able to measure both constant bit rate and variable bit rate streams for media over IP. The algorithm can be applied not only to Transport-Stream-based traffic over IP but also to other media traffic, such as uncompressed streams over IP or voice over IP, as long as time-stamp information is present in the protocol headers.

5. Informative References

[1] RFC 4445, A Proposed Media Delivery Index (MDI)
[2] EBU Video contribution over IP (VCIP), Recommendation xxx (still in preparation)
[3] IP Broadcast guidelines (IPTVFJ STD-0004 v1.1), IPTV Forum Japan
[4] RFC 3550, RTP: A Transport Protocol for Real-Time Applications
[5] C. Perkins, RTP: Audio and Video for the Internet, ISBN 0-672-32249-8
[6] DLNA Networked Device Interoperability Guidelines, Volume 2: Media Format Profiles

6. Acknowledgements

The authors gratefully acknowledge the contributions of the EBU IP Measurement (IPM) working group, together with Andrew Lipscombe of BBC Research, Chris Prowse of Tektronix, and Thomas Kernen of Cisco.