An experimental study of the efficiency of Explicit Congestion Notification


2011 Panhellenic Conference on Informatics

An experimental study of the efficiency of Explicit Congestion Notification

Stefanos Harhalakis, Nikolaos Samaras, Department of Applied Informatics, University of Macedonia, Thessaloniki, Greece, v13@acm.org, samaras@uom.gr
Vasileios Vitsas, Department of Informatics, TEI of Thessaloniki, Thessaloniki, Greece, vitsas@it.teithe.gr

Abstract: Explicit Congestion Notification (ECN) is an addition to the Internet Protocol and the Transmission Control Protocol (TCP) that aims at improving the performance of TCP and other transport layer protocols. ECN is only meaningful when combined with bottleneck links that use Active Queue Management (AQM). Random Early Detection (RED) and other AQM mechanisms have been the subject of criticism regarding their ability to improve the behavior of bottleneck links. This paper is an experimental study of Drop-Tail, RED and RED+ECN when operating over a constantly congested link. The paper studies the performance and the efficiency of TCP under constant congestion when using those methods and shows that ECN actually improves the efficiency of TCP without harming its performance.

Index Terms: Active Queue Management; Random Early Detection; Internet Protocol; Transmission Control Protocol; Explicit Congestion Notification

I. INTRODUCTION

Internet traffic traverses intermediate routers and their interface queues which, in most cases, are First-In-First-Out (FIFO) queues or Active Queue Management (AQM) [1] queues and constitute a shared resource [15]. While FIFO queues are simplistic drop-tail queues, AQM queues are more sophisticated and attempt to proactively indicate congestion to endpoints. Random Early Detection (RED) [1] is the most common AQM method with wide vendor support. When using AQM, packet drops are the most common approach to congestion indication. They are used to inform endpoints that congestion is imminent and that they should take immediate action to reduce their transmission rate.
This approach is compatible with most transport layer protocols (like the Transmission Control Protocol [13]) that appropriately react to congestion indication. Explicit Congestion Notification (ECN) [14] is an addition to the Internet Protocol [12] that can be used with AQM queues in order to reduce or even eliminate packet drops for congestion indication. When using ECN, packets that carry data from a transport layer protocol that supports ECN are marked instead of being dropped by an AQM queue, when the queue needs to proactively indicate congestion. Currently, ECN may only be used by transport layer protocols that (a) implement congestion control and (b) are appropriately extended to take advantage of ECN.

Previous work has shown that the benefits of using AQM by itself on congested links may be minor or non-existent [2], [7], [3], both for the Internet provider and the end users. That work focuses mostly on AQM's usage when it is not combined with ECN and examines whether there are performance or fairness benefits from the transition from FIFO to AQM. However, the benefits of using RED with ECN are still valid and no major concerns seem to have been expressed so far.

This paper presents the results of an experimental study of ECN usage. It shows that ECN can be used to practically eliminate retransmissions and achieve nearly 100% efficiency even on heavily loaded links. ECN's performance seems to remain constant with either a small or large number of background TCP connections. The results show that when using FIFO queues or RED queues without ECN, the efficiency drops to nearly %, while when using RED and ECN the efficiency remains constant at approximately 100%.

The rest of the paper is organized as follows: section II briefly introduces the required background for AQM, RED and ECN, while sections III and IV describe the experiment and the experimental facility respectively. Section V presents and discusses the results and section VI includes the conclusions.

II.
BACKGROUND

FIFO is a simplistic approach to queuing that is also known as Drop Tail and is best viewed as a single buffer. Whenever there is available buffer space, data are queued. When the buffer space is exhausted, excess data are dropped. This method is resilient to variations of traffic rates and to traffic bursts, and allows the interfaces to reach their maximum transmission rate because data are always available for transmission. However, in the case of persistent congestion, FIFO (drop-tail) queues are considered to be inadequate [1]. Because of the way congestion control is accomplished on the Internet, FIFO queues result in global synchronization, meaning that the traffic rate is reduced altogether and then increased again instead of being maintained constant. The reaction of congestion control mechanisms also lags behind any congestion indication because of the end-to-end delay (Round-Trip Time, RTT). This has the side effect of congestion not being addressed early enough, thus causing excess packet drops, which are considered harmful

978-0-7695-4389-5/11 $26.00 © 2011 IEEE  DOI 10.1109/PCI.2011.76

both because they needlessly consume router and endpoint resources and because they slow down data transfers. Active Queue Management (AQM) attempts to alleviate the problems of drop-tail queues by providing congestion indication to endpoints before buffer space is exhausted. Random Early Detection (RED) is the most common AQM mechanism with wide vendor adoption. In order to avoid congestion and prevent global synchronization, RED probabilistically starts dropping packets whenever the average queue length reaches a soft limit and becomes a drop-tail mechanism whenever the queue length reaches a hard limit [4]. This way RED indicates congestion when it starts to occur instead of waiting for the queue to become full. Because of the probabilistic early congestion indication, RED compensates for the end-to-end delay, allowing the indication to reach the transmitting endpoint before the queue becomes full. In contrast with FIFO queues, where packets are dropped when buffer space is exhausted, RED queues drop packets in order to indicate congestion even when there is available buffer space. This is based on the assumption that endpoints will react to congestion indication in time and will lower their transmission rates in order to avoid congestion.

Explicit Congestion Notification (ECN) is an addition to the Internet Protocol (IP) that complements AQM, allowing for congestion indication without dropping packets. When ECN is used, intermediate routers mark packets instead of dropping them, unless they have reached their maximum queue size. This effectively reduces packet drops, causing fewer retransmissions, which results in better network efficiency and protocol behavior. ECN's specification dictates that only packets that carry an ECN-capable upper layer protocol should be marked instead of being dropped. Those IP packets are distinguished by an ECN Capable Transport (ECT) codepoint that is set by the transmitting node using the two bits next to the DiffServ IP header field.
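The RED decision described above can be sketched as follows. This is a simplified illustration of the classic algorithm of Floyd and Jacobson [4] (EWMA queue averaging, linear marking probability between the two thresholds), not the actual Linux implementation; the EWMA weight `W_Q` and the function names are illustrative, while the thresholds reuse the values from section III.

```python
import random

# Illustrative RED parameters (threshold values taken from section III).
MIN_TH = 175_000      # soft limit (bytes): probabilistic indication starts here
MAX_TH = 614_000      # at this average queue length the probability reaches MAX_P
HARD_LIMIT = 877_000  # beyond this the queue behaves like drop-tail
MAX_P = 0.10          # maximum drop/mark probability
W_Q = 0.002           # EWMA weight for the average queue length (illustrative)

avg_queue = 0.0       # low-pass filtered queue length

def on_enqueue(queue_len_bytes):
    """Return 'enqueue', 'indicate' (drop, or ECN-mark) or 'drop' for an arrival."""
    global avg_queue
    # Average the instantaneous queue length so that short bursts are tolerated
    # and only persistent congestion triggers an indication.
    avg_queue = (1 - W_Q) * avg_queue + W_Q * queue_len_bytes
    if queue_len_bytes >= HARD_LIMIT:
        return 'drop'                      # buffer exhausted: plain tail drop
    if avg_queue < MIN_TH:
        return 'enqueue'                   # no congestion indicated
    if avg_queue < MAX_TH:
        # Indication probability grows linearly from 0 to MAX_P between the thresholds.
        p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
        return 'indicate' if random.random() < p else 'enqueue'
    return 'indicate'                      # average beyond the maximum threshold
```

Whether 'indicate' means a drop or an ECN mark depends on the arriving packet, as described next.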
When those packets traverse AQM-based queues they are marked with the Congestion Encountered (CE) codepoint instead of being dropped, unless there is no available buffer space. This marking is handled by the upper layer protocol at the receiver, which informs the transmitter of impending congestion. In the case of the Transmission Control Protocol (TCP), this is accomplished by using the ECN Echo (ECE) flag. Non-ECT traffic is never marked and is always dropped in order to indicate impending congestion. ECN requires support from the transport protocol and may only be used with IP packets that carry data from such a protocol. More specifically, ECN may only be used when the transport protocol supports congestion control and is willing to take advantage of ECN. In the case of TCP this willingness is determined at the initial three-way handshake, where ECN usage is negotiated. In the case of the Stream Control Transmission Protocol (SCTP) [16], a similar approach is taken by including a reserved TLV (type 0x8000) in the INIT and INIT ACK chunks.

III. EXPERIMENT DESCRIPTION

The purpose of the experiment was to study RED, RED with ECN and Drop-Tail, and to compare their efficiency. In order to perform the tests we used the experimental facility that is described in section IV. The experiment was performed using some fixed values and some varying parameters.
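The codepoint values involved are the two ECN bits defined by RFC 3168 [14]; the helper function below is purely illustrative and only restates the router behavior just described.

```python
# ECN codepoints in the two bits next to the DSCP field, as defined in RFC 3168.
NOT_ECT = 0b00  # transport is not ECN-capable
ECT_1   = 0b01  # ECN-Capable Transport, codepoint ECT(1)
ECT_0   = 0b10  # ECN-Capable Transport, codepoint ECT(0)
CE      = 0b11  # Congestion Encountered: set by the router instead of dropping

def congestion_indication(ecn_bits):
    """What an ECN-aware AQM queue does once it decides to indicate congestion."""
    if ecn_bits == NOT_ECT:
        return 'drop'   # non-ECT traffic is always dropped
    return 'mark CE'    # ECT traffic is marked; the receiver echoes this via ECE
```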
Constant parameters were:

For Drop-Tail:
- FIFO in byte mode (bfifo)
- Buffer size: 307 KBytes

For RED:
- Minimum threshold: 175 KBytes (20% of the limit)
- Maximum threshold: 614 KBytes (70% of the limit)
- Hard limit: 877 KBytes
- Burst: 229 packets
- Maximum drop/mark probability: 10%
- Average packet size: 1400 bytes

For the Internet emulation:
- Delay: 50 ms
- Delay jitter: 10 ms
- Delay correlation: 25%

For the TCP connections:
- The cubic [11] congestion control algorithm
- Selective Acknowledgments (SACKs) [10]
- TCP Timestamps for proper Round Trip Time Measurement (RTTM) [8]

The queue sizes for bfifo and RED were calculated in order to result in a 300 ms delay when the link was congested. For RED, the Burst was calculated using the formula:

Burst = (2 * MINth + MAXth) / (3 * AvgPacket)

as proposed by the tc command manual page. However, since the link was constantly congested, this value had no practical effect on the results.

Parameters that were varied were:
- The number of background TCP connections: 2, 10, 20, 40 and 80
- The ratio between ECN-enabled and non-ECN-enabled background TCP connections: we used 100%, 50% and 0% (all, half and none)
- The bottleneck link's queue type: RED and byte-mode FIFO (bfifo)

The measurements were performed on an additional TCP connection. When using RED, the tests were performed both with and without ECN. During the tests we measured:
- The TCP goodput
- The total bytes transmitted

By measuring the TCP goodput and the transmitted bytes we were able to determine TCP's retransmissions and to calculate its efficiency. In order to perform the measurements we implemented a client-server program using the C language and the Pcap library. The program used all the required methods in order to provide accurate results. Great care was
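As a quick sanity check, plugging the parameters above into the tc-red burst formula reproduces the 229-packet value used in the experiment (taking 1 KByte as 1000 bytes, which matches the listed thresholds):

```python
# Burst = (2*MIN_th + MAX_th) / (3*AvgPacket), per the tc-red manual page.
MIN_TH = 175_000     # bytes
MAX_TH = 614_000     # bytes
AVG_PACKET = 1_400   # bytes

burst = (2 * MIN_TH + MAX_TH) // (3 * AVG_PACKET)
print(burst)  # 229, the value used in the experiment
```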

Fig. 1: Testing facility setup

taken in order for the timings and measurements to be correct when waiting for the ACK of the last FIN segment of each connection. For each test that was run we took measurements only for one TCP connection.

IV. FACILITY SETUP

The setup of the facility is shown in figure 1 and consists of:
- Two servers constantly transmitting background traffic.
- Two clients receiving the background traffic.
- Three routers that provide the underlying network (BN1, BN2, GW).
- A 10 Mbps bottleneck link between BN1 and BN2 with either RED or bfifo queues.
- A 100 Mbps link that emulates Internet delays between BN2 and GW, using netem queues.
- The local network of the Department of Informatics of TEI of Thessaloniki, and a 100 Mbps connection to it via which the clients were communicating with the servers.

The above setup was used in order to:
1) Have well-behaved background traffic. Well-behaved traffic is considered all traffic that appropriately reacts to congestion indication (e.g. TCP traffic, Stream Control Transmission Protocol [16] traffic, Datagram Congestion Control Protocol [9] traffic and traffic that supports TCP-Friendly Rate Control [5]). This was accomplished by using a number of parallel TCP connections that constantly downloaded data. The TCP connections were using the cubic congestion control algorithm [11], which is the default in modern Linux systems.
2) Have a bottleneck link where the data rate is limited by the hardware capabilities and not by software (i.e. traffic shaping).
3) Have the ability to emulate Internet delays in order to test background TCP traffic against both low and high Bandwidth-Delay-Product (BDP) paths, since different BDP paths may result in different protocol behavior.
4) Have separate clients with distinct roles. This means distinguishing ECN-enabled and non-ECN-enabled clients.

All experiments were performed using the Debian GNU/Linux operating system with the 2.6.32 kernel for the clients, the servers and the intermediate routers.
By using the Linux operating system for intermediate routers, clients and servers, we were able to measure and monitor traffic in great detail and ensure that the selected parameters were correct (e.g. that there was no global synchronization). The administration of the interface queues is part of the traffic control (tc) facility of the Linux operating system. The actual queues are called queuing disciplines (qdiscs) and they are attached to network interfaces. Every network interface is required to have an attached qdisc, which by default is a packet-based FIFO qdisc. For the experiments the following qdiscs were used:
- The RED qdisc at the endpoints of link B (bottleneck link) for the AQM tests. The Linux kernel implements byte-mode RED.
- The bfifo (byte FIFO) qdisc at the endpoints of link B for the FIFO tests.
- The Network Emulator (netem) [6] qdisc, in order to add artificial delays and emulate the Internet.

Remote access to the machines was performed using out-of-band connections that didn't traverse the bottleneck link. We also considered the effects of the Ethernet driver ring buffer: while it is possible to fully customize the qdiscs of each interface, another buffer may exist at a lower level. The underlying networking driver most probably includes a buffer of its own (referred to as a ring buffer) which is large enough to affect the results. The Intel e1000 driver, for example, uses a 128-packet buffer by default. The size of this ring buffer in the e1000 driver can be reduced to 48 packets (but not less), which still causes 54 ms of latency when filled. In order to get accurate results this value was also considered when determining the RED parameters (i.e. queue size thresholds were downshifted by that amount of packets).

V. RESULTS

Figure 2 shows the efficiency of TCP when operating over a congested link. The X axis indicates the number of background TCP connections and the Y axis indicates the
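For reference, the qdisc configuration described above maps onto tc invocations along the following lines. This is a sketch, not the authors' exact commands: the interface names are illustrative, and the parameter values are taken from section III.

```shell
# RED on the bottleneck interfaces; the "ecn" flag makes the qdisc mark ECT
# packets instead of dropping them. Byte values are from section III.
tc qdisc add dev eth0 root red limit 877000 min 175000 max 614000 \
    avpkt 1400 burst 229 probability 0.10 bandwidth 10mbit ecn

# Byte-mode FIFO (bfifo) for the Drop-Tail tests: a 307 KByte buffer.
tc qdisc replace dev eth0 root bfifo limit 307200

# netem on the delay-emulation link: 50 ms delay, 10 ms jitter, 25% correlation.
tc qdisc add dev eth1 root netem delay 50ms 10ms 25%
```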

efficiency percentage. The efficiency is calculated as:

Efficiency = UsefulBytes / TransferredBytes

where UsefulBytes is the number of useful bytes that were transmitted and TransferredBytes is the total number of bytes that were transmitted. Measurements were performed at the transmitting node and only account for the actual payload and not for the TCP, IP and frame headers. The three graphs show what happens (a) when the background TCP connections do not use ECN (figure 2a), (b) when half of the background connections use ECN (figure 2b) and (c) when all of the background connections use ECN (figure 2c). The graphs show that:
1) When using RED, efficiency is improved by about 1 percentage point.
2) When using RED along with ECN, retransmissions are practically eliminated and TCP becomes nearly 100% efficient.

Furthermore, the graphs show that ECN usage by the background traffic does not significantly affect TCP's efficiency (i.e. the efficiency is not altered when the background traffic is using ECN).

Figure 3 shows the amount of data that the monitored TCP session transferred (a) when the background TCP connections did not use ECN (figure 3a), (b) when half of the background connections used ECN (figure 3b) and (c) when all of the background connections used ECN (figure 3c). All graphs of figure 3 use a logarithmic scale for the Y axis. By studying the graphs we infer that ECN did not cause any performance drop but instead resulted in increased performance in some cases.

Fig. 2: TCP efficiency. (a) Efficiency when background traffic was not using ECN; (b) efficiency when 50% of background traffic was using ECN; (c) efficiency when all of the background traffic was using ECN.

Fig. 3: TCP goodput. (a) TCP goodput when background traffic was not using ECN; (b) TCP goodput when 50% of background traffic was using ECN; (c) TCP goodput when all of the background traffic was using ECN.
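The efficiency metric can be computed directly from the two measured quantities; a hypothetical worked example (the byte counts below are invented for illustration, not measurements from the paper):

```python
# Efficiency = UsefulBytes / TransferredBytes, payload only (headers excluded).
def tcp_efficiency(useful_bytes, transferred_bytes):
    return useful_bytes / transferred_bytes

# A transfer that puts 105 MB on the wire to deliver 100 MB of payload
# (i.e. 5 MB of retransmissions) is about 95.2% efficient:
print(round(100 * tcp_efficiency(100_000_000, 105_000_000), 1))  # 95.2
```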

VI. CONCLUSIONS

AQM and ECN appear to be a great improvement for the Internet over Drop Tail queues. The ability to proactively indicate congestion without actually dropping packets allows for more efficient usage of congested Internet resources without causing any performance drop. This paper presented experimental results which showed that TCP becomes nearly 100% efficient when ECN is being used, even on very congested links. While the efficiency of TCP drops to nearly % when using Drop Tail (FIFO) and RED queues, it remains constant when using RED and ECN. Future work should further study the efficiency of RED with ECN by varying other parameters like the targeted link delay, the Internet delay and the congestion control algorithm.

REFERENCES

[1] B. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski, and L. Zhang. Recommendations on Queue Management and Congestion Avoidance in the Internet. RFC 2309 (Informational), April 1998.
[2] Christof Brandauer, Gianluca Iannaccone, Christophe Diot, Thomas Ziegler, Serge Fdida, and Martin May. Comparison of tail drop and active queue management performance for bulk-data and web-like internet traffic. In Proceedings of the IEEE Symposium on Computers and Communications, 2001.
[3] Mikkel Christiansen, Kevin Jeffay, David Ott, and F. Donelson Smith. Tuning RED for web traffic. IEEE/ACM Trans. Netw., 9:249-264, June 2001.
[4] Sally Floyd and Van Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking, 1(4):397-413, 1993.
[5] M. Handley, S. Floyd, J. Padhye, and J. Widmer. TCP Friendly Rate Control (TFRC): Protocol Specification. RFC 3448 (Proposed Standard), January 2003. Obsoleted by RFC 5348.
[6] S. Hemminger. Network emulation with netem. In Linux Conf Au, April 2005.
[7] Gianluca Iannaccone, Martin May, and Christophe Diot. Aggregate traffic performance with active queue management and drop from tail.
SIGCOMM Comput. Commun. Rev., 31:4-13, July 2001.
[8] V. Jacobson, R. Braden, and D. Borman. TCP Extensions for High Performance. RFC 1323 (Proposed Standard), May 1992.
[9] E. Kohler, M. Handley, and S. Floyd. Datagram Congestion Control Protocol (DCCP). RFC 4340 (Proposed Standard), March 2006. Updated by RFCs 5595, 5596.
[10] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow. TCP Selective Acknowledgment Options. RFC 2018 (Proposed Standard), October 1996.
[11] I. Rhee and L. Xu. CUBIC: A New TCP-Friendly High-Speed TCP Variant. In PFLDNet 2005, February 2005.
[12] J. Postel. Internet Protocol. RFC 791 (Standard), September 1981. Updated by RFC 1349.
[13] J. Postel. Transmission Control Protocol. RFC 793 (Standard), September 1981. Updated by RFCs 1122, 3168.
[14] K. Ramakrishnan, S. Floyd, and D. Black. The Addition of Explicit Congestion Notification (ECN) to IP. RFC 3168 (Proposed Standard), September 2001.
[15] Rayadurgam Srikant. The Mathematics of Internet Congestion Control (Systems and Control: Foundations and Applications). Springer-Verlag, 2004.
[16] R. Stewart, Q. Xie, K. Morneault, C. Sharp, H. Schwarzbauer, T. Taylor, I. Rytina, M. Kalla, L. Zhang, and V. Paxson. Stream Control Transmission Protocol. RFC 2960 (Proposed Standard), October 2000. Obsoleted by RFC 4960, updated by RFC 3309.