ECE 333: Introduction to Communication Networks          Fall 2001
Lecture 28: Transport Layer III - Congestion control (TCP)

In the last lecture we introduced the topics of flow control and congestion control. Both refer to methods used to regulate the rate at which a source can send data. Flow control often denotes techniques that regulate a source's rate based on the buffer space available at the destination (independent of the network). Congestion control denotes techniques that regulate a source's rate based on congestion in the subnet. (This terminology is not universal, and in many cases no distinction is made between the two issues; indeed, both flow and congestion control can be viewed as parts of the same problem.)

We saw that in TCP, flow control is implemented by including a "receive window" in each TCP segment sent over the reverse link. This window is then used by the sliding-window ARQ protocol to provide flow control. Today we discuss congestion control. First we look at the problem of congestion and some general approaches to congestion control. Then we describe how congestion control is implemented in TCP.

Congestion

Congestion arises when the total load on the network becomes too large. This leads to queues building up and to long delays (recall the M/G/1 model). If sources retransmit messages, this can lead to even more congestion and eventually to congestion collapse. This is illustrated in the figure below: as the offered load increases, the number of packets delivered at first increases, but at high enough loads it rapidly decreases.

[Figure: packets delivered vs. offered load, showing throughput collapsing at high load.]

For example, the Internet (TCP/IP) originally did not implement congestion control. This led to a series of congestion collapses starting in 1986. In October of 1986, the data throughput between Lawrence Berkeley Laboratory and UC Berkeley dropped from 32 kbps to 40 bps [V. Jacobson, "Congestion Avoidance and Control", Proc. of SIGCOMM '88]. To avoid this type of behavior, congestion control was incorporated into TCP.

The first-order goal of congestion control is to avoid this type of congestion collapse. While avoiding congestion, it is still desirable to get the best performance possible. Congestion control can also be used to ensure that users get a desired QoS (e.g., throughput, delay). Another issue that congestion control needs to address is fairness between different sessions; in other words, sessions with the same requirements should receive equitable treatment from the network. (Exactly what constitutes fair treatment can be interpreted in several ways.)

Approaches to Congestion Control

Congestion control may be addressed at both the network layer and the transport layer. At the network layer, possible approaches include:

- Packet dropping - when a buffer becomes full, a router can drop waiting packets. If not coupled with some other technique, this can lead to greater congestion through retransmissions.
- Packet scheduling - certain scheduling policies may help in avoiding congestion; in particular, scheduling can help to isolate users that are transmitting at a high rate.
- Dynamic routing - when a link becomes congested, change the routing to avoid this link. This only helps up to a point (eventually all links become congested) and can lead to instabilities (see Lecture 23).
- Admission control/traffic policing - only allow connections in if the network can handle them, and make sure that admitted sessions do not send at too high a rate. This is only useful for connection-oriented networks.

An approach that can be used at either the network or transport layer is:

- Rate control - techniques where the source rate is explicitly controlled based on feedback from the network and/or the receiver. For example, routers in the network may send a source a "choke packet" upon becoming congested; when receiving such a packet, the source should lower its rate. (A toy sketch of this idea appears below.)

These approaches can be classified as either "congestion avoidance" approaches, if they try to prevent congestion from ever occurring, or "congestion recovery" approaches, if they wait until congestion occurs and then react to it. In general, "an ounce of prevention is worth a pound of cure." Different networks have used various combinations of these approaches. Traditionally, rate control at the transport layer has been used in the Internet, but new approaches are beginning to be used that incorporate some of the network-layer techniques discussed above.
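As a toy illustration of explicit rate control (this is not how TCP operates, and the class, method names, and constants below are made up for illustration), a source might halve its sending rate whenever it receives a choke indication and slowly ramp back up otherwise:

# Toy rate control driven by explicit "choke" feedback from the network.
# All names and constants are illustrative; real schemes differ in the details.

class RateControlledSource:
    def __init__(self, initial_rate_bps=64_000, min_rate_bps=1_000, max_rate_bps=1_000_000):
        self.rate_bps = initial_rate_bps
        self.min_rate_bps = min_rate_bps
        self.max_rate_bps = max_rate_bps

    def on_choke_packet(self):
        # Network signalled congestion: back off multiplicatively.
        self.rate_bps = max(self.min_rate_bps, self.rate_bps // 2)

    def on_interval_without_choke(self, increment_bps=8_000):
        # No congestion indication recently: probe for more bandwidth additively.
        self.rate_bps = min(self.max_rate_bps, self.rate_bps + increment_bps)


src = RateControlledSource()
src.on_choke_packet()              # congestion reported: rate drops to 32 kbps
src.on_interval_without_choke()    # later: rate creeps back up to 40 kbps
print(src.rate_bps)                # -> 40000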

Congestion control in TCP

TCP implements end-to-end congestion control; V. Jacobson proposed the basic TCP congestion control algorithm in 1988. IP does not provide any explicit congestion control information, so TCP detects congestion via the ACKs of the sliding-window ARQ algorithm used to provide reliable service. Specifically, assume that the RTT for a session is known, so that a proper timeout value can be used. When the source times out before receiving an ACK, the most likely reason is that a link became congested; TCP uses this as an indication of congestion and slows down the transmission rate.

TCP controls the transmission rate of a source by varying the window size used in the sliding-window protocol (the same technique used for flow control). Recall (see Lecture 10) that for a window size of W packets, the effective rate under a sliding-window protocol (ignoring errors) is given by

    R_eff = WdR/(d + IR)    (assuming W < 1 + (IR)/d),

where I is the round-trip time, d is the number of bits per packet, and R is the link transmission rate. Thus a smaller value of W results in a smaller effective rate.

Timeout time

Before discussing how TCP adjusts the rate when a timeout occurs, we first look at how TCP chooses a value for the timeout time. The timeout should be set approximately equal to the round-trip time, i.e. the time from when a segment is transmitted until an ACK for the segment is received. Choosing too small a timeout results in unneeded retransmissions, and choosing too large a value results in long delays before a packet is retransmitted. However, at the transport layer the round-trip time varies with each pair of hosts (or more precisely, with each route between a pair of hosts). The round-trip time also depends on the queueing delays within the network, which vary with each segment sent.

In TCP the timeout value is based on estimates of the round-trip time (RTT). Specifically, the RTT is measured for each packet sent. Let Est_RTT be the current estimate of the RTT and let Sample_RTT be a new measured value. The new estimate of the RTT is then formed as follows:

    Est_RTT = α(Est_RTT) + (1-α)(Sample_RTT)

Here α is a parameter between 0 and 1; a typical choice gives the new sample a weight of 0.125 (i.e. α = 0.875). Thus the transmitter's estimate of the RTT is an exponentially weighted moving average of the sampled RTTs.

The timeout value used by TCP is equal to Est_RTT plus a margin to account for variations due to changes in queueing delays. This margin is related to the variance of the RTT; in other words, a connection with a large variance in the RTT will have a larger margin. Specifically, in TCP

    Timeout = Est_RTT + 4(Dev)

where Dev is also recursively calculated for each new sample using

    Dev = (1-β)(Dev) + β|Sample_RTT - Est_RTT|

with β another smoothing parameter between 0 and 1. (This quantity is not the true variance or standard deviation, but a related quantity called the mean deviation. It appears to have been chosen because it is easier to calculate.) When |Sample_RTT - Est_RTT| is large, Dev will increase, leading to a larger margin for the timeout, as desired.

Next we look at how TCP changes the rate at which a source can send data. This is done by varying the window size used by the sliding-window algorithm. The window size is the largest number of unacknowledged data bytes that a source can have outstanding at any time. Recall that in TCP data is viewed as a byte stream, and all sequence numbers refer to byte numbers. The largest segment that a TCP implementation can send is referred to as the Maximum Segment Size (MSS); the value of the MSS can vary with the implementation. For congestion control, TCP requires each source to keep track of a congestion window, W_C, which indicates the maximum number of MSS-sized segments that the source can send, based on congestion. The window size used by TCP is then given by min(W_C * MSS, R_w), where R_w is the receive window used for flow control (see Lecture 27). In the following we assume that R_w is very large and can be ignored, so that we can focus on the behavior of the congestion window. We also assume that the transmitter always has data available to send, and that each segment sent contains MSS bytes. With these assumptions, we describe the basic idea of TCP congestion control.
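Before moving on to the window adaptation itself, here is a minimal Python sketch of the adaptive timeout computation described above (the smoothed RTT, the mean deviation, and Timeout = Est_RTT + 4 Dev). The class name, the initial values, and the specific gains 0.125 and 0.25 are illustrative choices, not values dictated by the lecture.

# Minimal sketch of TCP's adaptive retransmission timeout estimate.
# Gains and initial values are common illustrative choices.

class RttEstimator:
    def __init__(self, first_sample_rtt):
        self.est_rtt = first_sample_rtt   # smoothed RTT estimate (seconds)
        self.dev = first_sample_rtt / 2   # smoothed mean deviation (seconds)

    def update(self, sample_rtt):
        # Mean deviation: smoothed |Sample_RTT - Est_RTT|
        self.dev = 0.75 * self.dev + 0.25 * abs(sample_rtt - self.est_rtt)
        # Exponentially weighted moving average of the RTT samples
        self.est_rtt = 0.875 * self.est_rtt + 0.125 * sample_rtt

    def timeout(self):
        # Timeout = Est_RTT + 4 * Dev, so connections with more RTT
        # variability get a larger safety margin.
        return self.est_rtt + 4 * self.dev


est = RttEstimator(first_sample_rtt=0.100)   # first measured RTT: 100 ms
for s in (0.110, 0.090, 0.200):              # subsequent RTT samples
    est.update(s)
print(round(est.timeout(), 3))               # current timeout value, in seconds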

TCP Congestion control

TCP tries to adaptively find the "right" congestion window size for the network. If the window is too large, the network becomes congested; if it is too small, the connection is inefficient. Moreover, the "right" window size may change over time, depending on the network load. When segments are successfully acknowledged without the sender timing out, TCP increases the window size. Whenever TCP times out while waiting for a segment, it decreases the window size. In this manner TCP continually adapts the window size.

In addition to the congestion window, the sender also keeps track of a parameter called the threshold; we denote this parameter by T_C. Both W_C and T_C are adapted over time. Initially a connection sets W_C = 1; assume that T_C is some value greater than 1. When segments are acknowledged before the transmitter times out, TCP increases the window size using the following rules:

- If W_C < T_C, then set W_C = W_C + 1 for every successful ACK that is received.
- If W_C ≥ T_C, then set W_C = W_C + 1 after W_C successful ACKs are received.

We see that starting at W_C = 1, the congestion window increases by one for every ACK received until it becomes larger than the threshold; this phase is called slow start. After the congestion window becomes larger than the threshold, the congestion window is increased by one for every congestion window's worth of ACKs received; this phase is called the congestion avoidance phase. Assume that in approximately one RTT a congestion window's worth of data can be sent and acknowledged (this is reasonable if the time to send a window's worth of data is much less than the RTT). With this assumption, during the slow start phase the congestion window doubles during each RTT, i.e. it grows exponentially fast. During the congestion avoidance phase the window increases by 1 during each RTT, i.e. it grows at a linear rate.

When the transmitter times out, it does the following: set T_C = W_C/2 and set W_C = 1. In other words, the threshold is set to half the value of the congestion window before the timeout occurred, and the congestion window is reduced to 1 (causing the algorithm to re-enter the slow start phase).
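As an illustration, a minimal Python sketch of these update rules (essentially the Tahoe-style behavior described so far) could look as follows. Counting ACKs explicitly during congestion avoidance and the initial threshold of 16 segments are illustrative simplifications, not values from the lecture.

# Sketch of the congestion-window rules described above (Tahoe-like behavior).
# W_C counts MSS-sized segments; the initial threshold is an arbitrary choice.

class CongestionWindow:
    def __init__(self, initial_threshold=16):
        self.w_c = 1                    # congestion window, in MSS-sized segments
        self.t_c = initial_threshold    # slow-start threshold
        self._acks_this_window = 0      # ACKs counted during congestion avoidance

    def on_ack(self):
        if self.w_c < self.t_c:
            # Slow start: +1 per ACK, so the window doubles roughly every RTT.
            self.w_c += 1
        else:
            # Congestion avoidance: +1 per window's worth of ACKs (+1 per RTT).
            self._acks_this_window += 1
            if self._acks_this_window >= self.w_c:
                self.w_c += 1
                self._acks_this_window = 0

    def on_timeout(self):
        # Timeout taken as a sign of congestion: halve the threshold,
        # then restart from a window of 1 (re-entering slow start).
        self.t_c = max(1, self.w_c // 2)
        self.w_c = 1
        self._acks_this_window = 0

    def send_window_bytes(self, mss, receive_window):
        # The window actually used is limited by both congestion and flow control.
        return min(self.w_c * mss, receive_window)

Driving on_ack() once per acknowledged segment and on_timeout() on each retransmission timeout reproduces the exponential-then-linear growth and the sharp drop shown in the figures below.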

An example of this algorithm is shown below. The slow start phase is shown in blue and the congestion avoidance phase is shown in green. In this figure one timeout occurred, as indicated. Here T_C(1) indicates the initial threshold and T_C(2) indicates the threshold after the timeout occurs.

[Figure: congestion window W_C vs. time, with the timeout, T_C(1), and T_C(2) marked.]

An actual trace of this algorithm is shown next; again the vertical axis is the congestion window and the horizontal axis is time.

[Figure: measured trace of the congestion window vs. time.]

Notice that if congestion is detected, then at the end of the next slow start phase the congestion window will be 0.5 times its value before congestion occurred. Thus, if we ignore the slow start phase, TCP simply increases the congestion window additively each time no congestion is detected and decreases it multiplicatively each time congestion is detected. For this reason, TCP is described as an Additive-Increase, Multiplicative-Decrease (AIMD) algorithm. This type of algorithm has been shown to have some nice fairness properties.

We illustrate this with an example of two TCP connections sharing a single link; we assume that both sessions have the same MSS and RTT, and that both always have data to send. Assume that the link has a capacity of C MSS-sized segments (i.e. if the sum of the rates (window sizes) of the two users exceeds C, then congestion sets in). The figure below illustrates how an AIMD algorithm drives the two users to a fair (equal) allocation of the link. The axes are the congestion windows of the two sessions. Both increase additively until the capacity is exceeded, then decrease multiplicatively toward the origin. This behavior drives the congestion windows toward the fair (equal) rate line W_C1 = W_C2 over time.

[Figure: congestion window of session 1 (W_C1) vs. congestion window of session 2 (W_C2), with the capacity line W_C1 + W_C2 = C and the fairness line W_C1 = W_C2.]
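The convergence toward the fair allocation can be seen in a small simulation. The sketch below (with an arbitrary capacity and arbitrary, very unequal starting windows) applies the additive-increase, multiplicative-decrease rule to two sessions sharing one link.

# Toy AIMD simulation for two sessions sharing a link of capacity C (in
# MSS-sized segments). Capacity and initial windows are arbitrary choices.

def aimd_two_sessions(c=50, w1=5.0, w2=40.0, rounds=200):
    for _ in range(rounds):
        if w1 + w2 > c:
            # Congestion: both sessions halve their windows (multiplicative decrease).
            w1, w2 = w1 / 2, w2 / 2
        else:
            # No congestion: both sessions add one segment per RTT (additive increase).
            w1, w2 = w1 + 1, w2 + 1
    return w1, w2

w1, w2 = aimd_two_sessions()
print(round(w1, 1), round(w2, 1))   # the two windows end up nearly equal

Each decrease halves the gap between the two windows while each increase leaves it unchanged, so plotting (w1, w2) after every round traces a path toward the W_C1 = W_C2 line described above.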

Versions of TCP

The preceding description of TCP congestion control roughly corresponds to the version of TCP called "Tahoe". Several other versions exist. For example, in TCP Reno the algorithm is modified to support fast retransmission, where a packet can be retransmitted after three duplicate ACKs are received (before timing out). A fast retransmission is followed by fast recovery, where TCP essentially skips the slow start phase. Another version of TCP is called "Vegas". In Vegas the size of the congestion window is based on the history of the RTTs observed by the sender; for example, when the change in the RTT becomes too large, the sender decreases the congestion window. The idea is to detect the onset of congestion before a timeout occurs and thereby achieve higher throughput.

TCP header

The header fields for TCP are shown below (each row is 32 bits wide):

    Source port (16 bits)            | Destination port (16 bits)
    Sequence number (32 bits)
    Acknowledgement number (32 bits)
    TCP header length | flags: URG ACK PSH RST SYN FIN | Window size (16 bits)
    Checksum (16 bits)               | Urgent pointer (16 bits)
    Options (0 or more 32-bit words)
    Data (optional)

The header length is measured in 32-bit words. The URG, ACK, PSH, RST, SYN, and FIN fields are one-bit flags. The ACK bit indicates that the Acknowledgement field contains a valid acknowledgement. The RST bit is used to reset a connection. The SYN bit is used during connection establishment: a connection request packet has SYN=1, ACK=0, while a connection accept packet has SYN=1, ACK=1 and carries the sequence number of the connection request packet in the Acknowledgement field. The FIN bit is used to release a connection. The PSH and URG fields are rarely used. As in UDP, the TCP checksum is calculated over the TCP header, the data, and part of the IP header.
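As a small illustration of this layout, the fixed 20-byte part of a TCP header can be unpacked with Python's struct module. The field grouping follows the diagram above; the function name and the example bytes are made up for illustration.

import struct

def parse_tcp_header(segment: bytes):
    """Parse the fixed 20-byte TCP header (options, if any, follow it)."""
    (src_port, dst_port, seq_num, ack_num,
     offset_flags, window, checksum, urgent_ptr) = struct.unpack("!HHIIHHHH", segment[:20])

    header_len_words = offset_flags >> 12          # header length, in 32-bit words
    flags = offset_flags & 0x3F                    # low 6 bits: URG ACK PSH RST SYN FIN
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq_num": seq_num,
        "ack_num": ack_num,
        "header_len_bytes": header_len_words * 4,
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10), "PSH": bool(flags & 0x08),
        "RST": bool(flags & 0x04), "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent_ptr,
    }

# A made-up connection request (SYN) segment: ports 12345 -> 80, sequence number
# 1000, header length 5 words, SYN flag set, window 65535.
example = struct.pack("!HHIIHHHH", 12345, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(example)["SYN"])   # -> True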