Congestion control in TCP


If the transport entities on many machines send too many packets into the network too quickly, the network becomes congested and performance degrades as packets are delayed and lost. Controlling congestion to avoid this problem is the combined responsibility of the network and transport layers. Congestion occurs at routers, so it is detected at the network layer. However, congestion is ultimately caused by the traffic that the transport layer sends into the network, so the only effective way to control it is for the transport protocols to send packets into the network more slowly. The Internet relies heavily on the transport layer for congestion control, and specific algorithms are built into TCP and other protocols.

Bandwidth Allocation:
Instead of merely avoiding congestion, the goal is to allocate a good share of bandwidth to the transport entities that use the network; a good allocation delivers good performance.

Efficiency and Power
Efficient does not mean giving equal bandwidth to every transport entity. For example, if four entities share a link of 100 Mbps, it is wrong to assume that each will get 25 Mbps; they should usually be given less, because traffic is bursty. See the figure below. In figure (a), the number of packets delivered (goodput) initially increases at the same rate as the offered load, but as the load approaches capacity, goodput rises more gradually. This falloff occurs because bursts of traffic can occasionally mount up and cause losses at buffers inside the network. If the transport protocol is poorly designed and retransmits packets that have been delayed but not lost, the network can enter congestion collapse.

In figure (b), the delay is initially fixed, representing the propagation delay across the network. As the load approaches capacity, the delay rises, slowly at first and then much more rapidly. This is again because bursts of traffic tend to pile up at high load. The delay cannot really go to infinity, except in a model in which the routers have infinite buffers; instead, packets are lost after experiencing the maximum buffering delay. For both goodput and delay, performance begins to degrade at the onset of congestion. Intuitively, we obtain the best performance from the network if we allocate bandwidth up to the point at which the delay starts to climb rapidly. This point is below the capacity. To identify it, Kleinrock (1979) proposed the metric of power:

power = load / delay

Power initially rises with offered load, as delay remains small and roughly constant, but it reaches a maximum and then falls as delay grows rapidly. The load with the highest power represents an efficient load for the transport entity to place on the network. (A small numeric sketch of this metric appears after the Convergence discussion below.)

Convergence:
The congestion control algorithm should converge quickly to a fair allocation of bandwidth. Suppose flow 1 uses all of the bandwidth because it is the only flow. When flow 2 starts, the bandwidth is divided equally between flow 1 and flow 2. If flow 3 then starts but needs only 20% of the bandwidth, which is well below its fair share, flows 1 and 2 should adjust and converge quickly to 40% each, leaving a 20% share for the third flow. If at some later instant flow 2 leaves while flow 3 remains unchanged, flow 1 should again converge quickly to 80%. Thus, at all times the total allocated bandwidth is approximately 100%, so the network is fully used and competing flows get equal treatment. (A sketch of this target allocation also follows below.)
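To make the power metric concrete, here is a minimal Python sketch. The delay model and the numbers in it are illustrative assumptions, not values from the text; the point is only to show that power peaks at a load below capacity.

```python
# Illustrative sketch: find the offered load that maximizes Kleinrock's power metric
# (power = load / delay). The delay model below, in which delay blows up as load
# approaches capacity, is an assumption chosen only to show the shape of the curve.

def delay(load, capacity=1.0, propagation=0.01):
    """Queueing-style delay that grows without bound as load approaches capacity."""
    if load >= capacity:
        return float("inf")
    return propagation / (1.0 - load / capacity)

def power(load):
    return load / delay(load)

loads = [i / 100 for i in range(1, 100)]   # offered load from 1% to 99% of capacity
best = max(loads, key=power)
print(f"power peaks near load = {best:.2f} of capacity")   # well below 100%
```

With this particular delay model the peak falls at about half of capacity; the exact position depends on the delay curve, but it lies below full capacity, matching the intuition above.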

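The convergence example above has a simple target: flows that need less than an equal split keep their demand, and the rest share what remains equally. The following Python sketch computes that target allocation; the text does not prescribe this (or any) algorithm, so this is only an illustration of the end state the congestion control should converge to.

```python
# Illustrative max-min style allocation matching the convergence example:
# flow1 and flow2 want as much as possible, flow3 needs only 20 units out of 100.

def fair_shares(demands, capacity=100.0):
    shares = {}
    remaining = dict(demands)
    cap_left = capacity
    while remaining:
        equal = cap_left / len(remaining)
        # flows satisfied with less than an equal split are fixed at their demand
        small = {f: d for f, d in remaining.items() if d <= equal}
        if not small:
            for f in remaining:          # everyone left shares the rest equally
                shares[f] = equal
            return shares
        for f, d in small.items():
            shares[f] = d
            cap_left -= d
            del remaining[f]
    return shares

print(fair_shares({"flow1": 100, "flow2": 100, "flow3": 20}))
# flow3 gets 20, flow1 and flow2 get 40 each, totalling 100
```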
How to Regulate the Sending Rate:
There are a number of strategies for regulating the sending rate. The two main factors that limit it are:
i. Flow control - there is insufficient buffering at the receiver.
ii. Congestion control - there is insufficient carrying capacity in the network.
Figure (a) shows the flow-control case: as long as the sender does not send more packets than the buffer (bucket) can store, no packets are lost. Figure (b) tells a different story: here the limit is set not by the buffer (bucket) size but by the internal capacity of the network; if too many packets arrive too fast, the funnel (the network's internal buffering) overflows and packets are lost. The two cases may look similar, since both lead to loss, but their causes and solutions are different. Because both problems can arise, the transport layer must handle both and slow down the flow if either limit is reached.

The way the transport layer regulates the flow is by means of feedback returned by the network. The feedback mechanisms fall into several types:
i. In an explicit, precise design, routers tell the sources the exact rate at which they may send.
ii. In an explicit, imprecise design, routers set bits on packets that experience congestion to warn the sender to slow down, but they do not say by how much.
iii. In other designs there is no explicit signal; for example, FAST TCP measures the round-trip delay and uses it as a metric to avoid congestion.
iv. The most prevalent form of congestion control in the Internet is TCP with drop-tail or RED routers, in which packet loss is inferred and used as the signal that the network has become congested.

TCP Congestion Control
We have seen that it is up to the transport layer to adjust the sending rate based on some form of feedback.
Congestion window: The congestion window is a window whose size is the number of bytes the sender may have in the network at any time. The corresponding rate is the window size divided by the round-trip time of the connection. TCP adjusts the size of this window according to the AIMD (additive increase, multiplicative decrease) rule. Some highlights:
I. Two windows are maintained: a congestion window and a flow control window.
II. The flow control window specifies the number of bytes the receiver can buffer.
III. Both windows are tracked in parallel.
IV. The number of bytes that may be sent is the smaller of the two windows.
V. Thus the effective window is the smaller of what the sender thinks is all right and what the receiver thinks is all right.
VI. TCP stops sending data if either of the two windows is temporarily full.
For example, if the receiver says send 64 KB but the sender knows that bursts of more than 32 KB clog the network, it sends 32 KB. On the other hand, if the receiver says send 64 KB and the sender knows that bursts of up to 128 KB get through effortlessly, it sends the full 64 KB requested. (A small sketch of this rule follows.)
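The rule above amounts to taking the minimum of the two windows. Here is a minimal Python sketch; the variable names (cwnd, rwnd, bytes_in_flight) are illustrative, not TCP's actual implementation.

```python
# Sketch: the effective send window is the minimum of the congestion window (cwnd),
# which reflects what the sender thinks the network can absorb, and the receiver's
# advertised flow control window (rwnd). Names and byte units are illustrative.

def bytes_allowed_to_send(cwnd: int, rwnd: int, bytes_in_flight: int) -> int:
    effective_window = min(cwnd, rwnd)          # smaller of the two windows
    return max(0, effective_window - bytes_in_flight)

# Receiver allows 64 KB, but the sender believes bursts above 32 KB clog the network:
print(bytes_allowed_to_send(cwnd=32 * 1024, rwnd=64 * 1024, bytes_in_flight=0))  # 32768
```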

Slow Start
1. When a connection is established, the sender initializes the congestion window to a small initial value of at most four segments.
2. The sender then sends the initial window.
3. The packets take one round-trip time to be acknowledged.
4. For each segment that is acknowledged before the retransmission timer goes off, the sender adds one segment's worth of bytes to the congestion window. In addition, since that segment has been acknowledged, there is now one less segment in the network.
5. The upshot is that every acknowledged segment allows two more segments to be sent, so the congestion window doubles every round-trip time.
6. This algorithm is called slow start.
7. Despite the name, its growth is exponential.
8. Slow start is shown in the figure: in the first round-trip time, the sender injects one packet into the network (and the receiver receives one packet).
9. Two packets are sent in the next round-trip time, then four packets in the third round-trip time.
10. Slow start works well over a range of link speeds and round-trip times, and uses an ack clock to match the rate of sender transmissions to the network path.
(A short sketch of the doubling in step 5 follows.)
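As a rough illustration of the doubling, here is a minimal Python sketch of how the congestion window grows during slow start. The segment size, initial window, and threshold below are illustrative assumptions.

```python
# Sketch of slow-start growth: each acknowledged segment adds one segment's worth of
# bytes to the congestion window, so the window roughly doubles every round-trip time.
# MSS, the initial window, and ssthresh are assumed values for illustration only.

MSS = 1460                     # bytes per segment (assumed)
cwnd = 2 * MSS                 # small initial window of a couple of segments
ssthresh = 64 * 1024           # slow-start threshold (assumed)

rtt = 0
while cwnd < ssthresh:
    acked_segments = cwnd // MSS      # everything sent this RTT gets acknowledged
    cwnd += acked_segments * MSS      # each ACK grows the window by one MSS
    rtt += 1
    print(f"after RTT {rtt}: cwnd = {cwnd} bytes ({cwnd // MSS} segments)")
```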

Disadvantage of Slow Start
I. Slow start causes exponential growth, so eventually it sends too many packets into the network too quickly.
II. Queues then build up in the network.
III. When the queues are full, one or more packets are lost.
IV. After this happens, the TCP sender times out when an acknowledgement fails to arrive in time.

Slow Start Threshold
I. A slow start threshold is initially set arbitrarily high, to the size of the flow control window, so that it does not limit the connection.
II. TCP keeps increasing the congestion window in slow start until a timeout occurs or the congestion window exceeds the threshold (or the receiver's window is filled).
III. Whenever a packet loss is detected, for example by a timeout, the slow start threshold is set to half of the congestion window and the entire process is restarted.
IV. The idea is that the current window is too large because it caused congestion that is only now being detected by a timeout. Half of the window, which was used successfully at an earlier time, is probably a better estimate for a congestion window that is close to the path capacity but will not cause loss.
V. In the previous example, growing the congestion window to eight packets may cause loss, while the congestion window of four packets in the previous RTT was the right value. The congestion window is then reset to its small initial value and slow start resumes.
VI. Whenever the slow start threshold is crossed, TCP switches from slow start to additive increase.

Fast Retransmission:
Fast retransmission lets the sender treat the arrival of several duplicate acknowledgements (typically three) as a sign that a packet has been lost and retransmit it immediately, rather than waiting for a coarse-grained timeout.
I. After it fires, the slow start threshold is still set to half the current congestion window, just as with a timeout.
II. Slow start can be restarted by setting the congestion window to one packet.
III. With this window size, a new packet will be sent after the one round-trip time that it takes to acknowledge the retransmitted packet along with all the data that had been sent before the loss was detected.
(A minimal sketch of this loss handling follows.)
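To tie the threshold and loss handling together, here is a minimal, hedged Python sketch of the sender's reaction to acknowledgements and losses under the rules above. The constants, class, and method names are illustrative; real TCP implementations add many refinements (fast recovery, SACK, pacing, and so on).

```python
# Sketch of the loss response described above (illustrative names and constants):
# on a detected loss, ssthresh is set to half the current congestion window and the
# window restarts from one segment; below ssthresh the window grows exponentially
# (slow start), above it TCP switches to additive increase.

MSS = 1460                              # bytes per segment (assumed)

class SenderState:
    def __init__(self):
        self.cwnd = 4 * MSS             # small initial window of at most four segments
        self.ssthresh = 64 * 1024       # arbitrarily high at first, so it does not limit growth

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += MSS            # slow start: one MSS per ACK, doubling per RTT
        else:
            self.cwnd += MSS * MSS // self.cwnd   # additive increase: about one MSS per RTT

    def on_loss_detected(self):
        self.ssthresh = max(self.cwnd // 2, 2 * MSS)   # half the window that caused trouble
        self.cwnd = MSS                 # restart slow start from one segment
```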