Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015

I'd like to complete our exploration of TCP by taking a close look at the topic of congestion control in TCP. To prepare for this, you should read the discussion of congestion control presented in sections 6.1-6.4 of our text. There are several additional readings. The early part of the readings from the text, together with some of the material in section 6.4.2, describes techniques that can be used in individual routers to address congestion issues. The RFC "Recommendations on Queue Management and Congestion Avoidance in the Internet" discusses the relationships between and significance of these techniques.

At the end of section 6.4, Peterson and Davie discuss a proposed alternative to the standard TCP congestion control mechanisms. This scheme, known as TCP Vegas, was first described in a paper authored by Peterson and a student named Lawrence Brakmo. The paper, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," will be one of our readings this week. The techniques the paper describes are interesting in their own right. In addition, by providing a contrast to the techniques employed by standard TCP implementations, learning about TCP Vegas will help us understand current TCP congestion control mechanisms more fully.

Exercises

1. Complete problem 6.14 part (a) from the text. Stop when you get to wall clock time 10. For each tick of the wall clock, show the value of the virtual clock used to determine A_i, which packets arrive, the F_i value assigned to each packet that arrived, the packet (if any) sent, and the packets in the queue at each node. Identify packets by host/arrival time. As you work this problem, think about what data structures would be used to implement the process efficiently, and come prepared to discuss such an implementation.

2. This is another question borrowed from the Cornell instructor who produced one of the traces we looked at using Wireshark.
The trace file mentioned can be found in tom/shared/336/traces. Load the trace file trace4.cap in Wireshark and answer the following questions. For this trace, a stream of TCP packets was sent from orkid to orchid for 1 second. The delay from orkid to orchid was set to 10 ms using NistNet. The complete Ethernet bandwidth of 10 Mbps was available between the two nodes.

(a) Plot the time sequence graph and zoom in to the first 200 ms of the trace. What do you see? Give approximate sizes for the first 5 send windows and explain your observation briefly in words.

(b) Plot the time sequence graph and look at the complete trace. The curve is not a straight line, as you would have observed in other traces. Instead, we have 4 similarly shaped segments in the curve. What events do you believe caused each of these kinks in the curve? Does your explanation precisely predict the number of packets sent in each burst? If not, identify the anomalies you can't quite explain.
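As you think about data structures for exercise 1, it may help to see the core of the fair-queuing computation in code. This is a minimal sketch assuming unit-length packets and a single flow; the update rule F_i = max(A_i, F_{i-1}) + L_i and the names used here are illustrative, not taken from the problem statement.

```python
# Sketch of per-flow finish-tag bookkeeping for fair queuing.
# F_i = max(A_i, F_{i-1}) + L_i, where A_i is the packet's arrival
# time on the virtual clock and L_i is its length in transmission units.

def finish_tag(arrival_vtime, prev_finish, length=1):
    """Compute the finish tag for the next packet of one flow."""
    return max(arrival_vtime, prev_finish) + length

# Example: three unit-length packets from one flow arriving at
# virtual times 0, 0, and 5. The first two queue back to back; the
# third arrives after the flow's queue has drained.
tags = []
prev = 0
for arrival in (0, 0, 5):
    prev = finish_tag(arrival, prev)
    tags.append(prev)
print(tags)  # [1, 2, 6]
```

The packet transmitted next is always the queued packet with the smallest finish tag, which is why a priority queue keyed on F_i is the natural implementation to consider.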

Figure 1: Configuration of the two-sender feedback system.

3. The text states that "It has been shown that additive increase/multiplicative decrease is a necessary condition for a congestion-control mechanism to be stable." For this question, I would like you to investigate some alternatives to additive increase/multiplicative decrease to help understand its advantages.

Additive increase and multiplicative decrease are two examples of linear window size update mechanisms. In general, if W_i is the congestion window size at time i, then after the window size is adjusted by a linear mechanism we will have W_{i+1} = A*W_i + B for some constants A and B. In the case of additive increase/multiplicative decrease, the adjustment made when congestion is detected uses a value of A between 0 and 1 and sets B to zero. On the other hand, when a full window's worth of packets is acknowledged, the update formula uses 1 as the value of both A and B.

To understand the behavior of such a scheme, consider the simple case where just two computers using this approach are sharing a single bottleneck link. Assuming that the capacity of the bottleneck link is R, the diagram shown in Figure 1 can be used to illustrate the way in which the update process will adjust these two flows of packets. In this figure, the point (x, y) represents the state of the system when the flow from the first computer has a window size that allows it to transmit at rate y and the flow from the second computer has a window size that allows it to transmit at rate x. The transmission rate of a station using TCP congestion control will equal its window size divided by the round trip time. Assuming the units used to measure time are such that RTT = 1, it is fair to treat x and y interchangeably as rates or window sizes. We will take this liberty in the following explanation.
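These linear updates are easy to experiment with. Below is a minimal sketch of one round of the two-flow model using the standard AIMD constants (A = 1, B = 1 when a full window is acknowledged; A = 1/2, B = 0 on loss); the function names and the assumption that both flows detect loss in the same round are my simplifications, not part of the question.

```python
# One round of the two-flow bottleneck model under a generic linear
# window update W_{i+1} = A*W_i + B (standard AIMD constants below).

def linear_update(w, A, B):
    return A * w + B

def aimd_round(x, y, R):
    """Advance the state (x, y) one round trip. If the combined rate
    exceeds the link capacity R, both flows see loss and apply
    multiplicative decrease; otherwise both apply additive increase."""
    if x + y > R:
        return linear_update(x, 0.5, 0.0), linear_update(y, 0.5, 0.0)
    return linear_update(x, 1.0, 1.0), linear_update(y, 1.0, 1.0)

# Starting from an unfair state, the gap between the two rates halves
# on every loss event while their sum keeps returning toward R.
x, y = 20.0, 60.0
for _ in range(100):
    x, y = aimd_round(x, y, R=100.0)
```

Plotting the sequence of (x, y) points produced this way reproduces the zig-zag trajectory toward the fairness line that Figure 1 is meant to suggest.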
The shaded triangle includes the points corresponding to all states for which the sum of the packets transmitted by the two flows does not exceed R, the capacity of the bottleneck link. The hypotenuse of this triangle corresponds to the set of states in which the link is maximally utilized without exceeding its capacity. Different points on this line, however, represent different degrees of fairness. At the point (0, R), the link is fully utilized, but only flow 1 is allowed to send packets. Similarly, at point (R, 0) only flow 2 can transmit. The midpoint of the hypotenuse, (R/2, R/2), is therefore the optimal operating state for the system. At this point, the link would be fully utilized and the allocation of bandwidth to the two senders would be fair.

A feedback mechanism like additive increase/multiplicative decrease aims to move the system from one state to another with the goal of approaching this optimal state as closely and as quickly as possible. For example, if the system began in the state corresponding to the point (x_0, y_0) shown in the figure, packets would eventually be lost (since the sum of the flows exceeds the link capacity). When the flows detected this loss, they would update their window sizes, yielding new window sizes x_1 = A*x_0 + B and y_1 = A*y_0 + B. This would correspond to the transition of the system state from (x_0, y_0) to (x_1, y_1) shown in Figure 1. The overall behavior of the system would be determined by the complete series of such transitions it performed.

(a) One possible alternative to additive increase/multiplicative decrease would be additive increase/additive decrease. In this case, when a packet is lost, the window would be updated using the formula W_{i+1} = W_i - B_d. When a full window of packets is sent without loss, the update performed would be W_{i+1} = W_i + B_i. Typically, the constants would be selected so that B_d > B_i. Assuming that the round-trip times for the two flows are the same and that when packets are lost both flows suffer and detect loss, all the states (x_i, y_i) that can ever be reached from some initial state (x_0, y_0) fall on a line that passes through (x_0, y_0). Find the slope and y-intercept of this line.
(b) Another alternative scheme would be to use multiplicative increase and multiplicative decrease. In this case, when congestion is detected, the window size update would be computed using W_{i+1} = A_d*W_i, and after a full window of packets was delivered without loss, the update would be W_{i+1} = A_i*W_i, with 0 < A_d < 1 < A_i and A_i*A_d < 1. Under the same assumptions about round-trip time and loss detection made in part (a), show that all states (x_i, y_i) that can be reached from a given initial state (x_0, y_0) using this scheme fall on a line that passes through (x_0, y_0). Find the slope and y-intercept of this line.

(c) Assuming a fixed round-trip time, the congestion window size used by a flow is proportional to the rate at which it sends data and is therefore proportional to its throughput. In situations like those described in parts (a) and (b), where one flow's window size is a simple function of the other's, it becomes possible to express the fairness of the state of the system as a function of the window size of one flow. Please derive such a formula for fairness as a function of flow 2's window size (i.e., fairness as a function of x_i) for both additive increase/additive decrease and multiplicative increase/multiplicative decrease, using Jain's fairness index as the measure of fairness. Determine how each of these functions behaves as the window sizes change. That is, does the fairness index increase, decrease, or remain the same as the window sizes of the two flows are increased?

(d) Given your observations concerning the behavior of the fairness index from part (c), explain qualitatively how the fairness index will behave as a function of time when the additive increase/multiplicative decrease procedure actually incorporated in TCP is used to adjust window sizes.
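For part (c), it may be handy to check your derived formulas numerically against the definition of Jain's fairness index. The helper below uses the standard formula; the function name is mine.

```python
# Jain's fairness index: J = (sum r_i)^2 / (n * sum r_i^2).
# J = 1 when all rates are equal; J approaches 1/n as one flow dominates.

def jain_index(rates):
    n = len(rates)
    total = sum(rates)
    return total * total / (n * sum(r * r for r in rates))

print(jain_index([50, 50]))   # 1.0  (perfectly fair)
print(jain_index([100, 0]))   # 0.5  (one of two flows takes everything)
```

Evaluating this index along the reachable lines you find in parts (a) and (b) is a quick sanity check on your closed-form answers.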

(e) For all of the subparts of this question, we have assumed that the round-trip times associated with the two flows are equal. This is unrealistic for the Internet, where connection lengths may vary greatly. Suppose that flow 1's round-trip time is three times as long as flow 2's. In this case, each time flow 1 succeeds in delivering one window's worth of packets, flow 2 will deliver three windows' worth of packets. Therefore flow 2 will apply the increase part of any window size adjustment algorithm three times as often as flow 1. How would the results you obtained for the additive increase/additive decrease combination in parts (a) and (c) change in this case? In your answer, assume that flow 2 applies both the increase and decrease adjustments three times as often as flow 1. Would the set of states the system could reach still be a line? If so, what would its slope be? What value will the fairness index for the system approach as the window size grows? What does this suggest about the fairness of additive increase/multiplicative decrease in TCP when flows with different round-trip times interact?

(f) If you have been paying careful attention all semester, you should realize that this is not the first time we have encountered the use of additive increase/multiplicative decrease in a feedback mechanism. What other protocol proposal that we have studied used this technique? Is there a similar argument supporting the assumption that the use of AIMD in this other context will provide convergence to fairness?

4. The diagram from our textbook shown below illustrates the adjustments made to the TCP congestion window under the additive increase/multiplicative decrease strategy (AIMD). This diagram is an idealized representation of actual TCP behavior. Among other things, it suggests that each computer immediately resumes additive increase after applying multiplicative decrease due to a packet loss.
This approximates actual behavior when all losses are detected using duplicate ACKs, but is far from accurate when timeouts occur. For this problem, however, I want you to assume that it accurately reflects the behavior of a single TCP connection. What I want you to explore in this question is what happens when we consider the traffic a router actually processes as many flows independently follow this sawtooth pattern in response to packets dropped by the router.

In particular, suppose that F distinct flows are sending packets through a single router. We will assume that all of the flows send packets of the same size and that all flows have the same round trip time, RTT. We will also assume that variations in the round trip time due to changes in queuing delay are negligible, so that RTT can be considered constant. Given this assumption, we will measure all transmission rates in packets per RTT. This includes the individual flow rates R_i, the combined transmission rate of all flows, R = R_1 + R_2 + ... + R_F, and the router's total capacity, C. Measured in this way, a flow's rate R_i will be equal to its congestion window size measured in packets. Therefore, assuming no packet loss, each R_i will increase by 1 every RTT.

Earlier in this assignment, you showed that under the assumption of equal RTT, the AIMD process is designed to adjust congestion windows in such a way that the system approaches fair allocation and full utilization. That is, we would hope that under AIMD, the system described would tend toward a point where R_i = C/F for all i. This suggests that to make this interesting we should assume C >> F.

Important note: The goal of this problem, and of all the simplifying assumptions made in its statement, is to get a good approximate description of the idealized behavior of AIMD in certain scenarios. This means you should take the word "approximately" seriously whenever it appears.

(a) Suppose that at some time t_0 this system reaches the ideal point where R_i = C/F for all i, and none of the router's buffers are occupied. No packets will need to be dropped, so over the next RTT, all flows will increase their sending rate by 1. As a result, at time t_0 + RTT, the combined rate R will equal C + F, and F of the router's buffers will be full. If the router has enough buffers to continue processing packets without any drops until time t_0 + 2RTT, what will R be at this time and how many packets will be buffered? [1]

(b) Now, assume instead that the router actually has only F packet buffers. As a result, immediately after t_0 + RTT the router will begin dropping packets.
Approximately how many packets will have to be dropped before the application of multiplicative decrease sufficiently reduces the combined flow rate so that the number of buffers required is F? In your answer, assume that the router is lucky (or clever) and drops packets from as many distinct flows as possible (rather than dropping many packets from a few flows and none from others).

(c) Under the assumptions stated in part (b), approximately what will the combined flow rate R be at time t_0 + 2RTT?

(d) After the events described in parts (b) and (c), the flows will resume AIMD and eventually reach the state where the combined rates of all flows exceed C, leading to another, similar round of packet dropping. As long as all flows remain active, the system will repeat this cycle indefinitely. How long will each cycle last (measured in RTTs), and what will be the average number of packets dropped per RTT, averaged over many such cycles?

(e) Give a formula for the average efficiency of a router operating in this cycle. Your answer should account for inefficiency due to both packet loss and periods where the combined rates of all flows are less than the router's capacity. Apply this formula and your formula from part (d) to the case where C = 100 and F = 10.

(f) Recalling the advice in the introduction to this problem that you should take the word "approximately" seriously, consider how the results would differ if the flows passing through the router were all using TCP Vegas rather than TCP Reno.

[1] The assumption that RTT is fixed might become questionable at this point with many packets buffered, but please continue to make this assumption to keep things tractable.

Assume, as we did in part (a), that the system magically arrives at a state where the flows are fairly sharing the full capacity of the router at time t_0. Also assume that all flows started when all the routers on the paths used were idle and that there are enough buffers in the router that no packets will have to be dropped. Finally, in the earlier parts of this problem, we simplified things by assuming that the router was lucky enough to spread dropped packets among the flows equally. With Vegas, you should instead assume that the router is lucky enough to spread any queuing delays equally among the flows. That is, you should assume that each flow will observe the same delivery rate. Given these assumptions, what (if any) cycle would you expect the flow rates to follow in the future?

(g) Now, still assuming that we are using Vegas rather than Reno, suppose that at time t_0 + RTT another flow (flow F+1) begins to transmit packets. Assuming the Vegas mechanism designed to end slow start before congestion begins moves flow F+1 into additive increase before its rate exceeds its fair share, will the system eventually reach a state where all the flows divide the capacity of the network fairly among the F+1 flows? If not, what will happen?
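To build intuition for the Reno cycle analyzed in parts (a)-(e) of question 4, here is a rough simulation sketch of F synchronized AIMD flows sharing a router of capacity C, with all rates in packets per RTT. The drop threshold of C + F (the point at which the F buffers assumed in part (b) overflow) and the perfectly synchronized halving are simplifications drawn from the problem's assumptions; the code is my illustration, not a solution.

```python
# Idealized sawtooth of F identical AIMD flows through one router.
# Because the flows stay synchronized, a single per-flow rate suffices.

def simulate(C=100, F=10, rounds=50):
    rate = C / F            # each flow starts at its fair share C/F
    totals = []             # combined rate R recorded after each RTT
    for _ in range(rounds):
        if rate * F > C + F:    # the F buffers have overflowed
            rate /= 2           # multiplicative decrease at every flow
        else:
            rate += 1           # additive increase: +1 packet per RTT
        totals.append(rate * F)
    return totals

history = simulate()
# The combined rate traces a repeating sawtooth: it climbs past C,
# overshoots, then halves. Reading off the period and the overshoot
# gives a feel for the cycle length and drop counts asked about above.
```

Varying C and F (try the C = 100, F = 10 case from part (e)) shows how the overshoot and recovery time scale, which is a useful sanity check on your formulas.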