Episode 4. Flow and Congestion Control. Baochun Li Department of Electrical and Computer Engineering University of Toronto
2 Recall the previous episode
- Detailed design principles in: the link layer, the network layer
- Topic of this episode: design principles in the end-to-end layer, and congestion control: a network system design issue
3 Saltzer 7.5.6, 7.6; Keshav Chapter 9.7, , CUBIC paper (critique 2)
4 Design Principles in the End-to-End Layer
5 The network layer
- The network layer provides a useful but not completely dependable best-effort communication environment that will deliver data segments to any destination
- But with no guarantees on the order of arrival, certainty of arrival, or the accuracy of content
- This is too hostile for most applications!
6 The end-to-end layer
- The job of the end-to-end layer is to create a more comfortable communication environment that has the features of performance, reliability, and certainty that an application needs
- Problem: different applications have different needs, but they tend to fall into classes of similar requirements
- For each class it is possible to design a broadly useful protocol, called the transport protocol
- A transport protocol operates between two attachment points of a network (a client and a service), with the goal of moving either messages or a stream of data between them, while providing a particular set of assurances
7 Transport Protocol Design
8 Sending multi-segment messages
- The simplest method of sending a multi-segment message end-to-end: send one segment, wait for the receiver to acknowledge it, then send the second segment, and so on
- Known as the lock-step protocol: it takes N round-trip times to send N segments!
[Timeline diagram: the sender transmits segment i, waits for acknowledgment i, then sends segment i+1; this repeats N times before the transfer is done.]
9 Overlapping transmissions
- Adopt the pipelining principle: as soon as the first segment has been sent, immediately send the next ones, without waiting for acknowledgments
- When the pipeline is completely filled, there may be several segments in the network: N segments require N transmission times + 1 RTT
[Timeline diagram: the sender transmits segments 1 through N back to back, and the acknowledgments return one round trip later.]
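The timing difference between the two schemes can be sketched as a toy model, assuming a fixed per-segment transmission time, a fixed RTT, and no losses (the function names are illustrative):

```python
# Toy timing model: lock-step vs. pipelined transmission.
# Assumes constant transmission time per segment, constant RTT,
# and no packet loss.

def lockstep_time(n_segments, transmit, rtt):
    # Lock-step: each segment waits for its ACK before the next is
    # sent, so every segment costs a transmission plus an RTT.
    return n_segments * (transmit + rtt)

def pipelined_time(n_segments, transmit, rtt):
    # Pipelined: all segments go back to back; only the final
    # acknowledgment adds a round trip.
    return n_segments * transmit + rtt

# 100 segments, 1 ms transmission time, 100 ms RTT (times in ms):
print(lockstep_time(100, 1, 100))    # 10100 ms
print(pipelined_time(100, 1, 100))   # 200 ms
```

With a long round trip the pipelined sender finishes roughly 50 times sooner in this example, which is the whole point of keeping the pipe full.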
10 But things can go wrong: lost packets
- One or more packets or acknowledgments may be lost along the way
- The sender needs to maintain a list of segments sent; as each acknowledgment gets back, the sender checks that segment off its list
- After sending the last segment, the sender sets a timer to expire a little more than one round-trip time in the future
- If, upon receiving an acknowledgment, the list of missing acknowledgments becomes empty, all is well
- Otherwise, the sender resends each segment in the list, starts another timer, and repeats the sequence until every segment is acknowledged (or the retry limit is reached)
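The bookkeeping described above can be sketched as follows; this is a minimal illustration, not real TCP code, and the class and method names are made up:

```python
# Sender-side bookkeeping for a multi-segment send: keep a list of
# outstanding segments, check them off as ACKs return, and resend
# whatever is still missing when the timer expires.

class MultiSegmentSender:
    def __init__(self, segment_ids):
        self.unacked = set(segment_ids)   # sent but not yet acknowledged

    def on_ack(self, segment_id):
        self.unacked.discard(segment_id)  # check it off the list

    def all_acknowledged(self):
        return not self.unacked

    def on_timer_expiry(self, resend):
        # Resend each segment still on the list; a real protocol
        # would then start another timer and wait again.
        for segment_id in sorted(self.unacked):
            resend(segment_id)

sender = MultiSegmentSender([1, 2, 3, 4])
sender.on_ack(1)
sender.on_ack(3)
resent = []
sender.on_timer_expiry(resent.append)
print(resent)   # [2, 4]
```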
11 But things can go wrong: bottlenecks
- When the sender generates data, the network can transmit it faster than the (slower) receiver can accept it
- The transport protocol needs to include some method of controlling the rate at which the sender generates data, called flow control
- A basic intuitive idea: the sender starts by asking the receiver how much data the receiver can handle
- The response from the receiver is known as a window: the sender asks for permission to send, and the receiver responds by quoting a window size
- The sender then sends that much data and waits until it receives permission to send more
12 Flow control with a fixed window
[Timeline diagram: the sender asks "may I send?"; the receiver opens a 4-segment window; the sender transmits segments 1-4 and waits; once the receiver has buffered and finished processing segments 1-4 it reopens the window ("send 4 more"), and the sender continues with segments 5 and 6.]
13 Sliding windows
- As soon as it has freed up a segment buffer, the receiver can immediately send permission for a window that is one segment larger
- Either by sending a separate message or, if there happens to be an ACK ready to go, piggy-backing on that ACK
- The sender keeps track of how much window space is left, and increases that number whenever additional permission arrives
14 Self-pacing
- Once the sender fills a sliding window, it cannot send the next data element until the acknowledgment of the oldest data element in the window returns
- At the same time, the receiver cannot generate acknowledgments any faster than the network can deliver data elements
- Because of these two considerations, the rate at which the window slides adjusts itself automatically to equal the bottleneck data rate!
15 Appropriate window size
- Do we still need to know the network round-trip time at the sender? Yes:
  window size >= round-trip time x bottleneck data rate
- This is the bandwidth-delay product
- If a too-large round-trip time estimate is used in setting the window, the resulting excessive window size will simply increase the length of packet forwarding queues in the network
- Those longer queues will increase the transit time, and the increase will lead the sender to think that it needs an even larger window: positive feedback!
- The estimate needs to err on the side of being too small
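The rule of thumb above is simple arithmetic; the numbers below are illustrative, not from the lecture:

```python
# Minimum window = round-trip time x bottleneck data rate
# (the bandwidth-delay product).

def min_window_bytes(bottleneck_bits_per_s, rtt_seconds):
    # Convert the link rate to bytes/s, then multiply by the RTT.
    return bottleneck_bits_per_s / 8 * rtt_seconds

# A 100 Mbit/s bottleneck with a 50 ms round trip needs a window
# of at least 625 KB to keep the pipe full:
print(min_window_bytes(100e6, 0.05))   # 625000.0
```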
16 Congestion control: a network-wide problem of managing shared resources
17 Shared resources: everywhere in a system
- Resource sharing examples in systems:
  - Many virtual processors (threads) sharing a few physical processors using a thread manager
  - A multilevel memory manager creates the illusion of large, fast virtual memories by combining a small, fast shared memory with large, slow storage devices
- In networks, the resource that is shared is a set of communication links and the supporting packet forwarding switches
- They are geographically and administratively distributed, so managing them is more complex!
18 Analogy: supermarket vs. packet switch
- Queues exist to manage the problem that packets may arrive at a switch at a time when the outgoing link is already busy transmitting another packet
- Just like checkout lines in the supermarket
- Any time there is a shared resource, and the demand for that resource comes from several statistically independent sources, there will be fluctuations in the arrival of load
- Thus there will be fluctuations in the length of the queue, and in the time spent waiting for service in the queue
- Offered load > capacity of a resource: overloaded
19 How long will overload persist?
- If the duration of overload is comparable to the service time, it is normal (the time in a supermarket to serve one customer, or the time for a packet forwarding switch to handle one packet)
- In this case, a queue handles short bursts of too much demand by time-averaging with adjacent periods when there is excess capacity
- If overload persists for a time significantly longer than the service time, there begins to develop a risk that the system will fail to meet some specification, such as maximum delay
- When this occurs, the resource is said to be congested
- If congestion is chronic, the length of the queue will grow without bound
20 The stability of offered load
- The stability of the offered load is another factor in the frequency and duration of congestion
- When the load on a resource is aggregated from a large number of statistically independent small sources, averaging can reduce the frequency and duration of load peaks
- When the load comes from a small number of large sources, even if the sources are independent, the probability that they all demand service at about the same time can be high enough that congestion is frequent or long-lasting
21 Congestion collapse
- Competition for a resource may lead to waste of that resource
- Counter-intuitive, but the supermarket analogy can help in understanding it:
  - Customers who are tired of waiting may just walk out, leaving filled shopping carts behind
  - Someone has to put the goods from abandoned carts back on the shelves
  - One or two of the checkout clerks leave their registers to do so
  - The rate of sales being rung up drops while they are away
  - The queues at the remaining registers grow longer, causing more people to abandon their carts
  - Eventually, the clerks will be doing nothing but restocking
22 Self-sustaining nature of congestion collapse
- Once temporary congestion induces a collapse, even if the offered load drops back to a level that the resource can handle, the already induced waste rate can continue to exceed the capacity of the resource
- This will cause it to continue to waste the resource, and remain congested indefinitely
[Figure: useful work done vs. offered load. An unlimited resource keeps scaling; a limited resource with no waste flattens at the capacity of the resource; with waste, useful work falls off past capacity: congestion collapse.]
23 Primary goal of resource management
- Avoid congestion collapse! Either by increasing the capacity of the resource, or by reducing the offered load
- There is a need to move quickly to a state in which the load is less than the capacity of the resource
- But when offered load is reduced, the amount reduced does not really go away: it is just deferred to a later time at the source
- The source is still averaging periods of overload with periods of excess capacity, but over a longer period of time
24 How to increase capacity or reduce load?
- It is necessary to provide feedback to one or more control points: an entity that determines the amount of resource that is available, or the load being offered
- A congestion control system is fundamentally a feedback system
- A delay in the feedback path can lead to oscillations in load
25 The supermarket and call centre analogies
- In a supermarket, a store manager can watch the queues at the checkout lines
- Whenever there are more than two or three customers in any line, the manager calls for staff elsewhere in the store to drop what they are doing and temporarily take stations as checkout clerks
- This practically increases capacity
- When you call customer service, you may hear an automatic response message: "Your call is important to us. It will be 30 minutes before we can answer."
- This may lead some callers to hang up and try again at a different time
- This practically decreases load
- Both may lead to oscillations
26 Resource Management in Networks
27 Shared resources in a computer network
- Communication links
- The processing and buffering capacity of the packet forwarding switches
28 Main Challenges, Part 1
- There is more than one resource
  - Even a small number of resources can be used up in a large number of different ways, which is complex to keep track of
  - There can be dynamic interactions among different resources: as one nears capacity it may push back on another, which may push back on yet another, which may push back on the first one!
- It is easy to induce congestion collapse
  - As queues for a particular communication link grow, delays grow
  - When queuing delays become too long, the timers of higher-layer protocols begin to expire and trigger retransmissions of the delayed packets
  - The retransmitted packets join the long queues, and waste capacity
29 No, we cannot install more buffers!
- As memory gets cheaper, the idea is tempting, but it doesn't work
- Suppose memory is so cheap that a packet forwarder can be equipped with an infinite buffer, which can absorb an unlimited amount of overload
- But as more buffers are used, the queuing delay grows
- At some point the queuing delay exceeds the timeouts of end-to-end protocols, and packets are retransmitted
- The offered load is now larger, so the queue grows even longer; the process becomes self-sustaining
- The infinite buffer does not solve the problem; it makes it worse!
30 Main Challenges, Part 2
- There are limited options to expand capacity
- Capacity is determined by physical facilities (e.g., wireless spectrum)
- One can try sending some queued packets via an alternate path, but such strategies are too complex to work well
- Reducing the offered load (the demand) is the only realistic way
31 Main Challenges, Part 3
- The options to reduce load are awkward: the control point for the offered load is too far away
- The feedback path to that point may be long: by the time the feedback signal gets there, the sender may have stopped sending; the feedback may also get lost
- The control point must be capable of reducing its offered load: video streaming protocols are not able to do this!
- The control point must be willing to cooperate
  - The packet forwarder in the network layer may be under a different administration than the control point in the end-to-end layer
  - The control point is more interested in keeping its offered load equal to its intended load, in the hope of capturing more of the capacity in the face of competition! (think BitTorrent)
32 Possible ideas to address these challenges
33 Overprovisioning
- Basic idea: configure each link of the network to have 125% or 200% as much capacity as the offered load at the busiest minute of the day
- Works best on interior links of a large network, where no individual client represents more than a tiny fraction of the load: the average load offered by a large number of statistically independent sources is relatively stable
- Problems:
  - Odd events can disrupt statistical independence
  - Overprovisioning on one link will move the congestion to another
  - At the edge, statistical averaging stops working: the flash crowd
  - User usage patterns may adapt to the additional capacity
34 Pricing in a market: the invisible hand
- Since network resources are just another commodity with limited availability, it should be possible to use pricing as a congestion control mechanism
- If demand for a resource temporarily exceeds its capacity, clients will bid up the price
- The increased price will cause some clients to defer their use of the resource until a time when it is cheaper, thereby reducing the offered load; it will also induce additional suppliers to provide more capacity
- Challenges:
  - How do we make it work on the short time scales of congestion?
  - Clients need a way to predict the costs in the short term, too
  - There has to be a minimal barrier to entry for alternate suppliers
35 How do we address these challenges? Decentralized schemes are extremely scalable
36 Case in point: the Internet
37 Cross-layer Cooperation: Feedback
38 Cross-layer feedback: basic idea
- The packet forwarder that notices congestion provides feedback to one or more end-to-end layer sources
- The end-to-end source responds by reducing its offered load
- The best solution: the packet forwarder simply discards the packet. Simple and reliable!
39 Which packet to discard?
- The choice is not obvious
- The simplest strategy, tail drop, limits the size of the queue; any packet that arrives when the queue is full gets discarded
- A better technique, called random drop, may be to choose a victim from the queue at random: the sources that are contributing the most to congestion are the most likely to receive the feedback
- Another refinement, called early drop, begins dropping packets before the queue is completely full, in the hope of alerting the source sooner
- The goal of early drop is to start reducing the offered load as soon as the possibility of congestion is detected, rather than waiting until congestion is confirmed: avoidance rather than recovery
- Random drop + early drop: random early detection (RED)
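The three policies can be sketched as follows. This is a toy model with a plain list as the queue; the capacity, threshold, and drop-probability values are illustrative, not from any real router:

```python
import random

def tail_drop(queue, packet, capacity):
    # Drop the arriving packet if the queue is already full.
    if len(queue) >= capacity:
        return False
    queue.append(packet)
    return True

def random_drop(queue, packet, capacity):
    # When full, evict a random queued packet instead: sources
    # sending the most hold the most slots, so they are the most
    # likely to receive the feedback.
    if len(queue) >= capacity:
        queue.pop(random.randrange(len(queue)))
    queue.append(packet)
    return True

def early_drop(queue, packet, capacity, threshold=0.5, drop_prob=0.1):
    # Begin dropping probabilistically once the queue passes a
    # threshold, before it is completely full (the idea behind RED).
    if len(queue) >= capacity:
        return False
    if len(queue) >= threshold * capacity and random.random() < drop_prob:
        return False
    queue.append(packet)
    return True
```

Combining the last two ideas, random victim selection plus probabilistic early dropping, gives the flavor of RED.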
40 Cross-layer Cooperation: Control
41 What should the end-to-end protocol do?
- The end-to-end protocol learns of a lost packet; what now?
- One idea: just retransmit the lost packet, and continue to send more data as rapidly as its application supplies it
- This way, it may discover that by sending packets at the greatest rate it can sustain, it will push more data through the congested packet forwarder
- The problem: if this is the standard mode of operation of all end hosts, congestion will set in and all will suffer: the tragedy of the commons
42 Two things the end-to-end protocol can do
- Be careful about the use of timers: this involves setting the timer's value
- Pace the rate at which it sends data (automatic rate adaptation): this involves managing the flow control window
- Both require having an estimate of the round-trip time between the two ends of the protocol
43 The retransmit timer
- With congestion, an expired timer may imply that either a queue in the network has grown too long, or a packet forwarder has intentionally discarded the packet
- We need to reduce the rate of retransmissions by setting longer retransmission timer intervals
- If short timer intervals are used, congestion will increase queuing delays; longer queuing delays will increase the observed round-trip times; these observations will increase the round-trip estimate used for setting future retransmit timers
- When a timer does expire, exponential backoff of the timer interval should be used when retransmitting the same packet
- This effectively avoids contributing to congestion collapse
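These two timer disciplines can be sketched together. The smoothing constant (alpha = 0.125) and the safety factor of 2 on the timeout are common textbook choices, not values from the lecture:

```python
# Retransmit timer sketch: an exponentially weighted moving average
# of observed round-trip times sets the base timeout, and exponential
# backoff stretches it after each expiry.

class RetransmitTimer:
    def __init__(self, initial_rtt=1.0, alpha=0.125):
        self.srtt = initial_rtt   # smoothed round-trip time estimate
        self.alpha = alpha
        self.backoff = 1

    def observe_rtt(self, sample):
        # Fold a new RTT sample into the smoothed estimate; a fresh
        # acknowledgment also resets the backoff.
        self.srtt = (1 - self.alpha) * self.srtt + self.alpha * sample
        self.backoff = 1

    def on_expiry(self):
        self.backoff *= 2         # exponential backoff per expiry

    def timeout(self):
        # Err on the long side: better to wait than to retransmit
        # into an already congested network.
        return 2 * self.srtt * self.backoff

timer = RetransmitTimer(initial_rtt=0.1)
print(timer.timeout())   # 0.2
timer.on_expiry()
print(timer.timeout())   # 0.4
```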
44 Automatic rate adaptation
- The flow control window and the receiver's buffer should both be at least as large as the bottleneck data rate multiplied by the round-trip time: the BDP ("bandwidth-delay product")
- But if the window is larger than that, more packets will pile up in the queue of the bottleneck link
- We need to ensure that the flow control window is no larger than necessary
45 The original design of TCP
- In the original TCP design, the only form of acknowledgment to the sender was "I have received all the bytes up to X", never "I am missing bytes Y through Z"
- The consequences:
  - When a timer expired due to a lost packet, as soon as the sender retransmitted that packet, the timer of the next packet expired, causing its retransmission
  - This repeats until the next acknowledgment returns, a full round trip later
  - On long-delay routes, the flow control window may be large, so each discarded packet triggers retransmission of a window full of packets: congestion collapse!
46 Solution?
- By the time this effect was noticed, TCP was already widely deployed, so changes to TCP were severely constrained: they had to be backward compatible!
- The result: one expired timer leads to slow start
  - Send just one packet, and wait for its acknowledgment
  - For each acknowledged packet, add one to the window size
  - In each RTT, the number of packets that the sender sends doubles
- This repeats until:
  - the receiver's window size has been reached (the network is not the bottleneck!), or
  - a packet loss has been detected, using duplicate acknowledgments
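The doubling behavior can be seen in a tiny simulation; this is a sketch that ignores losses and the receiver's window:

```python
# Slow start: the window begins at one packet, and every ACK adds
# one more, so the window doubles once per round-trip time.

def slow_start_windows(n_rtts):
    window, history = 1, []
    for _ in range(n_rtts):
        history.append(window)
        window += window   # one increment per packet ACKed this RTT
    return history

print(slow_start_windows(6))   # [1, 2, 4, 8, 16, 32]
```

Despite the name, slow start is exponential: it reaches a large window in logarithmically many round trips.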
47 Duplicate acknowledgments
- The receiving TCP implementation is modified slightly: whenever it receives an out-of-order packet, it sends back a duplicate of its latest acknowledgment
- Such a duplicate can be interpreted by the sender as a NAK
- The sender then operates in an equilibrium mode: upon a duplicate acknowledgment, the sender retransmits just the first unacknowledged packet, and also drops its window size to some fixed fraction of its previous size
- After this it probes gently for more capacity:
  - Additive increase: whenever all the packets in a round-trip time are successfully acknowledged, the sender increases the window size by 1
  - Multiplicative decrease: whenever a duplicate acknowledgment arrives again, the sender decreases the size of the window again, by a fixed fraction
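The equilibrium mode above can be sketched as a toy loop. The decrease factor of one half is an illustrative choice; the slides say only "a fixed fraction":

```python
# AIMD equilibrium: +1 packet per loss-free round trip,
# multiplicative decrease on each duplicate acknowledgment.

def aimd_trace(loss_events, start_window=10.0, decrease_factor=0.5):
    window, trace = start_window, []
    for duplicate_ack in loss_events:   # one entry per RTT
        if duplicate_ack:
            window = max(1.0, window * decrease_factor)
        else:
            window += 1.0
        trace.append(window)
    return trace

print(aimd_trace([False, False, True, False]))
# [11.0, 12.0, 6.0, 7.0]
```

The resulting trace is the familiar sawtooth: slow linear climbs punctuated by sharp halvings.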
48 The AIMD TCP
[Figure: window size over time. Slow start ramps the window up, then additive increase takes over; a duplicate acknowledgment triggers a multiplicative decrease; when the delay timer expires the sender stops sending and enters slow start again.]
49 This is where the story stops in a typical textbook
50 But it is a story far from what happens in reality!
51 Real-world measurement studies have shown
- By probing 5000 popular web servers, researchers have found that only 15-20% use AIMD TCP! (P. Yang, et al., "TCP Congestion Avoidance Algorithm Identification," IEEE ICDCS)
- What about the other 80%?
- It turns out that web servers, which are what define the Internet, use a wide variety of different TCP variants, each with its own congestion control protocol
- But why?
52 The fundamental problem with AIMD TCP
- As the Internet evolves, the number of long-latency, high-bandwidth networks grows
- The bandwidth-delay product (BDP): the total number of packets in flight, determined by the flow control window size on the sender, must fully utilize the bandwidth
- Standard AIMD TCP increases its congestion window too slowly in high-BDP environments
- Example: with a bandwidth of 10 Gbps, an RTT of 100 ms, and a packet size of 1250 bytes, it takes 10,000 seconds to fully utilize the link
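The arithmetic behind the example is worth spelling out:

```python
# 10 Gbit/s link, 100 ms RTT, 1250-byte packets: how long does
# additive increase (one extra packet per RTT) take to open the
# window to the full bandwidth-delay product?

bandwidth_bps = 10e9          # bits per second
rtt = 0.1                     # seconds
packet_bits = 1250 * 8        # 10,000 bits per packet

bdp_packets = bandwidth_bps * rtt / packet_bits   # window needed
time_to_fill = bdp_packets * rtt                  # +1 packet per RTT

print(bdp_packets)    # 100000.0 packets in flight
print(time_to_fill)   # 10000.0 seconds
```

Nearly three hours of loss-free additive increase just to fill one such pipe, which is why scalable variants like BIC and CUBIC were designed.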
53 New congestion control design objectives
- Highly scalable to high-BDP environments
- Very slow (gentle) window increase at the saturation point
- Fair to AIMD TCP flows: backward compatibility
54 Binary Increase Congestion (BIC)
- After a packet loss, BIC reduces its window by a multiplicative factor of β; the default is 0.2
- The window size just before the reduction is recorded as Wmax, and the size after the reduction as Wmin
- In the next step, it finds the midpoint between these two sizes and jumps there: a binary search
- But if the midpoint is very far from Wmin, the increment is capped at a constant value called Smax
- If there is no loss, Wmin is set to the new window size
55 Binary Increase Congestion (BIC)
- The process continues until the increment is less than a constant value Smin; then the window is set to the maximum window
- If there is still no loss, a new maximum must be found, and BIC enters a max probing phase
- The window growth function in max probing is exactly symmetric to the previous part
- BIC was proposed by Injong Rhee's group at NCSU in an INFOCOM 2004 paper, and was later used in the Linux kernel and set as the default TCP (since )
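The binary-search step can be sketched as below. This is a simplification of the real algorithm; β follows the default named on the slides, while the Smax and Smin values here are merely illustrative:

```python
# BIC window growth between losses: jump toward the midpoint between
# the current window and Wmax, clamping each step to [Smin, Smax].

def bic_step(window, w_max, s_max=32.0, s_min=0.01):
    increment = (w_max - window) / 2.0   # binary-search midpoint
    if increment > s_max:
        increment = s_max                # midpoint too far: cap the jump
    elif 0 < increment < s_min:
        increment = s_min                # steps below Smin: stop creeping
    return window + increment

w = 50.0
for _ in range(3):
    w = bic_step(w, 100.0)
    print(w)   # 75.0, then 87.5, then 93.75
```

Each step halves the remaining distance to Wmax, so the window converges quickly at first and then very gently near the old loss point, exactly the "slow increase at saturation" objective.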
56 BIC: during the congestion epoch
- Congestion epoch: defined as the interval between two packet losses
[Figure: window growth over a congestion epoch: additive increase, then binary search converging toward Wmax, then max probing beyond it.]
57 Problems with BIC
- BIC works very well in production, but in low-speed or short-RTT networks it is too aggressive toward regular TCP
- Its different phases (binary search increase, max probing, Smax and Smin) make its implementation not very efficient
- A new congestion control protocol is required to solve these problems, while keeping BIC's advantages of stability and scalability
58 CUBIC
- As the name suggests, CUBIC uses a cubic function for window growth
- It uses elapsed time, instead of RTT, to increase the window size
- It contains a TCP mode to behave the same as AIMD TCP when RTTs are short
[Figure: cubic window growth curve, with a steady-state (concave) region around Wmax followed by a max probing (convex) region.]
- CUBIC, again, was proposed by Injong Rhee's group at NCSU, and was later used in the Linux kernel and set as the default TCP (since )
59 CUBIC: controlling the window size
- After a packet loss, CUBIC reduces its window by a multiplicative factor of β; the default is 0.2
- The window size just before the reduction is recorded as Wmax
- After it enters congestion avoidance, it starts to increase the window using a cubic function whose plateau is set at Wmax
- The window grows in concave mode until it reaches Wmax, then enters the convex part
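The growth curve itself, in the form given in the CUBIC paper, is W(t) = C*(t - K)^3 + Wmax, where K = cbrt(Wmax*β/C) is the time needed to climb back to Wmax. β = 0.2 follows the slides; C = 0.4 is the paper's default scaling constant:

```python
# CUBIC window growth after a loss: a concave approach to Wmax,
# then convex max probing past it.

def cubic_window(t, w_max, c=0.4, beta=0.2):
    # K: seconds until the curve's plateau returns to w_max.
    k = (w_max * beta / c) ** (1.0 / 3.0)
    return c * (t - k) ** 3 + w_max

# Right after the loss (t = 0) the window is w_max * (1 - beta):
print(cubic_window(0.0, 100.0))   # approximately 80.0
```

Because the slope of the cubic flattens out near t = K, the window lingers gently at the plateau around Wmax (the previous loss point) before probing convexly for new capacity.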
60 Real-world measurement studies
- Over 40% of the web servers use BIC/CUBIC TCP! (P. Yang, et al., "TCP Congestion Avoidance Algorithm Identification," IEEE ICDCS)
- About 10-15% of the servers, running Windows Server 2003 or later, use different versions of CTCP (Compound TCP); the latest version was deployed as a Hotfix by Microsoft to Windows Server products
61 Saltzer 7.5.6, 7.6; Keshav Chapter 9.7, , CUBIC paper (critique 2)
62 BBR: congestion-based (rather than loss-based) congestion control
63 Reading: Comm. ACM, February 2017
: Computer Networks Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control TCP performance We ve seen how TCP the protocol works Sequencing, receive window, connection setup and teardown And
More informationCOMP/ELEC 429/556 Introduction to Computer Networks
COMP/ELEC 429/556 Introduction to Computer Networks The TCP Protocol Some slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica, Hui Zhang T. S. Eugene Ng eugeneng at cs.rice.edu
More informationCongestion. Can t sustain input rate > output rate Issues: - Avoid congestion - Control congestion - Prioritize who gets limited resources
Congestion Source 1 Source 2 10-Mbps Ethernet 100-Mbps FDDI Router 1.5-Mbps T1 link Destination Can t sustain input rate > output rate Issues: - Avoid congestion - Control congestion - Prioritize who gets
More informationChapter II. Protocols for High Speed Networks. 2.1 Need for alternative Protocols
Chapter II Protocols for High Speed Networks 2.1 Need for alternative Protocols As the conventional TCP suffers from poor performance on high bandwidth delay product links [47] meant for supporting transmission
More informationOperating Systems and Networks. Network Lecture 10: Congestion Control. Adrian Perrig Network Security Group ETH Zürich
Operating Systems and Networks Network Lecture 10: Congestion Control Adrian Perrig Network Security Group ETH Zürich Where we are in the Course More fun in the Transport Layer! The mystery of congestion
More informationWhere we are in the Course. Topic. Nature of Congestion. Nature of Congestion (3) Nature of Congestion (2) Operating Systems and Networks
Operating Systems and Networks Network Lecture 0: Congestion Control Adrian Perrig Network Security Group ETH Zürich Where we are in the Course More fun in the Transport Layer! The mystery of congestion
More informationReliable Transport II: TCP and Congestion Control
Reliable Transport II: TCP and Congestion Control Brad Karp UCL Computer Science CS 3035/GZ01 31 st October 2013 Outline Slow Start AIMD Congestion control Throughput, loss, and RTT equation Connection
More informationBandwidth Allocation & TCP
Bandwidth Allocation & TCP The Transport Layer Focus Application Presentation How do we share bandwidth? Session Topics Transport Network Congestion control & fairness Data Link TCP Additive Increase/Multiplicative
More informationCSE/EE 461. TCP congestion control. Last Lecture. This Lecture. Focus How should senders pace themselves to avoid stressing the network?
CSE/EE 461 TCP congestion control Last Lecture Focus How should senders pace themselves to avoid stressing the network? Topics congestion collapse congestion control Application Presentation Session Transport
More informationCongestion Control in Communication Networks
Congestion Control in Communication Networks Introduction Congestion occurs when number of packets transmitted approaches network capacity Objective of congestion control: keep number of packets below
More informationCSCI Topics: Internet Programming Fall 2008
CSCI 491-01 Topics: Internet Programming Fall 2008 Transport Layer Derek Leonard Hendrix College October 20, 2008 Original slides copyright 1996-2007 J.F Kurose and K.W. Ross 1 Chapter 3: Roadmap 3.1 Transport-layer
More informationLecture 15: Transport Layer Congestion Control
Lecture 15: Transport Layer Congestion Control COMP 332, Spring 2018 Victoria Manfredi Acknowledgements: materials adapted from Computer Networking: A Top Down Approach 7 th edition: 1996-2016, J.F Kurose
More informationFlow and Congestion Control (Hosts)
Flow and Congestion Control (Hosts) 14-740: Fundamentals of Computer Networks Bill Nace Material from Computer Networking: A Top Down Approach, 6 th edition. J.F. Kurose and K.W. Ross traceroute Flow Control
More informationComputer Networking Introduction
Computer Networking Introduction Halgurd S. Maghdid Software Engineering Department Koya University-Koya, Kurdistan-Iraq Lecture No.11 Chapter 3 outline 3.1 transport-layer services 3.2 multiplexing and
More informationTransport Protocols and TCP: Review
Transport Protocols and TCP: Review CSE 6590 Fall 2010 Department of Computer Science & Engineering York University 1 19 September 2010 1 Connection Establishment and Termination 2 2 1 Connection Establishment
More informationTCP Congestion Control. Lecture 16. Outline. TCP Congestion Control. Additive Increase / Multiplicative Decrease (AIMD)
Lecture 16 TCP Congestion Control Homework 6 Due Today TCP uses ACK arrival as a signal to transmit a new packet. Since connections come-and-go TCP congestion control must be adaptive. TCP congestion control
More informationReliable Transport I: Concepts and TCP Protocol
Reliable Transport I: Concepts and TCP Protocol Brad Karp UCL Computer Science CS 3035/GZ01 29 th October 2013 Part I: Transport Concepts Layering context Transport goals Transport mechanisms 2 Context:
More informationTransport Protocols & TCP TCP
Transport Protocols & TCP CSE 3213 Fall 2007 13 November 2007 1 TCP Services Flow control Connection establishment and termination Congestion control 2 1 TCP Services Transmission Control Protocol (RFC
More informationTCP so far Computer Networking Outline. How Was TCP Able to Evolve
TCP so far 15-441 15-441 Computer Networking 15-641 Lecture 14: TCP Performance & Future Peter Steenkiste Fall 2016 www.cs.cmu.edu/~prs/15-441-f16 Reliable byte stream protocol Connection establishments
More information6.3 TCP congestion control 499
6.3 TCP congestion control 499 queue each time around. This results in each flow getting 1/nth of the bandwidth when there are n flows. With WFQ, however, one queue might have a weight of 2, a second queue
More informationAssignment 7: TCP and Congestion Control Due the week of October 29/30, 2015
Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015 I d like to complete our exploration of TCP by taking a close look at the topic of congestion control in TCP. To prepare for
More informationCUBIC. Qian HE (Steve) CS 577 Prof. Bob Kinicki
CUBIC Qian HE (Steve) CS 577 Prof. Bob Kinicki Agenda Brief Introduction of CUBIC Prehistory of CUBIC Standard TCP BIC CUBIC Conclusion 1 Brief Introduction CUBIC is a less aggressive and more systematic
More informationCS 5520/ECE 5590NA: Network Architecture I Spring Lecture 13: UDP and TCP
CS 5520/ECE 5590NA: Network Architecture I Spring 2008 Lecture 13: UDP and TCP Most recent lectures discussed mechanisms to make better use of the IP address space, Internet control messages, and layering
More informationCSCI-1680 Transport Layer II Data over TCP Rodrigo Fonseca
CSCI-1680 Transport Layer II Data over TCP Rodrigo Fonseca Based partly on lecture notes by David Mazières, Phil Levis, John Janno< Last Class CLOSED Passive open Close Close LISTEN Introduction to TCP
More informationTCP Congestion Control 65KB W
TCP Congestion Control 65KB W TO 3DA 3DA TO 0.5 0.5 0.5 0.5 3 3 1 SS SS CA SS CA TCP s Congestion Window Maintenance TCP maintains a congestion window (cwnd), based on packets Sender s window is limited
More informationTCP Congestion Control
6.033, Spring 2014 TCP Congestion Control Dina Katabi & Sam Madden nms.csail.mit.edu/~dina Sharing the Internet How do you manage resources in a huge system like the Internet, where users with different
More informationCongestion Control. Brighten Godfrey CS 538 January Based in part on slides by Ion Stoica
Congestion Control Brighten Godfrey CS 538 January 31 2018 Based in part on slides by Ion Stoica Announcements A starting point: the sliding window protocol TCP flow control Make sure receiving end can
More informationTransport Layer (Congestion Control)
Transport Layer (Congestion Control) Where we are in the Course Moving on up to the Transport Layer! Application Transport Network Link Physical CSE 461 University of Washington 2 Congestion Collapse Congestion
More informationThe Transport Layer Reliability
The Transport Layer Reliability CS 3, Lecture 7 http://www.cs.rutgers.edu/~sn4/3-s9 Srinivas Narayana (slides heavily adapted from text authors material) Quick recap: Transport Provide logical communication
More informationCongestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015
1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion
More informationCongestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014
1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion
More informationOverview. TCP congestion control Computer Networking. TCP modern loss recovery. TCP modeling. TCP Congestion Control AIMD
Overview 15-441 Computer Networking Lecture 9 More TCP & Congestion Control TCP congestion control TCP modern loss recovery TCP modeling Lecture 9: 09-25-2002 2 TCP Congestion Control Changes to TCP motivated
More informationCMSC 417. Computer Networks Prof. Ashok K Agrawala Ashok Agrawala. October 11, 2018
CMSC 417 Computer Networks Prof. Ashok K Agrawala 2018 Ashok Agrawala Message, Segment, Packet, and Frame host host HTTP HTTP message HTTP TCP TCP segment TCP router router IP IP packet IP IP packet IP
More informationComputer Networks. Sándor Laki ELTE-Ericsson Communication Networks Laboratory
Computer Networks Sándor Laki ELTE-Ericsson Communication Networks Laboratory ELTE FI Department Of Information Systems lakis@elte.hu http://lakis.web.elte.hu Based on the slides of Laurent Vanbever. Further
More informationThe Transport Layer Congestion control in TCP
CPSC 360 Network Programming The Transport Layer Congestion control in TCP Michele Weigle Department of Computer Science Clemson University mweigle@cs.clemson.edu http://www.cs.clemson.edu/~mweigle/courses/cpsc360
More informationCongestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014
1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion
More informationTCP Congestion Control
1 TCP Congestion Control Onwutalobi, Anthony Claret Department of Computer Science University of Helsinki, Helsinki Finland onwutalo@cs.helsinki.fi Abstract This paper is aimed to discuss congestion control
More informationPage 1. Review: Internet Protocol Stack. Transport Layer Services EEC173B/ECS152C. Review: TCP. Transport Layer: Connectionless Service
EEC7B/ECS5C Review: Internet Protocol Stack Review: TCP Application Telnet FTP HTTP Transport Network Link Physical bits on wire TCP LAN IP UDP Packet radio Do you remember the various mechanisms we have
More informationTCP Revisited CONTACT INFORMATION: phone: fax: web:
TCP Revisited CONTACT INFORMATION: phone: +1.301.527.1629 fax: +1.301.527.1690 email: whitepaper@hsc.com web: www.hsc.com PROPRIETARY NOTICE All rights reserved. This publication and its contents are proprietary
More informationCS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007
CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 Question 344 Points 444 Points Score 1 10 10 2 10 10 3 20 20 4 20 10 5 20 20 6 20 10 7-20 Total: 100 100 Instructions: 1. Question
More informationUNIT IV -- TRANSPORT LAYER
UNIT IV -- TRANSPORT LAYER TABLE OF CONTENTS 4.1. Transport layer. 02 4.2. Reliable delivery service. 03 4.3. Congestion control. 05 4.4. Connection establishment.. 07 4.5. Flow control 09 4.6. Transmission
More informationTCP: Flow and Error Control
1 TCP: Flow and Error Control Required reading: Kurose 3.5.3, 3.5.4, 3.5.5 CSE 4213, Fall 2006 Instructor: N. Vlajic TCP Stream Delivery 2 TCP Stream Delivery unlike UDP, TCP is a stream-oriented protocol
More informationPage 1. Review: Internet Protocol Stack. Transport Layer Services. Design Issue EEC173B/ECS152C. Review: TCP
EEC7B/ECS5C Review: Internet Protocol Stack Review: TCP Application Telnet FTP HTTP Transport Network Link Physical bits on wire TCP LAN IP UDP Packet radio Transport Layer Services Design Issue Underlying
More informationThe flow of data must not be allowed to overwhelm the receiver
Data Link Layer: Flow Control and Error Control Lecture8 Flow Control Flow and Error Control Flow control refers to a set of procedures used to restrict the amount of data that the sender can send before
More informationECE697AA Lecture 3. Today s lecture
ECE697AA Lecture 3 Transport Layer: TCP and UDP Tilman Wolf Department of Electrical and Computer Engineering 09/09/08 Today s lecture Transport layer User datagram protocol (UDP) Reliable data transfer
More informationCSCD 330 Network Programming Winter 2015
CSCD 330 Network Programming Winter 2015 Lecture 11a Transport Layer Reading: Chapter 3 Some Material in these slides from J.F Kurose and K.W. Ross All material copyright 1996-2007 1 Chapter 3 Sections
More informationTransport Protocols and TCP
Transport Protocols and TCP Functions Connection establishment and termination Breaking message into packets Error recovery ARQ Flow control Multiplexing, de-multiplexing Transport service is end to end
More informationThe Present and Future of Congestion Control. Mark Handley
The Present and Future of Congestion Control Mark Handley Outline Purpose of congestion control The Present: TCP s congestion control algorithm (AIMD) TCP-friendly congestion control for multimedia Datagram
More informationSequence Number. Acknowledgment Number. Data
CS 455 TCP, Page 1 Transport Layer, Part II Transmission Control Protocol These slides are created by Dr. Yih Huang of George Mason University. Students registered in Dr. Huang's courses at GMU can make
More informationTCP. CSU CS557, Spring 2018 Instructor: Lorenzo De Carli (Slides by Christos Papadopoulos, remixed by Lorenzo De Carli)
TCP CSU CS557, Spring 2018 Instructor: Lorenzo De Carli (Slides by Christos Papadopoulos, remixed by Lorenzo De Carli) 1 Sources Fall and Stevens, TCP/IP Illustrated Vol. 1, 2nd edition Congestion Avoidance
More informationLecture 8. TCP/IP Transport Layer (2)
Lecture 8 TCP/IP Transport Layer (2) Outline (Transport Layer) Principles behind transport layer services: multiplexing/demultiplexing principles of reliable data transfer learn about transport layer protocols
More informationComputer Network Fundamentals Spring Week 10 Congestion Control Andreas Terzis
Computer Network Fundamentals Spring 2008 Week 10 Congestion Control Andreas Terzis Outline Congestion Control TCP Congestion Control CS 344/Spring08 2 What We Know We know: How to process packets in a
More informationThe GBN sender must respond to three types of events:
Go-Back-N (GBN) In a Go-Back-N (GBN) protocol, the sender is allowed to transmit several packets (when available) without waiting for an acknowledgment, but is constrained to have no more than some maximum
More informationTopics. TCP sliding window protocol TCP PUSH flag TCP slow start Bulk data throughput
Topics TCP sliding window protocol TCP PUSH flag TCP slow start Bulk data throughput 2 Introduction In this chapter we will discuss TCP s form of flow control called a sliding window protocol It allows
More informationCongestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015
Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 1 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion
More informationAnswers to Sample Questions on Transport Layer
Answers to Sample Questions on Transport Layer 1) Which protocol Go-Back-N or Selective-Repeat - makes more efficient use of network bandwidth? Why? Answer: Selective repeat makes more efficient use of
More informationCongestion Collapse in the 1980s
Congestion Collapse Congestion Collapse in the 1980s Early TCP used fixed size window (e.g., 8 packets) Initially fine for reliability But something happened as the ARPANET grew Links stayed busy but transfer
More informationECS-087: Mobile Computing
ECS-087: Mobile Computing TCP over wireless TCP and mobility Most of the Slides borrowed from Prof. Sridhar Iyer s lecture IIT Bombay Diwakar Yagyasen 1 Effect of Mobility on Protocol Stack Application:
More informationERROR AND FLOW CONTROL. Lecture: 10 Instructor Mazhar Hussain
ERROR AND FLOW CONTROL Lecture: 10 Instructor Mazhar Hussain 1 FLOW CONTROL Flow control coordinates the amount of data that can be sent before receiving acknowledgement It is one of the most important
More informationData Link Layer, Part 5 Sliding Window Protocols. Preface
Data Link Layer, Part 5 Sliding Window Protocols These slides are created by Dr. Yih Huang of George Mason University. Students registered in Dr. Huang's courses at GMU can make a single machine-readable
More informationCS 349/449 Internet Protocols Final Exam Winter /15/2003. Name: Course:
CS 349/449 Internet Protocols Final Exam Winter 2003 12/15/2003 Name: Course: Instructions: 1. You have 2 hours to finish 2. Question 9 is only for 449 students 3. Closed books, closed notes. Write all
More informationAdvanced Congestion Control (Hosts)
Advanced Congestion Control (Hosts) 14-740: Fundamentals of Computer Networks Bill Nace Material from Computer Networking: A Top Down Approach, 5 th edition. J.F. Kurose and K.W. Ross Congestion Control
More informationCS 43: Computer Networks. 19: TCP Flow and Congestion Control October 31, Nov 2, 2018
CS 43: Computer Networks 19: TCP Flow and Congestion Control October 31, Nov 2, 2018 Five-layer Internet Model Application: the application (e.g., the Web, Email) Transport: end-to-end connections, reliability
More informationPerformance Consequences of Partial RED Deployment
Performance Consequences of Partial RED Deployment Brian Bowers and Nathan C. Burnett CS740 - Advanced Networks University of Wisconsin - Madison ABSTRACT The Internet is slowly adopting routers utilizing
More information