Episode 4. Flow and Congestion Control. Baochun Li Department of Electrical and Computer Engineering University of Toronto

1 Episode 4. Flow and Congestion Control Baochun Li Department of Electrical and Computer Engineering University of Toronto

2 Recall the previous episode: detailed design principles in the link layer and the network layer. Topic of this episode: design principles in the end-to-end layer, and congestion control, a network system design issue.

3 Readings: Saltzer 7.5.6, 7.6; Keshav Chapter 9.7, …; CUBIC paper (critique 2)

4 Design Principles in the End-to-End Layer

6 The network layer provides a useful but not completely dependable best-effort communication environment that will deliver data segments to any destination, but with no guarantees on the order of arrival, certainty of arrival, or the accuracy of content. This is too hostile for most applications!

7 The job of the end-to-end layer is to create a more comfortable communication environment that has the features of performance, reliability, and certainty that an application needs. Problem: different applications have different needs, but they tend to fall into classes of similar requirements. For each class it is possible to design a broadly useful protocol, called the transport protocol. A transport protocol operates between two attachment points of a network (a client and a service), with the goal of moving either messages or a stream of data between them while providing a particular set of assurances.

8 Transport Protocol Design

9 Sending multi-segment messages. The simplest method of sending a multi-segment message end-to-end: send one segment, wait for the receiver to acknowledge it, then send the second segment, and so on. This is known as the lock-step protocol: it takes N round-trip times to send N segments! [Timing diagram: the sender sends segment 1, waits for acknowledgment 1, sends segment 2, waits for acknowledgment 2, and so on for all N segments; the receiver accepts and acknowledges each segment before the next one is sent.]

10 Overlapping transmissions. Adopt the pipelining principle: as soon as the first segment has been sent, immediately send the next ones without waiting for acknowledgments. When the pipeline is completely filled, there may be several segments in the network at once: N segments require N transmission times + 1 RTT. [Timing diagram: the sender sends segments 1, 2, 3, … back to back; acknowledgments 1 through N return while later segments are still being sent, and the transfer is done when ACK N arrives.]
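
To make the difference concrete, here is a minimal back-of-the-envelope sketch in Python that compares the two completion-time estimates from these slides: roughly N round-trip times for lock-step versus N transmission times plus one RTT for pipelining. The link rate, segment size, and RTT are illustrative assumptions, not values from the lecture.

```python
# Rough completion-time estimates for sending N segments (example numbers only).

N = 1000                      # number of segments
segment_bits = 1500 * 8       # 1500-byte segments
link_rate = 10e6              # 10 Mbit/s bottleneck (assumed)
rtt = 0.05                    # 50 ms round-trip time (assumed)

transmission_time = segment_bits / link_rate

# Lock-step: one segment per round trip (RTT plus the time to put it on the wire).
lock_step = N * (transmission_time + rtt)

# Pipelined: keep the pipe full; pay one RTT only once.
pipelined = N * transmission_time + rtt

print(f"lock-step: {lock_step:.2f} s, pipelined: {pipelined:.2f} s")
```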

17 But things can go wrong: lost packets. One or more packets or acknowledgments may be lost along the way. The sender needs to maintain a list of segments sent; as each acknowledgment gets back, the sender checks that segment off its list. After sending the last segment, the sender sets a timer to expire a little more than one round-trip time in the future. If, upon receiving an acknowledgment, the list of missing acknowledgments becomes empty, all is well. Otherwise, the sender resends each segment still on the list, starts another timer, and repeats the sequence until every segment is acknowledged (or the retry limit is reached).
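
As a rough illustration of this bookkeeping, the sketch below (Python; the names, and the send_segment/recv_acks hooks, are assumptions made for the example, not part of any real protocol stack) keeps the set of unacknowledged segments, resends whatever is still outstanding when the timer fires, and gives up after a retry limit.

```python
import time

def send_with_retries(segments, send_segment, recv_acks,
                      rtt_estimate=0.05, retry_limit=5):
    outstanding = set(range(len(segments)))          # list of segments sent
    for attempt in range(retry_limit):
        for seq in sorted(outstanding):              # (re)send everything still missing
            send_segment(seq, segments[seq])
        deadline = time.time() + rtt_estimate * 1.1  # a little more than one RTT
        while time.time() < deadline and outstanding:
            for seq in recv_acks():                  # check arriving ACKs off the list
                outstanding.discard(seq)
        if not outstanding:                          # list empty: all is well
            return True
    return False                                     # retry limit reached
```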

25 But things can go wrong: bottlenecks. When the sender generates data, the network can transmit it faster than the (slower) receiver can accept it. The transport protocol needs to include some method of controlling the rate at which the sender generates data, called flow control. A basic intuitive idea: the sender starts by asking the receiver how much data the receiver can handle; the response from the receiver is known as a window. The sender asks for permission to send, the receiver responds by quoting a window size, and the sender then sends that much data and waits until it receives permission to send more.

26 Flow control with a fixed window. [Timing diagram: the sender asks "may I send?"; the receiver opens a 4-segment window and replies "yes, 4 segments". The sender sends segments 1-4 and waits; the receiver buffers each segment and acknowledges it. Once the receiver has finished processing segments 1-4 it reopens the window ("send 4 more"), and the sender continues with segments 5 and 6.]

29 Sliding windows. As soon as it has freed up a segment buffer, the receiver can immediately send permission for a window that is one segment larger, either by sending a separate message or, if there happens to be an ACK ready to go, by piggy-backing on that ACK. The sender keeps track of how much window space is left, and increases that number whenever additional permission arrives.
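
A minimal sketch of that sender-side bookkeeping might look like the following (Python; the class and field names are illustrative assumptions, not the lecture's notation): the sender spends window space as it sends and regains it as permissions, possibly piggy-backed on ACKs, arrive.

```python
class SlidingWindowSender:
    """Illustrative sender-side sliding-window bookkeeping."""

    def __init__(self, initial_window):
        self.window_space = initial_window   # segments we may still send
        self.next_seq = 0

    def can_send(self):
        return self.window_space > 0

    def on_send(self):
        """Called when one segment is handed to the network."""
        assert self.can_send()
        self.window_space -= 1
        self.next_seq += 1

    def on_permission(self, extra_segments):
        """Called when the receiver grants more window space,
        either in a separate message or piggy-backed on an ACK."""
        self.window_space += extra_segments
```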

33 Self-pacing. Once the sender fills a sliding window, it cannot send the next data element until the acknowledgment of the oldest data element in the window returns. At the same time, the receiver cannot generate acknowledgments any faster than the network can deliver data elements. Because of these two considerations, the rate at which the window slides adjusts itself automatically to be equal to the bottleneck data rate!

41 Appropriate window size. Do we still need to know the network round-trip time at the sender? Yes: window size >= round-trip time x bottleneck data rate, the bandwidth-delay product. If a too-large round-trip-time estimate is used in setting the window, the resulting excessive window size will simply increase the length of packet forwarding queues in the network. Those longer queues will increase the transit time, and the increase will lead the sender to think that it needs an even larger window: a positive feedback! The estimate therefore needs to err on the side of being too small.
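
For a concrete feel for the bandwidth-delay product, here is a one-off calculation in Python; the RTT, link rate, and segment size are example assumptions, not numbers from the slides.

```python
# Window sizing from the bandwidth-delay product:
#   window size >= round-trip time x bottleneck data rate

rtt = 0.05                       # 50 ms round-trip time (assumed)
bottleneck_rate = 100e6 / 8      # 100 Mbit/s expressed in bytes per second (assumed)

bdp_bytes = rtt * bottleneck_rate
segments = bdp_bytes / 1500      # assuming 1500-byte segments

print(f"window >= {bdp_bytes:.0f} bytes (~{segments:.0f} segments)")
# -> window >= 625000 bytes (~417 segments)
```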

42 Congestion control: a network-wide problem of managing shared resources

45 Shared resources: everywhere in a system. Resource sharing examples in systems: many virtual processors (threads) sharing a few physical processors using a thread manager; a multilevel memory manager creating the illusion of large, fast virtual memories by combining a small, fast shared memory with large, slow storage devices. In networks, the resource that is shared is a set of communication links and the supporting packet forwarding switches. They are geographically and administratively distributed, so managing them is more complex!

48 Analogy: supermarket vs. packet switch. Queues exist to manage the problem that packets may arrive at a switch at a time when the outgoing link is already busy transmitting another packet, just like checkout lines in the supermarket. Any time there is a shared resource, and the demand for that resource comes from several statistically independent sources, there will be fluctuations in the arrival of load, and thus fluctuations in the length of the queue and in the time spent waiting for service in the queue. When the offered load exceeds the capacity of a resource, the resource is overloaded.

51 How long will overload persist? If the duration of overload is comparable to the service time (the time in a supermarket to serve one customer, or the time for a packet forwarding switch to handle one packet), it is normal. In this case, a queue handles short bursts of too much demand by time-averaging them with adjacent periods when there is excess capacity. If overload persists for a time significantly longer than the service time, there begins to develop a risk that the system will fail to meet some specification, such as maximum delay. When this occurs, the resource is said to be congested. If congestion is chronic, the length of the queue will grow without bound.

53 The stability of offered load is another factor in the frequency and duration of congestion. When the load on a resource is aggregated from a large number of statistically independent small sources, averaging can reduce the frequency and duration of load peaks. When the load comes from a small number of large sources, even if the sources are independent, the probability that they all demand service at about the same time can be high enough that congestion is frequent or long-lasting.

54 Congestion collapse. Competition for a resource may lead to waste of that resource. This is counter-intuitive, but the supermarket analogy can help understand it: customers who are tired of waiting may just walk out, leaving filled shopping carts behind. Someone has to put the goods from the abandoned carts back on the shelves, so one or two of the checkout clerks leave their registers to do so. The rate of sales being rung up drops while they are away, the queues at the remaining registers grow longer, causing more people to abandon their carts. Eventually, the clerks will be doing nothing but restocking.

56 Self-sustaining nature of congestion collapse. Once temporary congestion induces a collapse, even if the offered load drops back to a level that the resource can handle, the already induced waste rate can continue to exceed the capacity of the resource. This will cause it to continue to waste the resource and remain congested indefinitely. [Figure: useful work done vs. offered load, comparing an unlimited resource, a limited resource with no waste (levelling off at the capacity of the limited resource), and congestion collapse, where useful work falls off as offered load keeps growing.]
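
To see the shape of those curves, here is a deliberately crude toy model in Python. The formulas are assumptions made purely for this sketch (they are not from the lecture): wasted work grows once the offered load exceeds capacity and eats into the capacity left for useful work.

```python
capacity = 100.0   # maximum useful work the limited resource can do per unit time

def no_waste(offered):
    # Limited resource with no waste: useful work saturates at capacity.
    return min(offered, capacity)

def with_collapse(offered, waste_per_unit_overload=2.0):
    # Toy assumption: each unit of overload induces some wasted work
    # (retransmissions, abandoned carts) that consumes capacity.
    waste = max(0.0, offered - capacity) * waste_per_unit_overload
    return max(0.0, min(offered, capacity - waste))

for offered in (50, 100, 120, 150):
    print(offered, no_waste(offered), with_collapse(offered))
```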

59 Primary goal of resource management: avoid congestion collapse! Either by increasing the capacity of the resource, or by reducing the offered load. There is a need to move quickly to a state in which the load is less than the capacity of the resource. But when the offered load is reduced, the amount reduced does not really go away: it is just deferred to a later time at the source. The source is still averaging periods of overload with periods of excess capacity, but over a longer period of time.

62 How to increase capacity or reduce load? It is necessary to provide feedback to one or more control points: an entity that determines the amount of resource that is available, or the load being offered. A congestion control system is fundamentally a feedback system, and a delay in the feedback path can lead to oscillations in load.
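
The oscillation point can be illustrated with a tiny simulation (Python; the capacity, threshold, delay, and increase/decrease rules are all assumptions invented for this sketch): the sender adjusts its rate based on a stale view of the queue, so it keeps overshooting and backing off.

```python
# Toy congestion feedback loop with delayed feedback (illustrative only).

capacity = 10.0        # service rate of the shared resource
threshold = 20.0       # queue length that counts as "congested"
delay = 5              # feedback takes this many steps to reach the sender

rate = 1.0
queue = 0.0
history = [0.0] * delay    # the sender sees the queue as it was `delay` steps ago

for step in range(60):
    queue = max(0.0, queue + rate - capacity)
    congested_then = history.pop(0) > threshold   # stale feedback signal
    history.append(queue)
    if congested_then:
        rate = rate / 2                           # back off
    else:
        rate = rate + 1                           # probe for more capacity
    print(f"step {step:2d}  rate {rate:5.1f}  queue {queue:6.1f}")
```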

66 The supermarket and call centre analogies. In a supermarket, a store manager can watch the queues at the checkout lines: whenever there are more than two or three customers in any line, the manager calls for staff elsewhere in the store to drop what they are doing and temporarily take stations as checkout clerks. This effectively increases capacity. When you call customer service, you may hear an automatic response message: "Your call is important to us. It will be 30 minutes before we can answer." This may lead some callers to hang up and try again at a different time, which effectively decreases load. Both may lead to oscillations.

67 Resource Management in Networks

70 Shared resources in a computer network: the communication links, and the processing and buffering capacity of the packet forwarding switches.

73 Main challenges, part 1. There is more than one resource: even a small number of resources can be used up in a large number of different ways, which is complex to keep track of, and there can be dynamic interactions among different resources; as one nears capacity it may push back on another, which may push back on yet another, which may push back on the first one! It is also easy to induce congestion collapse: as queues for a particular communication link grow, delays grow; when queuing delays become too long, the timers of higher-layer protocols begin to expire and trigger retransmissions of the delayed packets, and the retransmitted packets join the long queues and waste capacity.

75 No, we cannot install more buffers! As memory gets cheaper, the idea is tempting, but it doesn't work. Suppose memory is so cheap that a packet forwarder can be equipped with an infinite buffer, which can absorb an unlimited amount of overload. As more of the buffer is used, the queuing delay grows; at some point the queuing delay exceeds the timeouts of end-to-end protocols and packets are retransmitted. The offered load is now larger, so the queue grows even longer, and the process becomes self-sustaining. The infinite buffer does not solve the problem, it makes it worse!

77 Main challenges, part 2. There are limited options to expand capacity: capacity is determined by physical facilities (e.g., wireless spectrum). One can try sending some queued packets via an alternate path, but such strategies are too complex to work well. Reducing the offered load (the demand) is the only realistic way.

86 Main challenges, part 3. The options to reduce load are awkward. The control point for the offered load is too far away: the feedback path to that point may be long, so by the time the feedback signal gets there the sender may have stopped sending, and the feedback may get lost. The control point must be capable of reducing its offered load; video streaming protocols are not able to do this! The control point must also be willing to cooperate: the packet forwarder in the network layer may be under a different administration than the control point in the end-to-end layer, and the control point may be more interested in keeping its offered load equal to its intended load, in the hope of capturing more of the capacity in the face of competition (think BitTorrent).

87 Possible ideas to address these challenges

90 Overprovisioning. Basic idea: configure each link of the network to have 125% or 200% as much capacity as the offered load at the busiest minute of the day. This works best on interior links of a large network, where no individual client represents more than a tiny fraction of the load, because the average load offered by a large number of statistically independent sources is relatively stable. Problems: odd events can disrupt statistical independence; overprovisioning on one link will move the congestion to another; at the edge, statistical averaging stops working (the flash crowd); and user usage patterns may adapt to the additional capacity.

93 Pricing in a market: the invisible hand. Since network resources are just another commodity with limited availability, it should be possible to use pricing as a congestion control mechanism. If demand for a resource temporarily exceeds its capacity, clients will bid up the price; the increased price will cause some clients to defer their use of the resource until a time when it is cheaper, thereby reducing offered load, and it will also induce additional suppliers to provide more capacity. Challenges: how do we make it work on the short time scales of congestion? Clients need a way to predict the costs in the short term, too, and there has to be a minimal barrier to entry for alternate suppliers.

94 How do we address these challenges? Decentralized schemes are extremely scalable

95 Case in point: the Internet

96 Cross-layer Cooperation: Feedback

100 Cross-layer feedback: basic idea. The packet forwarder that notices congestion provides feedback to one or more end-to-end layer sources, and the end-to-end source responds by reducing its offered load. The best solution: the packet forwarder simply discards the packet. Simple and reliable!

109 Which packet to discard? The choice is not obvious. The simplest strategy, tail drop, limits the size of the queue: any packet that arrives when the queue is full gets discarded. A better technique, called random drop, may be to choose a victim from the queue at random; the sources that are contributing the most to congestion are the most likely to receive the feedback. Another refinement, called early drop, begins dropping packets before the queue is completely full, in the hope of alerting the source sooner. The goal of early drop is to start reducing the offered load as soon as the possibility of congestion is detected, rather than waiting until congestion is confirmed: avoidance rather than recovery. Random drop + early drop: random early detection (RED).
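
A minimal sketch of a RED-style drop decision might look like the following (Python; the thresholds, the averaging weight, and the drop-probability rule are illustrative assumptions in the spirit of RED, not the exact algorithm from the RED paper): the forwarder keeps a smoothed average of the queue length, never drops early below a low threshold, always drops above a high threshold, and drops with increasing probability in between.

```python
import random

class RedQueue:
    """Illustrative random-early-detection-style drop decision."""

    def __init__(self, min_thresh=5, max_thresh=15, max_p=0.1, weight=0.2):
        self.min_thresh = min_thresh      # below this average: never drop early
        self.max_thresh = max_thresh      # above this average: always drop
        self.max_p = max_p                # drop probability at max_thresh
        self.weight = weight              # EWMA weight for the average queue length
        self.avg = 0.0

    def should_drop(self, current_queue_len):
        # Smooth the instantaneous queue length so short bursts are tolerated.
        self.avg = (1 - self.weight) * self.avg + self.weight * current_queue_len
        if self.avg < self.min_thresh:
            return False
        if self.avg >= self.max_thresh:
            return True
        # In between: drop with probability rising linearly toward max_p.
        frac = (self.avg - self.min_thresh) / (self.max_thresh - self.min_thresh)
        return random.random() < frac * self.max_p
```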


More information

Flow Control. Flow control problem. Other considerations. Where?

Flow Control. Flow control problem. Other considerations. Where? Flow control problem Flow Control An Engineering Approach to Computer Networking Consider file transfer Sender sends a stream of packets representing fragments of a file Sender should try to match rate

More information

TCP : Fundamentals of Computer Networks Bill Nace

TCP : Fundamentals of Computer Networks Bill Nace TCP 14-740: Fundamentals of Computer Networks Bill Nace Material from Computer Networking: A Top Down Approach, 6 th edition. J.F. Kurose and K.W. Ross Administrivia Lab #1 due now! Reminder: Paper Review

More information

Lecture 14: Congestion Control"

Lecture 14: Congestion Control Lecture 14: Congestion Control" CSE 222A: Computer Communication Networks Alex C. Snoeren Thanks: Amin Vahdat, Dina Katabi Lecture 14 Overview" TCP congestion control review XCP Overview 2 Congestion Control

More information

TCP/IP-2. Transmission control protocol:

TCP/IP-2. Transmission control protocol: TCP/IP-2 Transmission control protocol: TCP and IP are the workhorses in the Internet. In this section we first discuss how TCP provides reliable, connectionoriented stream service over IP. To do so, TCP

More information

COMP/ELEC 429/556 Introduction to Computer Networks

COMP/ELEC 429/556 Introduction to Computer Networks COMP/ELEC 429/556 Introduction to Computer Networks The TCP Protocol Some slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica, Hui Zhang T. S. Eugene Ng eugeneng at cs.rice.edu

More information

Chapter 3: Transport Layer Part A

Chapter 3: Transport Layer Part A Chapter 3: Transport Layer Part A Course on Computer Communication and Networks, CTH/GU The slides are adaptation of the slides made available by the authors of the course s main textbook 3: Transport

More information

User Datagram Protocol (UDP):

User Datagram Protocol (UDP): SFWR 4C03: Computer Networks and Computer Security Feb 2-5 2004 Lecturer: Kartik Krishnan Lectures 13-15 User Datagram Protocol (UDP): UDP is a connectionless transport layer protocol: each output operation

More information

Page 1. Review: Internet Protocol Stack. Transport Layer Services. Design Issue EEC173B/ECS152C. Review: TCP

Page 1. Review: Internet Protocol Stack. Transport Layer Services. Design Issue EEC173B/ECS152C. Review: TCP EEC7B/ECS5C Review: Internet Protocol Stack Review: TCP Application Telnet FTP HTTP Transport Network Link Physical bits on wire TCP LAN IP UDP Packet radio Transport Layer Services Design Issue Underlying

More information

Computer Networking. Queue Management and Quality of Service (QOS)

Computer Networking. Queue Management and Quality of Service (QOS) Computer Networking Queue Management and Quality of Service (QOS) Outline Previously:TCP flow control Congestion sources and collapse Congestion control basics - Routers 2 Internet Pipes? How should you

More information

Name Student ID Department/Year. Midterm Examination. Introduction to Computer Networks Class#: 901 E31110 Fall 2006

Name Student ID Department/Year. Midterm Examination. Introduction to Computer Networks Class#: 901 E31110 Fall 2006 Name Student ID Department/Year Midterm Examination Introduction to Computer Networks Class#: 901 E31110 Fall 2006 9:20-11:00 Tuesday November 14, 2006 Prohibited 1. You are not allowed to write down the

More information

Flow and Congestion Control Marcos Vieira

Flow and Congestion Control Marcos Vieira Flow and Congestion Control 2014 Marcos Vieira Flow Control Part of TCP specification (even before 1988) Goal: not send more data than the receiver can handle Sliding window protocol Receiver uses window

More information

Conges'on. Last Week: Discovery and Rou'ng. Today: Conges'on Control. Distributed Resource Sharing. Conges'on Collapse. Conges'on

Conges'on. Last Week: Discovery and Rou'ng. Today: Conges'on Control. Distributed Resource Sharing. Conges'on Collapse. Conges'on Last Week: Discovery and Rou'ng Provides end-to-end connectivity, but not necessarily good performance Conges'on logical link name Michael Freedman COS 461: Computer Networks Lectures: MW 10-10:50am in

More information

Outline. CS5984 Mobile Computing

Outline. CS5984 Mobile Computing CS5984 Mobile Computing Dr. Ayman Abdel-Hamid Computer Science Department Virginia Tech Outline Review Transmission Control Protocol (TCP) Based on Behrouz Forouzan, Data Communications and Networking,

More information

TCP. CSU CS557, Spring 2018 Instructor: Lorenzo De Carli (Slides by Christos Papadopoulos, remixed by Lorenzo De Carli)

TCP. CSU CS557, Spring 2018 Instructor: Lorenzo De Carli (Slides by Christos Papadopoulos, remixed by Lorenzo De Carli) TCP CSU CS557, Spring 2018 Instructor: Lorenzo De Carli (Slides by Christos Papadopoulos, remixed by Lorenzo De Carli) 1 Sources Fall and Stevens, TCP/IP Illustrated Vol. 1, 2nd edition Congestion Avoidance

More information

Lecture 15: Transport Layer Congestion Control

Lecture 15: Transport Layer Congestion Control Lecture 15: Transport Layer Congestion Control COMP 332, Spring 2018 Victoria Manfredi Acknowledgements: materials adapted from Computer Networking: A Top Down Approach 7 th edition: 1996-2016, J.F Kurose

More information

Two approaches to Flow Control. Cranking up to speed. Sliding windows in action

Two approaches to Flow Control. Cranking up to speed. Sliding windows in action CS314-27 TCP: Transmission Control Protocol IP is an unreliable datagram protocol congestion or transmission errors cause lost packets multiple routes may lead to out-of-order delivery If senders send

More information

Reliable Transport I: Concepts and TCP Protocol

Reliable Transport I: Concepts and TCP Protocol Reliable Transport I: Concepts and TCP Protocol Stefano Vissicchio UCL Computer Science COMP0023 Today Transport Concepts Layering context Transport goals Transport mechanisms and design choices TCP Protocol

More information

Lecture 8. TCP/IP Transport Layer (2)

Lecture 8. TCP/IP Transport Layer (2) Lecture 8 TCP/IP Transport Layer (2) Outline (Transport Layer) Principles behind transport layer services: multiplexing/demultiplexing principles of reliable data transfer learn about transport layer protocols

More information

Operating Systems and Networks. Network Lecture 10: Congestion Control. Adrian Perrig Network Security Group ETH Zürich

Operating Systems and Networks. Network Lecture 10: Congestion Control. Adrian Perrig Network Security Group ETH Zürich Operating Systems and Networks Network Lecture 10: Congestion Control Adrian Perrig Network Security Group ETH Zürich Where we are in the Course More fun in the Transport Layer! The mystery of congestion

More information