Shrinking the Timescales at Which Congestion-control Operates


Vishnu Konda and Jasleen Kaur
Technical Report # TR8-
Department of Computer Science
University of North Carolina at Chapel Hill

Abstract

TCP congestion-control is fairly inefficient in achieving high throughput in high-speed and dynamic-bandwidth environments. The main culprit is the slow bandwidth-search process used by TCP, which may take up to several thousands of round-trip times (RTTs) in searching for and acquiring the end-to-end spare bandwidth. Even the recently-proposed high-speed transport protocols may take hundreds of RTTs for this. In this paper, we design a new approach for congestion-control that allows TCP connections to boldly search for, and adapt to, the available bandwidth within a single RTT. Our approach relies on carefully orchestrated packet sending times and estimates the available bandwidth based on the delays experienced by these packets. We instantiate our new protocol, referred to as RAPID, using mechanisms that promote efficiency, queue-friendliness, and fairness. Our experimental evaluations on gigabit networks indicate that RAPID: (i) converges to an updated value of bandwidth within 1-4 RTTs; (ii) helps maintain fairly small queues; (iii) has negligible impact on regular TCP traffic; and (iv) exhibits excellent intra-protocol fairness among co-existing RAPID transfers. The rate-based design allows RAPID to be the first protocol that truly does not suffer from RTT-unfairness.

1 Introduction

Congestion control can easily be listed among the top networking problems of the past two decades. And indeed, why not? A congestion-control protocol has no simple task: it has to adaptively discover the end-to-end spare bandwidth available to a transfer in a quick and non-intrusive manner. Simultaneously achieving these properties turns out to be a significant challenge, especially for an end-to-end protocol that receives no explicit feedback from routers/switches. Indeed, the dominant end-to-end transport protocol, TCP NewReno [1], has been shown to be abysmally slow in discovering the spare bandwidth, especially in high-speed networks and on paths that experience dynamic bandwidth. Several alternate protocols have been proposed to address this limitation. However, as discussed in Section 2, most of these protocols struggle to remain non-intrusive to other network traffic while achieving speed; consequently, these designs are still quite sluggish in probing for spare bandwidth. In particular, we show that even the so-called high-speed protocols may take hundreds-to-thousands of round-trip times (RTTs) to converge to a stable sending rate in gigabit networks. In this paper, inspired by recent advances in the field of bandwidth estimation, we propose the idea that the sluggishness of transport protocols can be eliminated, without overloading the network, if we limit the impact of probing for spare bandwidth. In particular, we rely on carefully orchestrated packet sending times and use the relative delays experienced by the packets to probe for an exponentially-wide range of rates within a single RTT; the impact of probing is limited by ensuring that the average sending rate is not high. We use this idea to design a novel approach, referred to as RAPID Congestion Control (RAPID), that exhibits three significantly desirable characteristics. Most notably, it reduces the time it takes a transport protocol to acquire freshly-available bandwidth by more than an order of magnitude.
Equally significantly, by relying on a delay-based congestion-detection strategy, the protocol ensures (i) that it is friendly to transfers that use regular loss-based TCP, and (ii) that packet queues at bottleneck links are small and transient. Finally, due to its speed, RAPID also exhibits excellent intra-protocol fairness properties at small-to-medium timescales, even in network environments with heterogeneous RTTs. Table 1 summarizes several notations used throughout the paper; we use the terms packets and segments interchangeably. In the rest of this paper, we describe the sluggishness of existing protocols and present our key insight in Section 2. We describe the RAPID protocol mechanisms in Section 3 and present performance evaluation results in Section 4. We conclude in Section 5.

avail-bw, AB | available bandwidth
ABest       | the AB estimate returned by the receiver
p-stream    | multi-rate probe stream
N           | the number of packets in a p-stream
P           | packet size
r_avg       | the average sending rate of a p-stream
r_i         | the sending rate of the (i+1)-th packet in a p-stream
m           | the ratio r_{i+1}/r_i, for all i in [1, N-1]
τ           | the duration over which an increase in AB is adopted

Table 1: Notations Used in the Paper

Protocol       | Search-step (per-RTT increase in probe-rate) | Feedback-metric          | Experimentally-observed time for acquiring AB = 1 Gbps
NewReno        | Additive Increase                            | Packet Loss              | 74 RTTs
HighSpeed [2]  | Multiplicative Increase                      | Packet Loss              | 25 RTTs
CUBIC [3, 4]   | Additive/Binary-search Increase              | Packet Loss              | (see Section 4.1)
FAST [5]       | Additive Increase                            | Packet Delays (and Loss) | 5 RTTs
RAPID          | Exponential Increase                         | Delays (and Loss)        | 4 RTTs

Table 2: Time taken to acquire an AB of 1 Gbps by different protocols (see Section 4.1)

2 Problem Formulation

2.1 The Problem: Looong Feedback Loop

To understand the sluggishness issue, consider an end-to-end congestion-control protocol in steady-state: the protocol continuously operates a 2-step AB-search cycle. In the search-step, it successively probes for (by sending at) larger data sending rates. For each rate probed at, it examines performance feedback, such as packet loss and high end-to-end delays, that arrives after an RTT-worth of delay; this helps it detect when the most recent sending rate was higher than the avail-bw. When such a rate is reached, the reduction-step of the protocol reduces the sending rate to a lower value and switches back to the search-step. The speed with which a single loop of this 2-step cycle is executed fundamentally determines how quickly a protocol can acquire and adapt to changes in the avail-bw. Note that the search-step can not be executed at a timescale smaller than an RTT: performance feedback can arrive no earlier than this time. Unfortunately, most existing protocols execute this step at timescales much larger than this. This is because, primarily driven by the goal of not overloading the network, all previously-proposed protocols adopt two limiting design features:

1. Only a single (larger) sending rate is probed for over a round-trip time. The protocol probes for a candidate sending rate and then waits for performance feedback, which arrives after an RTT-worth of delay. A new and larger sending rate (proberate) is then probed for only during the next RTT time interval. This is true for all previous protocols, including recent ones, such as HighSpeed TCP, FAST, Scalable, CUBIC, and PCP [2, 3, 4, 5, 6, 7, 8]. This legacy design decision is perhaps motivated by the fact that unless the single rate is deemed acceptable (not too high), other rates should not be probed for.

2. The new rate probed for (proberate) is not significantly larger than the previous sending rate (prevrate). This feature is adopted primarily to prevent a single transfer from overloading the network, in case the previous rate was quite close to the avail-bw. Existing protocols differ in how the new probing rate relates to the previous rate; for most protocols, the ratio proberate/prevrate is only slightly larger than 1 in high-speed networks. For instance, these two quantities are additively related in NewReno and FAST, as in: proberate = prevrate + α*MSS/RTT, where MSS is the maximum segment size allowed on the path, and α is set by default to 1 and 2 in the two protocols, respectively.
Scalable, HighSpeed, and CUBIC rely on a multiplicative relation, as in: proberate = γ*prevrate; however, γ is again restricted to small values. Different protocols differ in how the successive probing rates relate to the current rate, as well as in the performance measures they use as feedback. Table 2 summarizes these differences for some prominent protocols.
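To get a feel for why these per-RTT search-steps translate into long convergence times, the following back-of-the-envelope sketch (ours, not from the paper; the segment size, starting rate, and RTT are assumed values) counts the RTTs each search style needs to grow a probe rate from 1 Mbps to 1 Gbps:

```python
# Illustrative only: RTTs needed to grow a probe rate from 1 Mbps to 1 Gbps
# when a single new rate is probed per RTT (assumed: 1250 B segments, 100 ms RTT).
MSS = 1250 * 8          # segment size in bits (assumption)
RTT = 0.1               # seconds (assumption)
START, TARGET = 1e6, 1e9

rate, rtts = START, 0
while rate < TARGET:    # additive search-step: proberate = prevrate + MSS/RTT
    rate += MSS / RTT
    rtts += 1
print("additive increase:", rtts, "RTTs")                      # ~9990 RTTs

rate, rtts = START, 0
while rate < TARGET:    # multiplicative search-step with a small gamma
    rate *= 1.05
    rtts += 1
print("multiplicative increase (gamma=1.05):", rtts, "RTTs")   # ~142 RTTs
```

Even with a multiplicative step, the ratio of successive probe rates stays close to 1, so the search still spans well over a hundred RTTs.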

Figure 1: Illustration of a p-stream (here, t_i = P/r_i and t_avg = P/r_avg)

In addition to these two limitations, protocols that rely only on packet losses as an indicator of congestion suffer from yet another problem: these protocols may have to reach a sending rate much higher than the avail-bw (and oscillate several times around it) before stabilizing at it. This is because a loss-based protocol would need to fill up buffers in the bottleneck routers and suffer a loss before it can detect that it has acquired (and surpassed) the avail-bw. As a result of these design limitations, most protocols, even the recent ones designed for high-speed networks [2, 7, 5, 8, 3, 4], take a fairly long time to converge to the avail-bw. Table 2 lists, for different protocols, the experimentally-observed times taken by a single transfer for acquiring a spare capacity of 1 Gbps (say, after experiencing a packet loss). We find that even the fastest of high-speed protocols can take hundreds of RTTs to converge to the avail-bw.

2.2 Key Insight: Limit Probing Volume

We believe that all of the above design limitations can be done away with, without overloading a network or inducing buffer overflow, if one limits the volume (and impact) of probing for a larger sending rate. In fact, we claim that a protocol can boldly probe for an exponentially-wide range of candidate sending rates within a single RTT, if it: (i) uses carefully-orchestrated inter-packet gaps, and (ii) relies on relative packet delays for estimating avail-bw. Any associated overloading impact can be avoided by using the following guidelines:

1. Achieve a rapid AB search: Probe for an exponentially-wide range of candidate sending rates within a single RTT. However, send extremely small probes (of one packet each) at each candidate rate, in order to limit to very small timescales the overloading impact of the large rates.

2. Avoid persistently overloading the network path: Ensure that the average rate of packet transmission does not exceed the most-recently discovered estimate of avail-bw. This implies that some of the rates in the above-suggested exponential range will be smaller than this estimate, and some will be larger.

We use these ideas to design a new protocol, referred to as RAPID Congestion Control (RAPID), and show that it can adapt to fairly large changes in AB within 1-4 RTTs. Furthermore, RAPID can avoid persistently large router queues due to its primary focus on avoiding network overload. The benefits of the protocol are especially significant in dynamic bandwidth environments and high-speed networks. It is important to note that several protocols such as Vegas, FAST, and PCP also rely on packet delays (instead of packet losses) for detecting congestion; however, none of these protocols exploit inter-packet gaps to probe for a wide range of rates within a single RTT. We next present the basic mechanisms used in RAPID.

2 BIC [4] relies on a combination of additive-increase and a binary-search based method after the AB is discovered for the first time; here, the ratio proberate/prevrate depends on the past probing history.
3 These experiments were run on the ns-2 simulator, and a packet size of 1040 B was used; see Section 4.1 for details.
4 The rate-based PCP protocol [6] also adopts the idea of limiting the probe volume. However, it probes for only a single larger rate per RTT and has an AB-search speed similar in magnitude to that of existing protocols.
In all fairness, the protocol primarily focuses on minimizing response times in under-utilized networks; it has not even been evaluated for large transfers in high-speed networks.
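To see why guideline 1 keeps any overload short-lived, note that an entire p-stream occupies the path for only l = N*P/r_avg seconds (a relation that reappears in Section 3.2.2). A quick computation, ours rather than the paper's (the 1040 B packet size is the one reported for the ns-2 experiments; the steady-state N = 30 is described in Section 3.1.5), shows how brief this is at gigabit speeds:

```python
# How long does a p-stream (and hence its highest-rate probing) last?
# l = N * P / r_avg, with the steady-state default of N = 30 packets
# and an assumed packet size of 1040 B.
N = 30                      # packets per p-stream (steady-state default)
P = 1040 * 8                # packet size in bits (assumption)
for r_avg in (100e6, 1e9):  # 100 Mbps and 1 Gbps average rates
    l = N * P / r_avg
    print(f"r_avg = {r_avg/1e6:6.0f} Mbps -> p-stream duration = {l*1e6:5.0f} us")
# Even the few packets sent faster than avail-bw occupy the bottleneck
# for only a fraction of these ~0.25-2.5 ms windows.
```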

3 RAPID Congestion Control

Unlike many congestion-control protocols, RAPID employs a multi-rate based transmission policy and relies on relative packet delays for estimating avail-bw. While the RAPID design is motivated by the primary goal of shrinking the timescales at which congestion-control operates, several equally-important goals are given due consideration in the design process [9]. Most significantly, a RAPID network strives to (i) maintain a low buffer occupancy at congested router links, (ii) achieve a fair sending rate when several RAPID transfers co-exist, and (iii) remain friendly to regular low-speed TCP transfers. Below, we describe the basic mechanisms used for achieving each of these.

3.1 Acquiring AB Within a Few RTTs

3.1.1 Rate-based packet transmission at the sender

When there is sufficient data to send, a RAPID sender continuously transmits data in logical groups of N packets each, referred to as a multi-rate probe stream (p-stream). The i-th packet in a p-stream is sent at a rate of r_i; this also implies that the sending times of packets i-1 and i differ by P/r_i, where P is the size of packet i (see Figure 1). The sender explicitly controls/manages the average sending rate of a p-stream, referred to as r_avg, which is given by:

    r_avg = N / (1/r_1 + 1/r_2 + ... + 1/r_N)    (1)

Further, for all i > 1, r_i > r_{i-1}.

3.1.2 AB-estimation analysis at the receiver

We observe the inter-packet gaps in a p-stream at the receiver, and use these for estimating avail-bw in the same manner as the PathChirp bandwidth estimation tool [10]. Recent evaluations have shown that PathChirp estimates avail-bw with good accuracy in multi-hop settings, while incurring the least overhead among existing tools [11]. Like PathChirp, when a RAPID receiver receives all packets of a p-stream, it computes the AB by looking for increasing trends in the inter-packet spacings relative to the intended ones. This analysis relies on the concept of self-induced congestion. Intuitively, if q_k is the queuing delay experienced at a bottleneck link by the k-th packet in a p-stream, then:

    q_k = 0,         if r_k ≤ AB    (2)
    q_k > q_{k-1},   otherwise      (3)

Thus, if i is the first packet in a p-stream such that r_i > AB, then each of the packets [i, ..., N] will queue up behind its previous packet at the bottleneck link (since r_i > r_{i-1}, for all i > 1); due to this self-congestion, each of these packets will experience a larger one-way delay (and a larger increase in the pre-set inter-packet gap) than its predecessor. Thus, the smallest rate r_i at which the receiver observes an increasing trend in the inter-packet gaps can be used to compute an estimate of the current avail-bw as: ABest = r_i. The actual analysis uses several heuristics to account for bursty cross-traffic; we refer the reader to [10] for details and the precise formulation. The receiver encodes the value of ABest obtained from the most recent p-stream in the acknowledgments sent to the sender.

3.1.3 Transmitting in a non-overloading, responsive manner

When the sender receives an ABest value, it updates the r_avg of the next p-stream as: r_avg = ABest. Thus, the transfer acquires an average sending rate equal to the estimated avail-bw within an RTT. The sender then selects an appropriate set of rates, r_1, ..., r_N, for the next p-stream such that the average of these, as computed in Eqn (1), is equal to r_avg. The above mechanism helps simultaneously achieve two desirable properties.
First, by setting r_avg equal to the estimated avail-bw, RAPID helps to ensure that the average load on the bottleneck link does not exceed its capacity; this is crucial for maintaining small and transient queues at the bottleneck links.

5 Unlike window-based protocols, the sender does not stall, waiting for acknowledgements. Also, if data for only k < N packets is available, a p-stream of k packets is sent instead. Note that if k ≤ 2, the sender sends these packets at a uniform rate of r_avg.
6 The gap between the first packet of a p-stream and the last packet of the previous p-stream is set to P/r_avg. This can also be stated as: r_0 = r_avg. Typically, r_0 > r_1.
7 If no increasing trend is detected in a p-stream, r_N is taken as the AB estimate. Also, if the increasing trend starts at the first or second packet (i ≤ 2), r_2 is returned as the AB estimate.
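The following is a much-simplified sketch of the receiver-side logic of Section 3.1.2 (ours; the real analysis follows pathchirp [10] and adds heuristics for bursty cross-traffic, which we omit here). It looks for the first packet from which the one-way delays increase monotonically through the end of the p-stream:

```python
# Simplified AB estimation from one p-stream (heuristics from [10] omitted).
# rates[i] and delays[i] describe packet i (i = 0..N-1): its sending rate
# and its measured one-way delay.
def ab_estimate(rates, delays):
    N = len(rates)
    for i in range(1, N):
        # self-induced congestion: from packet i onward, every packet is
        # delayed more than its predecessor, so r_i already exceeds avail-bw
        if all(delays[k] > delays[k - 1] for k in range(i, N)):
            return rates[max(i, 1)]   # trend at the very start: return r_2 (footnote 7)
    return rates[-1]                  # no increasing trend: ABest = r_N

# toy usage: rates ramp up; delays start growing at the 4th packet
print(ab_estimate([4, 5, 6, 7, 8], [1.0, 1.0, 1.0, 1.2, 1.5]))  # -> 7
```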

# of RTTs | r_avg | Rates that can be probed for
RTT 0     | 1x    | 0.45x - 3.22x
RTT 1     | 3.22x | 1.45x - 10.4x
RTT 2     | 10.4x | 4.7x - 33.4x
RTT 3     | 33.4x | 15x - 108x
RTT 4     | 108x  | 48x - 346x
RTT 5     | 346x  | 156x - 1115x

Table 3: AB-acquiring speed by a RAPID sender

Second, by selecting a set of rates which includes values larger (r_N) as well as smaller (r_1) than r_avg, a p-stream is able to simultaneously probe for both an increase and a decrease in the current end-to-end avail-bw; this greatly helps the RAPID sender in quickly detecting and adapting to changes in AB.

3.1.4 Setting [r_1, ..., r_N] (speeding up the search process)

Each p-stream probes for the range of sending rates given by [r_1, ..., r_N]. Note that for a given r_avg and N, there are infinitely many choices for this set of rates that satisfy Eqn (1). However, the larger the ratio r_N/r_1, the faster the AB-search process; this is because a single p-stream then probes for a wider range of rates. So, for instance, while these rates could be additively related, a faster search is obtained by using a multiplicative relation, as in:

    r_i = m^{i-1} * r_1,   1 < i ≤ N    (4)

RAPID adopts the above relation. Given r_avg, Eqns (1) and (4) can be used to compute r_1 as:

    r_1 = [(m^{N-1} - 1) / ((N-1)(m-1)m^{N-2})] * r_avg    (5)

r_2, ..., r_N can then be computed from Eqns (4) and (5).

3.1.5 Selecting m and N

For a given N, two conflicting considerations guide the choice of m. A larger value of m would result in a larger ratio r_N/r_1, and would improve the speed as well as the adaptivity of the AB-search process. A smaller m, on the other hand, would result in a smaller ratio r_{i+1}/r_i; this yields a finer rate granularity with which the avail-bw is probed. A coarse-granularity estimate of avail-bw would prevent a collection of RAPID senders from efficiently utilizing the bottleneck link. The selection of N is also faced with two opposing considerations. For a given m, a larger value of N improves the AB-search range (r_N/r_1). However, a larger p-stream is also more intrusive to cross-traffic (more packets are sent at a rate larger than r_avg) at bottleneck links. RAPID adopts the default values of N = 30 and m = 1.07 (7% granularity in rates probed for). These values have been selected after controlled experimentation under diverse topology and traffic settings; details of this sensitivity analysis can be found in Appendix B. The above choices of m and N yield r_1 ≈ 0.45 r_avg and r_N ≈ 3.22 r_avg. This enables a RAPID sender to probe for freshly-available spare bandwidth spanning several orders of magnitude within a few RTTs (see Table 3).
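As a concrete check of Eqns (4) and (5), the short sketch below (ours; it simply transcribes the two equations with the paper's default parameters) constructs a p-stream's rate set and recovers the 0.45x and 3.22x multipliers quoted above:

```python
# Rate set of a p-stream from Eqns (4) and (5), with defaults N = 30, m = 1.07.
def pstream_rates(r_avg, N=30, m=1.07):
    # Eqn (5): pick r_1 so that the stream's average rate equals r_avg
    r1 = (m ** (N - 1) - 1) / ((N - 1) * (m - 1) * m ** (N - 2)) * r_avg
    # Eqn (4): each subsequent packet is sent m times faster than the last
    return [r1 * m ** (i - 1) for i in range(1, N + 1)]

rates = pstream_rates(r_avg=1.0)
print(round(rates[0], 2), round(rates[-1], 2))   # 0.45 3.22, as in Section 3.1.5
```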
3.1.6 Achieving a Quick-yet-Slow Start

RAPID faces a similar dilemma as all congestion-control protocols: how to obtain the initial ABest (or the initial r_avg) for a new transfer? The main concern here is that the initial choice of r_avg may be too high for a given network path. We address this issue by being only as aggressive as the TCP Slow-Start mechanism (which is also adopted by most other protocols). Specifically, in slow-start, a RAPID sender sends only as many packets in an RTT as would a TCP transfer; fortunately, the ability of a p-stream to probe for multiple rates within an RTT makes the RAPID slow-start terminate much earlier than in other protocols. In the slow-start phase, a RAPID sender sends only a single p-stream over an RTT. Further, we initialize N = 2, and double the value of N over successive RTTs, up to a maximum value of 16. For constructing the p-streams, we use a multiplicative factor of m = 2 throughout slow-start.

8 Note that this process is no more aggressive than the slow-start adopted by most protocols, which multiplicatively increases the number of packets sent over an RTT in exactly the same manner (and starts with an initial value of 1 or 2 segments). Also note that the slow-start threshold is usually set to a much higher value than 16 segments.
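The schedule this produces can be sketched in a few lines (ours; the 1 Kbps starting rate matches the initialization described below, and each stream's r_1 is set to the previous ABest, here taken to be r_N). It reproduces the ranges shown in Table 4 below:

```python
# Slow-start probing schedule: m = 2, N doubling per RTT up to 16, and
# r_1 of each p-stream set to the ABest (= r_N) of the previous one.
r1, N = 1e3, 2                                   # start at 1 Kbps, 2 packets
for rtt in range(1, 5):
    rates = [r1 * 2 ** i for i in range(N)]      # multiplicative factor m = 2
    print(f"RTT {rtt}: N = {N:2d}, probes {rates[0]:,.0f} - {rates[-1]:,.0f} bps")
    r1, N = rates[-1], min(2 * N, 16)            # adopt ABest; double N
```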

# of RTTs | N  | Range of rates probed for
RTT 1     | 2  | 1 - 2 Kbps
RTT 2     | 4  | 2 - 16 Kbps
RTT 3     | 8  | 16 Kbps - 2 Mbps
RTT 4     | 16 | 2 Mbps - 67 Gbps

Table 4: Speed of RAPID Slow-Start

Thus, the granularity with which a RAPID sender probes for avail-bw during slow-start is coarse, but it is the same as in all existing protocols (which double their window size every RTT) [1, 12]. Unlike most protocols, however, the AB-search speed is quite high: since it relies on an AB-estimation analysis, a RAPID transfer in slow-start discovers avail-bw much earlier than with any other protocol. For a new RAPID transfer, we initialize r_avg to a small value (1 Kbps); this implies that, in the first RTT, the transfer sends N = 2 packets starting at a rate of r_1 = 1 Kbps. If the receiver returns ABest < r_N, we exit slow-start; if not (which implies ABest = r_N), we double the value of N, set r_1 = ABest, and send the next p-stream. This process is repeated till the receiver returns an ABest value less than r_N. Following this, we switch to the steady-state RAPID mode, in which m = 1.07 and N = 30. Table 4 illustrates the number of RTTs that a single RAPID transfer would take to probe for different amounts of avail-bw in slow-start: a RAPID sender can probe for rates of tens of Gbps in just 4 RTTs (and for more than 1 Tbps in one more RTT), while being no more aggressive than TCP slow-start! In Section 4, we show that existing protocols take much longer.

3.2 Achieving Fairness Among Co-existing RAPID Transfers

The RAPID design should justifiably raise fairness concerns when multiple RAPID transfers share a bottleneck link: how do the p-streams of different transfers interact, and do the transfers obtain a fair share of the avail-bw? In this section, we address two sources of unfairness that have been identified in the literature [13, 14].

3.2.1 RTT-unfairness

Most window-based congestion control protocols have been shown, both experimentally and analytically, to suffer from RTT-unfairness: transfers with a long RTT get a lower throughput than short-RTT transfers [13, 14, 5]. This happens because window-based protocols update their sending rates once per RTT; in heterogeneous-RTT environments, this RTT-dependence results in differences in the rate-updating frequency as well as in the rate increments, and results in a bias against long-RTT transfers [14]. Fortunately, the rate-based design of RAPID is not influenced by the value of the RTT: a RAPID sender continuously sends p-streams, for both long- and short-RTT transfers. The rate-updating frequency (once per p-stream) as well as the rate increment (determined by ABest) are independent of the RTT; consequently, and by design, RAPID truly does not suffer from RTT-unfairness. Our experiments in Section 4.2 confirm this.

3.2.2 Bias due to rate-proportional feedback frequency

As described so far, however, the RAPID design is likely to be unfair for a different reason. The rate at which a RAPID sender receives ABest values (once per p-stream) is directly proportional to the current value of r_avg, and is given by r_avg/(N*P), where P is the packet size. Consequently, a RAPID transfer that has achieved a larger sending rate will receive more frequent notifications of any spare capacity that becomes available than a co-existing transfer with a lower r_avg; thus, the former is likely to attain an even higher sending rate than the latter. In fact, such differences in the rate at which feedback arrives have been shown to result in unfair throughput allocations even in window-based protocols [14].
To ensure that co-existing RAPID transfers achieve a fair share of the avail-bw, we explicitly equalize the rate at which RAPID senders converge to the ABest values they receive. For this, we select a parameter τ that represents the common (large) time interval over which any RAPID sender, independent of its sending rate, converges to an increase in avail-bw. Specifically, when a sender receives an ABest that is larger than its current r_avg, it computes a new value as:

    r_avg = r_avg + (l/τ) * (ABest - r_avg)    (6)

where l is the duration of the most-recent p-stream, which is given by: l = N*P/r_avg.

9 Note that in slow-start, RAPID sets r_1 (and not r_avg) equal to ABest.
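A minimal sketch of this update rule follows (ours; the packet size is an assumption, and the immediate adoption of decreases reflects Section 3.1.3 and footnote 10 rather than Eqn (6) itself):

```python
# Eqn (6): converge toward a larger ABest over a common wall-clock interval tau.
# A low-rate sender's p-streams last longer (l = N*P/r_avg), so it takes
# proportionally larger steps, and all senders converge in roughly tau seconds.
def update_r_avg(r_avg, ab_est, tau, N=30, P=1040 * 8):
    if ab_est <= r_avg:
        return ab_est                  # decreases are adopted immediately
    l = N * P / r_avg                  # duration of the most recent p-stream
    return r_avg + (l / tau) * (ab_est - r_avg)   # Eqn (6)
```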

Figure 2: Experimental Topology

The above filter is used to repeatedly update r_avg for all subsequent p-streams, till r_avg converges to ABest. The effect of this filtering mechanism is that it takes a RAPID sender roughly τ time units (or τ/l p-streams) to converge to an updated value of ABest; for a transfer with a small value of r_avg, l/τ will be close to 1 and the sender will immediately adopt the ABest as its new r_avg. Thus, even though the feedback frequency depends on the sending rate of a RAPID transfer, the rate-increment frequency does not. Our experiments in Section 4.2 show that this mechanism helps achieve excellent fairness among RAPID transfers. τ is set by default to 200 ms.

3.3 Remaining TCP-Friendly

RAPID, like FAST, is quite non-intrusive to regular low-speed TCP NewReno transfers. The prime reason for this is that these protocols rely on increased packet delays for detecting network congestion, whereas TCP reduces its sending rate only on witnessing packet losses. When a router carrying both TCP and RAPID transfers gets congested, the RAPID transfers respond to the congestion (and reduce their sending rates) much earlier than the TCP transfers would. This ensures that the performance of the low-speed TCP transfers is not significantly impacted by the presence of high-speed RAPID transfers. The downside is that in the presence of long-lived TCP transfers, RAPID transfers obtain lower throughput than the former. However, this problem plagues any network that simultaneously runs fundamentally different congestion-control protocols [5]; a simple solution is to provision routers with separate queues for traffic from different protocols. We next experimentally evaluate how well the above mechanisms achieve the stated goals for RAPID.

4 Experimental Evaluation

There are at least three types of concerns that the RAPID design is likely to raise in the minds of a reader: does a short p-stream really help RAPID in accurately estimating avail-bw, especially in high-speed networks? RAPID seems aggressive in the rates it probes for: is it really friendly to router queues and competing low-speed TCP traffic? And how do the AB-estimation processes of co-existing RAPID transfers interact: don't they interfere with each other (and do they get a fair share of the avail-bw)? In this section, we address these concerns by experimentally evaluating RAPID. We have also experimentally studied: (i) the sensitivity of RAPID to parameter settings, (ii) fairness among multiple high-speed protocols, and (iii) RAPID performance in multi-hop settings; the details of these are given in the appendices. We use the ns-2 simulator [15] for our evaluations. We have implemented RAPID in ns-2.33 and have re-used the NewReno code base for dealing with loss detection and recovery. We also use publicly-available ns-2 implementations of three other protocols for comparison, namely HighSpeed TCP, CUBIC, and FAST TCP [16]; the first two are loss-based protocols with fairly different window-growth functions, while FAST is a high-speed version of the delay-based Vegas protocol.

10 For stability reasons, we do not use the above filter when the avail-bw decreases; details are provided in Appendix B.
11 We rely on the default parameter settings for all protocols, other than for FAST. FAST exhibited severe oscillations for the experiments of Section 4.1
when run with the default ns-2 parameters; for the evaluations presented here, we have instead relied on the parameters used in the sample test scripts supplied along with the implementation [16] (experiments with other parameter settings are included in Appendix A).

For several of our simulations, we rely on a simple dumbbell topology in which multiple sources are aggregated at a single bottleneck link (R1-R2 in Fig 2); this bottleneck link is the only link shared by co-existing transfers. All links other than the bottleneck link have a transmission capacity of 1 Gbps; the bottleneck link capacity is set by default to 1 Gbps, but is reduced for some experiments. All links are provisioned with a delay-bandwidth product (DBP) worth of buffers, where the delay is the average end-to-end propagation delay for transfers (specified for each experiment), and the bandwidth is that of the bottleneck link (also specified). We set the maximum size of each network-layer packet to 1040 B. The main performance statistics we are interested in are the throughput obtained by transfers instantiated between the sender and receiver nodes, as well as the queue build-up on the bottleneck link. We sample each of these statistics periodically, at regular intervals of 5 ms each. In what follows, we summarize our experiments and observations.

4.1 Speed of Acquiring Spare Bandwidth

There are at least two types of scenarios in a high-speed network where the ability of a congestion-control protocol to acquire spare bandwidth quickly is crucial. The first is during slow-start, when a transfer begins without any advance knowledge of the avail-bw, and the second is when the avail-bw suddenly changes by a large amount (perhaps due to the arrival of additional traffic). In our first set of experiments, we compare the performance of RAPID with that of other protocols by simulating examples of both of the above scenarios. For all of the experiments in this section, we simulate a 1 Gbps bottleneck link (R1-R2 in Fig 2) and set the end-to-end propagation delay to 100 ms.

4.1.1 Slow-start in High-speed Networks

In the first set of experiments, we simulate a single transfer from a sender node to a receiver node (see Fig 2). Each experiment uses a different underlying congestion-control protocol. Fig 3(a) plots, as a function of time, the throughput obtained by the transfer with different protocols. Fig 3(b) plots the queue buildup observed over time at the bottleneck router. We find that:

- HighSpeed TCP takes the longest time (25 RTTs) to acquire a sending rate of 1 Gbps. And once it acquires that sending rate, being a loss-based protocol, it starts filling up the bottleneck queues.

- CUBIC is faster in initially acquiring 1 Gbps, but it also fills up the bottleneck buffers just as quickly, later leading to packet losses and a drop in throughput; it eventually stabilizes at 1 Gbps. Being a loss-based protocol, it also maintains nearly full buffers at the bottleneck link.

- FAST TCP is less aggressive in filling up router queues and is able to acquire the avail-bw faster, within 5 RTTs. However, even a single transfer can regularly fill up thousands of packet buffers in the bottleneck router.
- RAPID is significantly faster than all the other protocols, acquiring the 1 Gbps bandwidth in just 4 RTTs; this is exactly as predicted by the design in Section 3.1.6. Furthermore, the single RAPID transfer induces very low queuing (less than 2 packets) on the bottleneck router.

4.1.2 High-speed Networks with Dynamic AB

In the second set of experiments, we simulate a network for 50 seconds, and introduce 4 constant-bit-rate (cbr) traffic streams on the bottleneck link according to a fixed schedule: cbr-1 exists from 5-40 seconds, cbr-2 exists from 10-15 seconds, and cbr-3 and cbr-4 arrive and depart over shorter intervals later in the experiment (cbr-4 for only a brief duration). Each cbr stream has a bit-rate of 200 Mbps. The spare bandwidth left on the network is plotted in the top-row plots of Figs 4(a)-(d) using a faint dotted line. We use this setup to run a set of experiments in which we introduce a single long-lived transfer at time 1 second and, respectively, run it over CUBIC, HighSpeed, FAST, and RAPID. The throughput obtained by the transfers and the router queue sizes are plotted, respectively, in the top and bottom rows of Fig 4. We find that:

1. HighSpeed and CUBIC, which are both loss-based protocols, experience heavy packet losses when the avail-bw reduces suddenly (and even when it does not change). This is because a loss-based protocol induces persistent queuing in the bottleneck buffers; any sudden decrease in AB overflows the buffers, causing multiple packet losses.

12 An experiment with multiple bottlenecks can be found in Appendix C.
13 We also ran this experiment with the NewReno protocol, which took around 74 RTTs to acquire the 1 Gbps avail-bw.

Figure 3: Performance of Slow Start on a 1 Gbps Network ((a) Throughput vs. Time; (b) Bottleneck Queue vs. Time; one curve each for RAPID, HighSpeed, FAST, and CUBIC)

Figure 4: Performance in a Dynamic Bandwidth 1 Gbps Network ((a) HighSpeed; (b) CUBIC; (c) FAST; (d) RAPID; top row: throughput, bottom row: bottleneck queue size)

Figure 5: Intra-protocol Fairness ((a) HighSpeed; (b) FAST; (c) RAPID; throughput of transfers 1-3 vs. time)

When this happens, the throughput of the transfer drops significantly, and each protocol then takes tens-to-hundreds of RTTs to regain throughput. HighSpeed and CUBIC are also slow in acquiring spare bandwidth when the avail-bw increases suddenly; this is simply due to the slow AB-search speed of these protocols. Note that if these protocols already have a large number of packets in the router buffers, they may be able to utilize spare bandwidth immediately; loss-based protocols are known to trade off low latency for high throughput performance.

2. Delay-based FAST maintains smaller router queues in steady-state. However, when the avail-bw decreases suddenly, FAST is unable to react quickly and causes buffer overflow. After the first time this happens, unfortunately, the FAST transfer keeps overflowing the router buffers and is unable to converge to a stable sending rate.

3. The RAPID transfer maintains very low queuing and is able to avoid packet losses, even when the avail-bw decreases suddenly. More importantly, though, the protocol is able to very quickly acquire additional spare bandwidth that becomes available, and it does so without maintaining large queues in routers!

4.2 Intra-protocol Fairness

We next evaluate fairness when multiple RAPID transfers share the bottleneck link. Our objective is to specifically evaluate fairness in heterogeneous-RTT environments.

4.2.1 Dynamics Among Co-existing Transfers

Our first set of experiments is inspired by a similar experiment presented in [5], which evaluates the ability of a new transfer to acquire a fair share of bandwidth from a pre-existing transfer; in that paper, FAST was shown to be better than other protocols in achieving fairness (when compared against HighSpeed as well as BIC). A bottleneck capacity of 800 Mbps was used in that experiment; we use the same, by setting the transmission capacity of R1-R2 in Fig 2 to 800 Mbps. For the experiment, we simulate three transfers: the first lasts from 0-90s and has an RTT of 200 ms, the second lasts from 18-72s and has an RTT of 100 ms, and the third lasts from 36-54s and has an RTT of 150 ms. Fig 5 plots the throughput obtained by the three transfers with different underlying protocols. We find that both HighSpeed and CUBIC (the latter not plotted here, to improve the visualization of Fig 5) can be quite unfair in the throughput allocated among the three transfers: the second transfer (which has the smallest RTT) dominates over the other two transfers. This has also been observed in [5]. Surprisingly, FAST also exhibits the same unfair behavior: the connection with the smallest RTT obtains a significantly high fraction of the avail-bw. This observation differs from the results presented in [5], which illustrate better fairness properties; unfortunately, [5] does not clearly specify the parameter settings used for the experiment. At the very least, our experiment illustrates that the performance of FAST is fairly sensitive to its parameter settings. Fig 5(c) shows that RAPID allocates similar throughput to all transfers, irrespective of their RTTs; RAPID thus does not suffer from RTT-unfairness. Furthermore, the second transfer is able to acquire a fair share even though the pre-existing first transfer had already acquired a high throughput; this illustrates that the filter mechanism added in Section 3.2 successfully eliminates from RAPID any bias due to rate-proportional feedback frequency.
Fig 5(c) also illustrates that the convergence of the transfers to a fair share occurs at a fairly small timescale; we study the fairness timescale in more detail below.

4.2.2 Fairness Among a Large Number of Transfers

We next evaluate how well the RAPID intra-protocol fairness scales when a larger number of connections are aggregated. For this, we conduct experiments in which a 1 Gbps bottleneck link is shared by n long-lived RAPID transfers for 60 seconds; we vary n from 2 to 100 across experiments. The RTTs of the n connections are selected uniformly at random from two ranges: 60-80 ms and 100-110 ms. The start-times of the connections are selected uniformly at random between 0-20s. Starting at 20 seconds, the time-series of throughput obtained by each transfer are logged at several different timescales, ranging from 5 ms to 128 s. For each timescale, and for each logging instance of the associated time-series, we compute the Jain fairness index [17] of the throughput achieved by the n transfers.

14 Our experiments in Appendix A show that providing the bottleneck router with thrice the number of buffers does help FAST in completely avoiding losses and utilizing the avail-bw; however, such large buffers are also likely to improve the performance of HighSpeed and CUBIC.
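For reference, Jain's index [17] over the per-transfer throughputs x_1, ..., x_n in a measurement window is (Σx)² / (n·Σx²); it equals 1 for a perfectly equal allocation and 1/n when a single transfer gets everything. A small sketch (ours):

```python
# Jain's fairness index [17] for one logged throughput sample of n transfers.
def jain_index(throughputs):
    n = len(throughputs)
    s = sum(throughputs)
    s2 = sum(x * x for x in throughputs)
    return s * s / (n * s2) if s2 > 0 else 1.0

print(jain_index([100, 100, 100, 100]))   # 1.0: perfectly fair
print(jain_index([400, 0, 0, 0]))         # 0.25 = 1/n: maximally unfair
```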

Figure 6: Intra-protocol Fairness (Large Number of Transfers)

Fig 6 plots the median, 10th, and 90th percentiles (the latter two plotted as error bars) of the distribution of the fairness index, as a function of the timescale at which the throughput data is observed (for n = 2, 24, 50, 100). We find that RAPID ensures excellent fairness (median fairness index close to 1) among a large number of co-existing transfers with heterogeneous RTTs; this is true even at timescales as small as 5 ms. Further, even the 10th-percentile values of the indices are fairly high, indicating that even transient unfair episodes, in which some connections obtain much less throughput than others, are quite rare.

4.3 Co-existence with Low-speed TCP Traffic

Finally, we evaluate the impact of high-throughput RAPID transfers on a realistic mix of regular TCP transfers that co-exist on a bottleneck link. For this, we use the publicly-available Tmix traffic generator code for ns-2, which generates an empirically-derived TCP traffic mix that is representative of TCP traffic aggregates observed on production Internet links [18]. Tmix simulates application-level socket-writing behavior and runs over the TCP protocol; it, consequently, generates responsive TCP traffic. Our objectives are to study: (i) how a high-throughput transfer might impact the performance of connections in such a traffic mix, and (ii) how effectively a high-throughput transfer can utilize the avail-bw with such dynamic (and realistic) cross-traffic. We use Tmix to generate TCP traffic at an average offered load of 700 Mbps for 30 minutes and drive it through a bottleneck link (R1-R2 in Fig 2) of 1000 Mbps; the bottleneck buffers are set to 750 packets (based on the RTT of the L transfer described below). For the set of TCP connections simulated in our Tmix experiment, Fig 7(a) plots the distribution of per-connection RTTs; we find that these are fairly diverse (ranging from 1-ms LANs to long-distance transfers). Fig 7(b) plots the (complementary) distribution of the number of bytes transferred in each connection; the traffic mix is typical of that found on the Internet (a majority of small mice connections, but a few heavy elephants). All statistics presented below are collected from roughly the middle 20 minutes of the experiments (to exclude the ramp-up and ramp-down behavior of the traffic generator). We first simulate only the Tmix traffic and observe the aggregate throughput and queue sizes at the bottleneck router at a timescale of 1 s; these are plotted, respectively, in Figs 8(a) and 8(b). We find that although the average offered load of the Tmix aggregate is 700 Mbps, the traffic is fairly bursty; the short-term load can vary from 500-1000 Mbps. This causes the router queues to vary rapidly (and occasionally grow to large values). Thus, the Tmix aggregate represents bursty cross-traffic that causes the avail-bw on the bottleneck router link to vary dynamically. We then re-run the Tmix experiment and add a single bulk transfer (henceforth referred to as L) that shares the bottleneck

Figure 7: Statistics of Tmix Transfers ((a) Distribution of Transfer RTTs; (b) Complementary Cumulative Distribution of Transfer Sizes)

link and has an RTT of 6 ms; we repeat this experiment three times, with L running over HighSpeed, FAST, and RAPID, respectively. In each of these experiments, we also log the throughput observed by L, the queue sizes at the bottleneck router, as well as the aggregate throughput of the bottleneck router. Figs 8(c), 8(e), and 8(g) plot the throughput of L observed with HighSpeed, FAST, and RAPID, respectively. Figs 8(d), 8(f), and 8(h) plot the router queue sizes, respectively. We find that:

1. Fig 8(d) shows that the loss-based HighSpeed transfer fills up (and overflows) the router queues at a frequent rate; consequently, the responsive TCP connections in the Tmix cross-traffic suffer packet losses and reduce their sending rates. The HighSpeed L transfer is, thus, able to obtain a throughput much higher than the spare bandwidth available in the Tmix-only experiment.

2. The L transfer using FAST TCP is barely able to obtain any throughput at all throughout the experiment. This suggests that FAST is unable to effectively utilize the dynamically-varying avail-bw on the bottleneck link; indeed, the utilization of the bottleneck link in this experiment is similar to that observed in the Tmix-only experiment. Fig 8(f) shows that the presence of the FAST-based L has a negligible impact on the router queues.

3. The RAPID-based L transfer is able to utilize the spare bandwidth fairly effectively; the fast AB-search speed of RAPID is crucial in achieving this behavior. Fig 8(h) shows that RAPID does so while increasing the router queue buildup by only a small amount. The bottleneck link is nearly 100% utilized throughout this experiment.

5 Concluding Remarks

In this paper, we propose a novel protocol, RAPID, that reduces the timescales at which congestion-control operates by orders of magnitude; this enables the protocol to efficiently utilize spare bandwidth in high-speed and dynamic-bandwidth environments. RAPID does so while: (i) providing excellent intra-protocol fairness in heterogeneous-RTT environments, and (ii) ensuring friendly co-existence with regular TCP transfers and router queues. Perhaps one of the key challenges to deploying RAPID in multi-gigabit networks is related to the high-precision packet time-stamping and packet-spacing (of the order of a few microseconds) that it would need to rely on. Fortunately, since RAPID only relies on detecting increasing trends in the inter-packet spacing, the need for accuracy is more relaxed. Consequently, we believe that existing high-end PC platforms should be able to support RAPID for up to 1 Gbps speeds. We are also exploring the use of FPGA-based network cards to enable a RAPID implementation to scale up to multi-gigabit capabilities. In addition, we are currently designing mechanisms to help auto-tune RAPID as it scales up to multi-gigabit and multi-hop networks.

References

[1] M. Allman, V. Paxson, and W. Stevens, TCP Congestion Control, RFC 2581 (Proposed Standard), Apr. 1999; updated by RFC 3390.

Figure 8: Statistics of Tmix Transfers ((a) Throughput (Tmix only); (b) Queue Sizes (Tmix only); (c) Throughput (HighSpeed transfer); (d) Queue Sizes (HighSpeed+Tmix); (e) Throughput (FAST transfer); (f) Queue Sizes (FAST+Tmix); (g) Throughput (RAPID transfer); (h) Queue Sizes (RAPID+Tmix))

[2] S. Floyd, HighSpeed TCP for Large Congestion Windows, RFC 3649 (Experimental), Dec. 2003.
[3] I. Rhee, L. Xu, and S. Ha, CUBIC for fast long-distance networks, Internet Draft, August 2007.
[4] L. Xu, K. Harfoush, and I. Rhee, Binary increase congestion control for fast, long distance networks, in Proceedings of IEEE INFOCOM, March 2004.
[5] D. Wei, C. Jin, S. Low, and S. Hegde, FAST TCP: Motivation, architecture, algorithms, performance, IEEE/ACM Transactions on Networking, vol. 14, no. 6, pp. 1246-1259, 2006.
[6] T. Anderson, A. Collins, A. Krishnamurthy, and J. Zahorjan, PCP: Efficient endpoint congestion control, in Proceedings of the 3rd Symposium on Networked Systems Design and Implementation (NSDI), May 2006.
[7] M. Fukuhara, F. Hirose, T. Hatano, H. Shigeno, and K. Okada, SRF TCP: A TCP-friendly and fair congestion control method for high-speed networks, in Proceedings of the International Conference on Principles of Distributed Systems (OPODIS), ser. Lecture Notes in Computer Science. Springer, 2005.
[8] T. Kelly, Scalable TCP: Improving performance in highspeed wide area networks, in First International Workshop on Protocols for Fast Long-distance Networks, February 2003.
[9] S. Floyd and M. Allman, Specifying New Congestion Control Algorithms, RFC 5033, 2007.
[10] V. Ribeiro, R. Riedi, R. Baraniuk, J. Navratil, and L. Cottrell, pathChirp: Efficient available bandwidth estimation for network paths, in Passive and Active Measurement Workshop, April 2003.
[11] A. Shriram and J. Kaur, Empirical evaluation of techniques for measuring available bandwidth, in Proceedings of IEEE INFOCOM, May 2007.
[12] S. Floyd, Limited Slow-Start for TCP with Large Congestion Windows, RFC 3742 (Experimental), Mar. 2004.
[13] S. Gorinsky and H. Vin, Additive increase appears inferior, Dept. of Computer Sciences, University of Texas at Austin, Tech. Rep., 2000.
[14] M. Vojnovic, J. Le Boudec, and C. Boutremans, Global fairness of additive-increase and multiplicative-decrease with heterogeneous round-trip times, in Proceedings of IEEE INFOCOM, March 2000.
[15] The Network Simulator ns-2 (http://www.isi.edu/nsnam/ns/).
[16] URL.
[17] R. Jain, D. W. Chiu, and W. R. Hawe, A quantitative measure of fairness and discrimination for resource allocation in shared computer systems, DEC Research Report TR-301, September 1984.
[18] M. Weigle, P. Adurthi, F. Hernandez-Campos, K. Jeffay, and F. Smith, Tmix: A tool for generating realistic application workloads in ns-2, ACM SIGCOMM Computer Communication Review, vol. 36, no. 3, 2006.

A Experiments with FAST with Different Parameters

In this section, we perform 3 sets of experiments that show the effect of the parameters α, β, and minimum threshold (mi) on FAST [5]. The sets of values used (shown in Table 5) are taken from the test scripts that came with the FAST source code for ns-2.

Table 5: Values of α, β, and mi

A.1 Slow-start

First, we examine the behavior of FAST during slow-start for different values of α, β, and the minimum threshold (mi). Fig 9 plots, as a function of time, the throughput obtained by a single long-lived transfer on a 1 Gbps link with FAST, for different values of the parameters. We can observe that for some parameter settings (Figs 9(b) and 9(c)), slow-start causes losses. The ramp-up time depends on the values of the parameters; it is best for the setting with mi = 0.5 (Fig 9(f)).

A.2 Dynamic Bandwidth

Next, we simulate a network for 50 seconds, and introduce 4 constant-bit-rate (cbr) traffic streams on the bottleneck link according to the same schedule as in Section 4.1.2. Each cbr stream has a bit-rate of 200 Mbps. Fig 10 plots, as a function of time, the throughput obtained by the FAST transfer. Figs 10(b), 10(c), and 10(d) show losses. The performance is best when α = 8, β = 8, and mi = 0.75 (Fig 10(e)).

A.3 Fairness Experiment

Lastly, we simulate three transfers: the first lasts from 0-90s and has an RTT of 200 ms, the second lasts from 18-72s and has an RTT of 100 ms, and the third lasts from 36-54s and has an RTT of 150 ms. Fig 11 plots the throughputs obtained by the three transfers with FAST, for different values of the parameters α, β, and minimum threshold (mi). In Fig 11(a), the third transfer does not get any throughput, and the first transfer dies down when the second transfer starts. FAST gives its best fairness when mi = 0.75 (Fig 11(b)). Fig 11(c) and Fig 11(f) show high unfairness towards the third transfer. Fig 11(d), where α = 2, β = 2, and mi = 0.75, and Fig 11(e), where α = 8, β = 8, and mi = 0.75, show a slight bias towards the second transfer initially; but the transfers converge towards their fair share when the third transfer starts. This shows that the fairness of FAST is highly dependent on the values of α, β, and mi. In general, we can see that the performance of FAST depends on the values of the parameters. Some key observations are:

- For some parameter settings, even slow-start causes losses.
- The parameter settings giving the best fairness (Fig 11(b)) give the worst performance with CBR cross-traffic (Fig 10(b)).
- The parameter settings that perform best with CBR cross-traffic (Fig 10(e)) do not do well with respect to fairness (Fig 11(e)).

A.4 Slow-start Simulations with FAST


The effect of reverse traffic on the performance of new TCP congestion control algorithms The effect of reverse traffic on the performance of new TCP congestion control algorithms Saverio Mascolo and Francesco Vacirca Dipartimento di Elettrotecnica ed Elettronica Politecnico di Bari Via Orabona

More information

Discrete-Approximation of Measured Round Trip Time Distributions: A Model for Network Emulation

Discrete-Approximation of Measured Round Trip Time Distributions: A Model for Network Emulation Discrete-Approximation of Measured Round Trip Time Distributions: A Model for Network Emulation Jay Aikat*, Shaddi Hasan +, Kevin Jeffay*, and F. Donelson Smith* *University of North Carolina at Chapel

More information

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks X. Yuan, R. Melhem and R. Gupta Department of Computer Science University of Pittsburgh Pittsburgh, PA 156 fxyuan,

More information

A COMPARATIVE STUDY OF TCP RENO AND TCP VEGAS IN A DIFFERENTIATED SERVICES NETWORK

A COMPARATIVE STUDY OF TCP RENO AND TCP VEGAS IN A DIFFERENTIATED SERVICES NETWORK A COMPARATIVE STUDY OF TCP RENO AND TCP VEGAS IN A DIFFERENTIATED SERVICES NETWORK Ruy de Oliveira Federal Technical School of Mato Grosso Brazil Gerência de Eletroeletrônica E-mail: ruy@lrc.feelt.ufu.br

More information

15-744: Computer Networking TCP

15-744: Computer Networking TCP 15-744: Computer Networking TCP Congestion Control Congestion Control Assigned Reading [Jacobson and Karels] Congestion Avoidance and Control [TFRC] Equation-Based Congestion Control for Unicast Applications

More information

TCP so far Computer Networking Outline. How Was TCP Able to Evolve

TCP so far Computer Networking Outline. How Was TCP Able to Evolve TCP so far 15-441 15-441 Computer Networking 15-641 Lecture 14: TCP Performance & Future Peter Steenkiste Fall 2016 www.cs.cmu.edu/~prs/15-441-f16 Reliable byte stream protocol Connection establishments

More information

CS 356: Computer Network Architectures Lecture 19: Congestion Avoidance Chap. 6.4 and related papers. Xiaowei Yang

CS 356: Computer Network Architectures Lecture 19: Congestion Avoidance Chap. 6.4 and related papers. Xiaowei Yang CS 356: Computer Network Architectures Lecture 19: Congestion Avoidance Chap. 6.4 and related papers Xiaowei Yang xwy@cs.duke.edu Overview More on TCP congestion control Theory Macroscopic behavior TCP

More information

Performance Evaluation of TCP Westwood. Summary

Performance Evaluation of TCP Westwood. Summary Summary This project looks at a fairly new Transmission Control Protocol flavour, TCP Westwood and aims to investigate how this flavour of TCP differs from other flavours of the protocol, especially TCP

More information

Report on Transport Protocols over Mismatched-rate Layer-1 Circuits with 802.3x Flow Control

Report on Transport Protocols over Mismatched-rate Layer-1 Circuits with 802.3x Flow Control Report on Transport Protocols over Mismatched-rate Layer-1 Circuits with 82.3x Flow Control Helali Bhuiyan, Mark McGinley, Tao Li, Malathi Veeraraghavan University of Virginia Email: {helali, mem5qf, taoli,

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Congestion control in TCP

Congestion control in TCP Congestion control in TCP If the transport entities on many machines send too many packets into the network too quickly, the network will become congested, with performance degraded as packets are delayed

More information

Markov Model Based Congestion Control for TCP

Markov Model Based Congestion Control for TCP Markov Model Based Congestion Control for TCP Shan Suthaharan University of North Carolina at Greensboro, Greensboro, NC 27402, USA ssuthaharan@uncg.edu Abstract The Random Early Detection (RED) scheme

More information

Performance of Competing High-Speed TCP Flows

Performance of Competing High-Speed TCP Flows Performance of Competing High-Speed TCP Flows Michele C. Weigle, Pankaj Sharma, and Jesse R. Freeman IV Department of Computer Science, Clemson University, Clemson, SC 29634 {mweigle, pankajs, jessef}@cs.clemson.edu

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 1 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

INTERNATIONAL JOURNAL OF RESEARCH IN COMPUTER APPLICATIONS AND ROBOTICS ISSN

INTERNATIONAL JOURNAL OF RESEARCH IN COMPUTER APPLICATIONS AND ROBOTICS ISSN INTERNATIONAL JOURNAL OF RESEARCH IN COMPUTER APPLICATIONS AND ROBOTICS ISSN 2320-7345 A SURVEY ON EXPLICIT FEEDBACK BASED CONGESTION CONTROL PROTOCOLS Nasim Ghasemi 1, Shahram Jamali 2 1 Department of

More information

CS 268: Lecture 7 (Beyond TCP Congestion Control)

CS 268: Lecture 7 (Beyond TCP Congestion Control) Outline CS 68: Lecture 7 (Beyond TCP Congestion Control) TCP-Friendly Rate Control (TFRC) explicit Control Protocol Ion Stoica Computer Science Division Department of Electrical Engineering and Computer

More information

Outline Computer Networking. TCP slow start. TCP modeling. TCP details AIMD. Congestion Avoidance. Lecture 18 TCP Performance Peter Steenkiste

Outline Computer Networking. TCP slow start. TCP modeling. TCP details AIMD. Congestion Avoidance. Lecture 18 TCP Performance Peter Steenkiste Outline 15-441 Computer Networking Lecture 18 TCP Performance Peter Steenkiste Fall 2010 www.cs.cmu.edu/~prs/15-441-f10 TCP congestion avoidance TCP slow start TCP modeling TCP details 2 AIMD Distributed,

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

On the Design of Load Factor based Congestion Control Protocols for Next-Generation Networks

On the Design of Load Factor based Congestion Control Protocols for Next-Generation Networks 1 On the Design of Load Factor based Congestion Control Protocols for Next-Generation Networks Ihsan Ayyub Qazi Taieb Znati Student Member, IEEE Senior Member, IEEE Abstract Load factor based congestion

More information

Interactions of Intelligent Route Control with TCP Congestion Control

Interactions of Intelligent Route Control with TCP Congestion Control Interactions of Intelligent Route Control with TCP Congestion Control Ruomei Gao 1, Dana Blair 2, Constantine Dovrolis 1, Monique Morrow 2, and Ellen Zegura 1 1 College of Computing, Georgia Institute

More information

CS519: Computer Networks. Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control

CS519: Computer Networks. Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control : Computer Networks Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control TCP performance We ve seen how TCP the protocol works Sequencing, receive window, connection setup and teardown And

More information

Performance of high-speed TCP Protocols over NS-2 TCP Linux

Performance of high-speed TCP Protocols over NS-2 TCP Linux Performance of high-speed TCP Protocols over NS-2 TCP Linux Masters Project Final Report Author: Sumanth Gelle Email: sgelle@cs.odu.edu Project Advisor: Dr. Michele Weigle Email: mweigle@cs.odu.edu Project

More information

Analyzing the Receiver Window Modification Scheme of TCP Queues

Analyzing the Receiver Window Modification Scheme of TCP Queues Analyzing the Receiver Window Modification Scheme of TCP Queues Visvasuresh Victor Govindaswamy University of Texas at Arlington Texas, USA victor@uta.edu Gergely Záruba University of Texas at Arlington

More information

CS644 Advanced Networks

CS644 Advanced Networks What we know so far CS644 Advanced Networks Lecture 6 Beyond TCP Congestion Control Andreas Terzis TCP Congestion control based on AIMD window adjustment [Jac88] Saved Internet from congestion collapse

More information

Title Problems of TCP in High Bandwidth-Delay Networks Syed Nusrat JJT University, Rajasthan, India Abstract:

Title Problems of TCP in High Bandwidth-Delay Networks Syed Nusrat JJT University, Rajasthan, India Abstract: Title Problems of TCP in High Bandwidth-Delay Networks Syed Nusrat JJT University, Rajasthan, India Abstract: The Transmission Control Protocol (TCP) [J88] is the most popular transport layer protocol

More information

CUBIC. Qian HE (Steve) CS 577 Prof. Bob Kinicki

CUBIC. Qian HE (Steve) CS 577 Prof. Bob Kinicki CUBIC Qian HE (Steve) CS 577 Prof. Bob Kinicki Agenda Brief Introduction of CUBIC Prehistory of CUBIC Standard TCP BIC CUBIC Conclusion 1 Brief Introduction CUBIC is a less aggressive and more systematic

More information

Cross-layer TCP Performance Analysis in IEEE Vehicular Environments

Cross-layer TCP Performance Analysis in IEEE Vehicular Environments 24 Telfor Journal, Vol. 6, No. 1, 214. Cross-layer TCP Performance Analysis in IEEE 82.11 Vehicular Environments Toni Janevski, Senior Member, IEEE, and Ivan Petrov 1 Abstract In this paper we provide

More information

Open Box Protocol (OBP)

Open Box Protocol (OBP) Open Box Protocol (OBP) Paulo Loureiro 1, Saverio Mascolo 2, Edmundo Monteiro 3 1 Polytechnic Institute of Leiria, Leiria, Portugal, loureiro.pjg@gmail.pt 2 Politecnico di Bari, Bari, Italy, saverio.mascolo@gmail.com

More information

COMPARISON OF HIGH SPEED CONGESTION CONTROL PROTOCOLS

COMPARISON OF HIGH SPEED CONGESTION CONTROL PROTOCOLS COMPARISON OF HIGH SPEED CONGESTION CONTROL PROTOCOLS Jawhar Ben Abed 1, Lâarif Sinda 2, Mohamed Ali Mani 3 and Rachid Mbarek 2 1 Polytech Sousse, 2 ISITCOM Hammam Sousse and 3 ISTLS Sousse, Tunisia ba.jawhar@gmail.com

More information

Congestion Control for High Bandwidth-delay Product Networks

Congestion Control for High Bandwidth-delay Product Networks Congestion Control for High Bandwidth-delay Product Networks Dina Katabi, Mark Handley, Charlie Rohrs Presented by Chi-Yao Hong Adapted from slides by Dina Katabi CS598pbg Sep. 10, 2009 Trends in the Future

More information

CSE 123A Computer Networks

CSE 123A Computer Networks CSE 123A Computer Networks Winter 2005 Lecture 14 Congestion Control Some images courtesy David Wetherall Animations by Nick McKeown and Guido Appenzeller The bad news and the good news The bad news: new

More information

On the Effectiveness of CoDel for Active Queue Management

On the Effectiveness of CoDel for Active Queue Management 1 13 Third International Conference on Advanced Computing & Communication Technologies On the Effectiveness of CoDel for Active Queue Management Dipesh M. Raghuvanshi, Annappa B., Mohit P. Tahiliani Department

More information

Lecture 21: Congestion Control" CSE 123: Computer Networks Alex C. Snoeren

Lecture 21: Congestion Control CSE 123: Computer Networks Alex C. Snoeren Lecture 21: Congestion Control" CSE 123: Computer Networks Alex C. Snoeren Lecture 21 Overview" How fast should a sending host transmit data? Not to fast, not to slow, just right Should not be faster than

More information

DiffServ Architecture: Impact of scheduling on QoS

DiffServ Architecture: Impact of scheduling on QoS DiffServ Architecture: Impact of scheduling on QoS Abstract: Scheduling is one of the most important components in providing a differentiated service at the routers. Due to the varying traffic characteristics

More information

TCP and BBR. Geoff Huston APNIC

TCP and BBR. Geoff Huston APNIC TCP and BBR Geoff Huston APNIC Computer Networking is all about moving data The way in which data movement is controlled is a key characteristic of the network architecture The Internet protocol passed

More information

15-744: Computer Networking. Overview. Queuing Disciplines. TCP & Routers. L-6 TCP & Routers

15-744: Computer Networking. Overview. Queuing Disciplines. TCP & Routers. L-6 TCP & Routers TCP & Routers 15-744: Computer Networking RED XCP Assigned reading [FJ93] Random Early Detection Gateways for Congestion Avoidance [KHR02] Congestion Control for High Bandwidth-Delay Product Networks L-6

More information

Congestion Control for High-Bandwidth-Delay-Product Networks: XCP vs. HighSpeed TCP and QuickStart

Congestion Control for High-Bandwidth-Delay-Product Networks: XCP vs. HighSpeed TCP and QuickStart Congestion Control for High-Bandwidth-Delay-Product Networks: XCP vs. HighSpeed TCP and QuickStart Sally Floyd September 11, 2002 ICIR Wednesday Lunch 1 Outline: Description of the problem. Description

More information

BANDWIDTH MEASUREMENT IN WIRELESS NETWORKS

BANDWIDTH MEASUREMENT IN WIRELESS NETWORKS BANDWIDTH MEASUREMENT IN WIRELESS NETWORKS Andreas Johnsson, Bob Melander, and Mats Björkman The Department of Computer Science and Electronics Mälardalen University Sweden Abstract Keywords: For active,

More information

TCP and BBR. Geoff Huston APNIC

TCP and BBR. Geoff Huston APNIC TCP and BBR Geoff Huston APNIC Computer Networking is all about moving data The way in which data movement is controlled is a key characteristic of the network architecture The Internet protocol passed

More information

Implementing stable TCP variants

Implementing stable TCP variants Implementing stable TCP variants IPAM Workshop on Large Scale Communications Networks April 2002 Tom Kelly ctk21@cam.ac.uk Laboratory for Communication Engineering University of Cambridge Implementing

More information

IJSRD - International Journal for Scientific Research & Development Vol. 2, Issue 03, 2014 ISSN (online):

IJSRD - International Journal for Scientific Research & Development Vol. 2, Issue 03, 2014 ISSN (online): IJSRD - International Journal for Scientific Research & Development Vol. 2, Issue 03, 2014 ISSN (online): 2321-0613 Performance Evaluation of TCP in the Presence of in Heterogeneous Networks by using Network

More information

Design of Network Dependent Congestion Avoidance TCP (NDCA-TCP) for Performance Improvement in Broadband Networks

Design of Network Dependent Congestion Avoidance TCP (NDCA-TCP) for Performance Improvement in Broadband Networks International Journal of Principles and Applications of Information Science and Technology February 2010, Vol.3, No.1 Design of Network Dependent Congestion Avoidance TCP (NDCA-TCP) for Performance Improvement

More information

TCP Congestion Control

TCP Congestion Control 1 TCP Congestion Control Onwutalobi, Anthony Claret Department of Computer Science University of Helsinki, Helsinki Finland onwutalo@cs.helsinki.fi Abstract This paper is aimed to discuss congestion control

More information

CS 268: Computer Networking

CS 268: Computer Networking CS 268: Computer Networking L-6 Router Congestion Control TCP & Routers RED XCP Assigned reading [FJ93] Random Early Detection Gateways for Congestion Avoidance [KHR02] Congestion Control for High Bandwidth-Delay

More information

Flow-start: Faster and Less Overshoot with Paced Chirping

Flow-start: Faster and Less Overshoot with Paced Chirping Flow-start: Faster and Less Overshoot with Paced Chirping Joakim Misund, Simula and Uni Oslo Bob Briscoe, Independent IRTF ICCRG, Jul 2018 The Slow-Start

More information

Experimental Study of TCP Congestion Control Algorithms

Experimental Study of TCP Congestion Control Algorithms www..org 161 Experimental Study of TCP Congestion Control Algorithms Kulvinder Singh Asst. Professor, Department of Computer Science & Engineering, Vaish College of Engineering, Rohtak, Haryana, India

More information

Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways

Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways Haowei Bai AES Technology Centers of Excellence Honeywell Aerospace 3660 Technology Drive, Minneapolis, MN 5548 E-mail: haowei.bai@honeywell.com

More information

Recap. TCP connection setup/teardown Sliding window, flow control Retransmission timeouts Fairness, max-min fairness AIMD achieves max-min fairness

Recap. TCP connection setup/teardown Sliding window, flow control Retransmission timeouts Fairness, max-min fairness AIMD achieves max-min fairness Recap TCP connection setup/teardown Sliding window, flow control Retransmission timeouts Fairness, max-min fairness AIMD achieves max-min fairness 81 Feedback Signals Several possible signals, with different

More information

Random Early Detection Gateways for Congestion Avoidance

Random Early Detection Gateways for Congestion Avoidance Random Early Detection Gateways for Congestion Avoidance Sally Floyd and Van Jacobson Lawrence Berkeley Laboratory University of California floyd@eelblgov van@eelblgov To appear in the August 1993 IEEE/ACM

More information

TCP and BBR. Geoff Huston APNIC

TCP and BBR. Geoff Huston APNIC TCP and BBR Geoff Huston APNIC The IP Architecture At its heart IP is a datagram network architecture Individual IP packets may be lost, re-ordered, re-timed and even fragmented The IP Architecture At

More information

Congestion Control. Tom Anderson

Congestion Control. Tom Anderson Congestion Control Tom Anderson Bandwidth Allocation How do we efficiently share network resources among billions of hosts? Congestion control Sending too fast causes packet loss inside network -> retransmissions

More information

Cross-layer Flow Control to Improve Bandwidth Utilization and Fairness for Short Burst Flows

Cross-layer Flow Control to Improve Bandwidth Utilization and Fairness for Short Burst Flows Cross-layer Flow Control to Improve Bandwidth Utilization and Fairness for Short Burst Flows Tomoko Kudo, Toshihiro Taketa, Yukio Hiranaka Graduate School of Science and Engineering Yamagata University

More information

Empirical Evaluation of Techniques for Measuring Available Bandwidth

Empirical Evaluation of Techniques for Measuring Available Bandwidth Empirical Evaluation of Techniques for Measuring Available Bandwidth Alok Shriram and Jasleen Kaur Department of Computer Science University of North Carolina at Chapel Hill Email: alok,jasleen @cs.unc.edu

More information

CS268: Beyond TCP Congestion Control

CS268: Beyond TCP Congestion Control TCP Problems CS68: Beyond TCP Congestion Control Ion Stoica February 9, 004 When TCP congestion control was originally designed in 1988: - Key applications: FTP, E-mail - Maximum link bandwidth: 10Mb/s

More information

Random Early Detection Gateways for Congestion Avoidance

Random Early Detection Gateways for Congestion Avoidance Random Early Detection Gateways for Congestion Avoidance Sally Floyd and Van Jacobson Lawrence Berkeley Laboratory University of California floyd@eelblgov van@eelblgov To appear in the August 1993 IEEE/ACM

More information

Performance Analysis of TCP Variants

Performance Analysis of TCP Variants 102 Performance Analysis of TCP Variants Abhishek Sawarkar Northeastern University, MA 02115 Himanshu Saraswat PES MCOE,Pune-411005 Abstract The widely used TCP protocol was developed to provide reliable

More information

TM ALGORITHM TO IMPROVE PERFORMANCE OF OPTICAL BURST SWITCHING (OBS) NETWORKS

TM ALGORITHM TO IMPROVE PERFORMANCE OF OPTICAL BURST SWITCHING (OBS) NETWORKS INTERNATIONAL JOURNAL OF RESEARCH IN COMPUTER APPLICATIONS AND ROBOTICS ISSN 232-7345 TM ALGORITHM TO IMPROVE PERFORMANCE OF OPTICAL BURST SWITCHING (OBS) NETWORKS Reza Poorzare 1 Young Researchers Club,

More information

A Survey on Quality of Service and Congestion Control

A Survey on Quality of Service and Congestion Control A Survey on Quality of Service and Congestion Control Ashima Amity University Noida, U.P, India batra_ashima@yahoo.co.in Sanjeev Thakur Amity University Noida, U.P, India sthakur.ascs@amity.edu Abhishek

More information

On Standardized Network Topologies For Network Research Λ

On Standardized Network Topologies For Network Research Λ On Standardized Network Topologies For Network Research Λ George F. Riley Department of Electrical and Computer Engineering Georgia Institute of Technology Atlanta, GA 3332-25 riley@ece.gatech.edu (44)894-4767,

More information

TCP Congestion Control

TCP Congestion Control 6.033, Spring 2014 TCP Congestion Control Dina Katabi & Sam Madden nms.csail.mit.edu/~dina Sharing the Internet How do you manage resources in a huge system like the Internet, where users with different

More information

Overview. TCP congestion control Computer Networking. TCP modern loss recovery. TCP modeling. TCP Congestion Control AIMD

Overview. TCP congestion control Computer Networking. TCP modern loss recovery. TCP modeling. TCP Congestion Control AIMD Overview 15-441 Computer Networking Lecture 9 More TCP & Congestion Control TCP congestion control TCP modern loss recovery TCP modeling Lecture 9: 09-25-2002 2 TCP Congestion Control Changes to TCP motivated

More information

Enhanced Forward Explicit Congestion Notification (E-FECN) Scheme for Datacenter Ethernet Networks

Enhanced Forward Explicit Congestion Notification (E-FECN) Scheme for Datacenter Ethernet Networks Enhanced Forward Explicit Congestion Notification (E-FECN) Scheme for Datacenter Ethernet Networks Chakchai So-In, Raj Jain, and Jinjing Jiang Department of Computer Science and Engineering Washington

More information

On the Transition to a Low Latency TCP/IP Internet

On the Transition to a Low Latency TCP/IP Internet On the Transition to a Low Latency TCP/IP Internet Bartek Wydrowski and Moshe Zukerman ARC Special Research Centre for Ultra-Broadband Information Networks, EEE Department, The University of Melbourne,

More information

A New Fair Window Algorithm for ECN Capable TCP (New-ECN)

A New Fair Window Algorithm for ECN Capable TCP (New-ECN) A New Fair Window Algorithm for ECN Capable TCP (New-ECN) Tilo Hamann Department of Digital Communication Systems Technical University of Hamburg-Harburg Hamburg, Germany t.hamann@tu-harburg.de Jean Walrand

More information

Incrementally Deployable Prevention to TCP Attack with Misbehaving Receivers

Incrementally Deployable Prevention to TCP Attack with Misbehaving Receivers Incrementally Deployable Prevention to TCP Attack with Misbehaving Receivers Kun Gao and Chengwen Chris Wang kgao, chengwen @cs.cmu.edu Computer Science Department Carnegie Mellon University December 15,

More information

Analytic End-to-End Estimation for the One-Way Delay and Its Variation

Analytic End-to-End Estimation for the One-Way Delay and Its Variation Analytic End-to-End Estimation for the One-Way Delay and Its Variation Jin-Hee Choi and Chuck Yoo Department of Computer Science and Engineering Korea University Email: {jhchoi, hxy}@os.korea.ac.kr Telephone:

More information

Congestion Control In the Network

Congestion Control In the Network Congestion Control In the Network Brighten Godfrey cs598pbg September 9 2010 Slides courtesy Ion Stoica with adaptation by Brighten Today Fair queueing XCP Announcements Problem: no isolation between flows

More information

Fairness of High-Speed TCP Stacks

Fairness of High-Speed TCP Stacks Fairness of High-Speed TCP Stacks Dimitrios Miras Department of Computer Science University College London (UCL) London, WCE 6BT, UK d.miras@cs.ucl.ac.uk Martin Bateman, Saleem Bhatti School of Computer

More information

Overview. TCP & router queuing Computer Networking. TCP details. Workloads. TCP Performance. TCP Performance. Lecture 10 TCP & Routers

Overview. TCP & router queuing Computer Networking. TCP details. Workloads. TCP Performance. TCP Performance. Lecture 10 TCP & Routers Overview 15-441 Computer Networking TCP & router queuing Lecture 10 TCP & Routers TCP details Workloads Lecture 10: 09-30-2002 2 TCP Performance TCP Performance Can TCP saturate a link? Congestion control

More information