Loss Synchronization, Router Buffer Sizing and High-Speed TCP Versions: Adding RED to the Mix


2009 IEEE 34th Conference on Local Computer Networks (LCN 2009), Zürich, Switzerland; 20-23 October 2009

Sofiane Hassayoun, Institut Telecom / Telecom Bretagne, Rue de la Châtaigneraie, Cesson Sévigné cedex, France, sofien.hassayoun@telecom-bretagne.eu
David Ros, Institut Telecom / Telecom Bretagne, Rue de la Châtaigneraie, Cesson Sévigné cedex, France, david.ros@telecom-bretagne.eu

Abstract: Adapting router buffer sizing rules and TCP's congestion control to Gbit/s link speeds has been the subject of many studies and proposals. In a previous work (Hassayoun and Ros, 2008), we found that high-speed versions of TCP may be prone to strong packet-loss synchronization between flows. Given that synchronized losses may lead to oscillatory behavior in router queues, a relatively large amount of buffering may be needed to sustain adequate TCP performance. In this paper, we are interested in evaluating the potential impact of the Random Early Detection (RED) queue management algorithm in the context of high-speed TCP versions. In principle, under mild congestion RED should tend to reduce loss synchronization between flows. Hence, we wanted to test in particular whether RED would allow the use of moderately-sized buffers while ensuring good performance. Simulation results are used to assess the effect of RED on goodput, link utilization and fairness among high-speed flows, for a wide range of buffer sizes and several high-speed TCP variants.

Index Terms: TCP; congestion control; high-speed networks; router buffer sizing; active queue management.

I. INTRODUCTION AND BACKGROUND

In IP networks, links running at gigabit speeds and beyond are becoming more and more commonplace. With the rise of such fast links, two related problems have come to light.

A first problem is that of the performance of the Transmission Control Protocol (TCP) in very high-speed networks. At gigabit speeds, the performance of TCP's congestion control algorithms may degrade sharply. The fault lies mainly in the way TCP dynamically adapts its congestion window cwnd, both in the absence of packet loss and when losses occur. In the congestion avoidance phase, a TCP sender increases cwnd roughly by only one segment per round-trip time (RTT). When the sender detects packet loss, cwnd is cut by (at least) half. Whenever the bandwidth-delay product (BDP) of the end-to-end path is high, the combination of these two policies for updating cwnd often results in poor performance. Indeed, if the BDP is high, cwnd may reach large values before a loss happens; hence, when a packet is lost it may take many RTT cycles before the sender attains a large window again [1].

A second problem lies in the rules used for dimensioning router buffers. The classical rule-of-thumb [2], attributed to V. Jacobson, states that the size B of a buffer in a bottleneck router should be B = C * RTT, where C is the rate of the egress bottleneck link and RTT is the average round-trip time experienced by connections that use this link. The basis of this rule is the typical sawtooth behavior of a TCP flow's congestion window. The rule expresses the minimum amount of buffering needed to avoid link underutilization, assuming there is a single, long-lived flow (or a set of fully-synchronized flows) using the link [3]. In high-BDP links, such a rule often yields huge, impractical values of B.
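To give a concrete feeling for why recovery is slow at high BDP, the following back-of-envelope sketch (in Python) counts the additive-increase RTTs standard TCP needs to climb back after halving a pipe-filling window; the 10 Gb/s rate, 100 ms RTT and 1500-byte packets are illustrative assumptions, not values taken from this paper.

# Back-of-envelope illustration: with standard TCP's AIMD rules
# (+1 segment per RTT, halving on loss), recovering a large congestion
# window after a single loss takes on the order of W/2 round-trip times.

def rtts_to_recover(link_bps: float, rtt_s: float, pkt_bytes: int = 1500) -> int:
    """RTTs needed to climb back from W/2 to W, where W is the BDP in packets."""
    bdp_pkts = int(link_bps * rtt_s / (pkt_bytes * 8))  # window filling the pipe
    return bdp_pkts - bdp_pkts // 2                      # one segment gained per RTT

if __name__ == "__main__":
    n = rtts_to_recover(10e9, 0.1)   # assumed 10 Gb/s path, 100 ms RTT
    print(f"~{n} RTTs (~{n * 0.1:.0f} s) to regain the pre-loss window")

With these example numbers the sender needs tens of thousands of RTTs, i.e., over an hour of congestion avoidance, to fill the pipe again after a single loss.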
A. New transport protocols for high-speed networks

Several new congestion-control algorithms have been proposed to address the need for a transport protocol that ensures efficient utilization of high-speed links. Many of these protocols are loss-based, which means that the congestion signal they react to is packet loss. Other protocols are delay-based: in addition to losses, they interpret an increase of the RTT as a congestion signal, and a decrease of the RTT as a signal of available, unused bandwidth. In this work, we will consider the following high-speed TCP versions: HighSpeed TCP (HSTCP), Hamilton TCP (H-TCP), BIC, CUBIC and Compound TCP (CTCP). Except for the latter, all these protocols are loss-based.

HSTCP [1] adapts its window increase (alpha) and decrease (beta) parameters as a function of the current value of cwnd; when cwnd increases, alpha increases and beta decreases. Such an adaptation mechanism only comes into play when cwnd grows above a given threshold; in low-BDP paths, where the window cannot get very large, HSTCP behaves like standard TCP.

H-TCP [4] adjusts its window increase/decrease parameters based on: (i) the time elapsed since the last congestion event; (ii) the ratio RTT_min/RTT_max, where RTT_min and RTT_max are, respectively, the minimum and the maximum RTT experienced by the flow between the last two congestion events; and (iii) the two sending rates reached by the sender just before the two last congestion events. H-TCP is slightly more complex than HSTCP, but its design aims at eliminating the RTT-unfairness problem of TCP [4].

BIC TCP [5] and CUBIC [6] are two related protocol variants. BIC keeps track of the last value cwnd_max that cwnd attained just before a window decrease. It then tries to reach this value again using a binary-search technique. When cwnd exceeds cwnd_max, BIC enters a phase called max probing, in which it increases cwnd exponentially and then linearly.
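The following is a deliberately simplified, illustrative sketch of the binary-search growth just described. The S_MAX and BETA constants, the per-RTT step and the max-probing rule are placeholders chosen for readability; they do not reproduce BIC's exact parameters or its full state machine.

# Simplified sketch of BIC-style window growth (one update per RTT).
S_MAX = 32     # cap on the per-RTT increment, in segments (assumed value)
BETA = 0.8     # multiplicative decrease factor applied on loss (assumed value)

def bic_on_loss(cwnd: float) -> tuple[float, float]:
    """On a loss: remember the window where the loss occurred, then shrink."""
    cwnd_max = cwnd
    return cwnd * BETA, cwnd_max

def bic_per_rtt(cwnd: float, cwnd_max: float) -> float:
    """One RTT of growth: binary search below cwnd_max, max probing above it."""
    if cwnd < cwnd_max:
        step = min((cwnd_max - cwnd) / 2.0, S_MAX)   # jump halfway to the old maximum
    else:
        step = min(cwnd - cwnd_max + 1.0, S_MAX)     # probe increasingly fast, then linearly
    return cwnd + max(step, 1.0)                     # never slower than standard TCP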

CUBIC was proposed to improve the behavior of BIC in low-speed networks, or when flows have short RTTs. It uses a cubic function to govern the window increase algorithm; the slope of the window is lowest when it is close to the target value cwnd_max. CUBIC is now the default congestion control algorithm in the Linux kernel.

CTCP [7] is a delay-based protocol implemented in Windows Vista and Windows Server 2008. It uses a window wnd computed as the sum of a classical congestion window cwnd (governed exactly as in standard TCP) and a delay window dwnd which depends on RTT measurements. CTCP estimates the number of backlogged packets along the path and compares it to a threshold gamma (a protocol parameter); if the estimated backlog is smaller than gamma then dwnd is increased, whereas if it reaches gamma then dwnd is decreased. dwnd is updated so that it is always non-negative, and when it is equal to zero, CTCP behaves like standard TCP. Thus, CTCP aggressively increases wnd when the backlog is small; once the threshold is attained, it switches to a more conservative window-growth regime.

B. Router buffer sizing and loss synchronization

Starting with the seminal work by Appenzeller et al. [3], many authors (e.g., [8]-[12]) have questioned the validity of the C * RTT rule-of-thumb for buffer sizing. New rules have been proposed to estimate a minimum appropriate amount of buffering, while guaranteeing a high link utilization.

The problem of buffer sizing is related to that of loss synchronization. Loss synchronization happens whenever two or more TCP flows experience packet loss in a short time interval, resulting in their lowering their windows in unison. The amount of loss synchronization is an important parameter for reducing buffering requirements. In fact, the analysis in [3] shows that when synchronization is very low, buffer sizes can be reduced; conversely, when synchronization is high, large buffers should be provided to absorb the bursts of packets due to synchronization. Appenzeller et al. [3] proposed to use the sizing rule B = C * RTT / sqrt(N), where N is the number of long-lived flows sharing the bottleneck buffer. Similar rules (e.g., [8]) are also based on the hypothesis that synchronization decreases as the number of competing flows increases. Other authors [9]-[11] have proposed using much smaller buffers. For instance, Enachescu et al. [11] assert that a few tens of packets may be sufficient if TCP senders use a pacing scheme. Even if there is no consensus yet on what is the right amount of buffering [13], some recent experimental results suggest that buffer sizes may indeed be reduced in practice [14].

Note that all the above rules are based on the behavior of standard, low-speed TCP. Besides, as our previous results have suggested [15], high-speed TCP variants may be prone to high drop synchronization, even under conditions that would tend to lessen phase effects. Hence, we believe the adoption of more aggressive TCPs in mainstream operating systems means the buffer sizing problem remains an open issue.
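As a rough illustration of how much the sqrt(N) rule relaxes the classical requirement, the sketch below computes both buffer sizes in packets; the 10 Gb/s rate, 100 ms average RTT and 1500-byte packets are example values only, not this paper's settings.

# Buffer sizes (in packets) under the classical rule of thumb and under
# the C*RTT/sqrt(N) rule of Appenzeller et al. [3], for assumed link values.

def rule_of_thumb(link_bps: float, rtt_s: float, pkt_bytes: int = 1500) -> int:
    """Classical B = C * RTT, expressed in packets."""
    return int(link_bps * rtt_s / (pkt_bytes * 8))

def sqrt_n_rule(link_bps: float, rtt_s: float, n_flows: int, pkt_bytes: int = 1500) -> int:
    """B = C * RTT / sqrt(N), the rule proposed in [3]."""
    return int(rule_of_thumb(link_bps, rtt_s, pkt_bytes) / n_flows ** 0.5)

if __name__ == "__main__":
    for n in (1, 100, 10_000):
        print(f"N={n:>6}: rule of thumb = {rule_of_thumb(10e9, 0.1):,} pkts, "
              f"C*RTT/sqrt(N) = {sqrt_n_rule(10e9, 0.1, n):,} pkts")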
C. Active queue management

The main goal of active queue management (AQM) algorithms is to react to congestion in a proactive way [16]. When AQM is used in router buffers, congestion signals are sent to traffic sources to indicate incipient buffer overload. Transport endpoints that detect congestion signals would react before the buffer actually gets full, thus avoiding loss bursts and (in theory) improving overall performance. Such signals may simply consist of dropping packets. AQM methods can also be used in conjunction with mechanisms like Explicit Congestion Notification (ECN) [17], to tag packets instead of discarding them.

Random Early Detection (RED) [18] is probably the best-known AQM algorithm. With RED, the decision of dropping (or tagging) a packet is based on a running estimate of the average queue q at the buffer. This average value is updated with every incoming packet. A piecewise-linear drop probability function p(q) is used to select the packets that will be marked with a congestion signal (i.e., discarded or tagged). When q gets above a given threshold theta_min, incoming packets are marked with probability p(q) > 0, as shown in Fig. 1, up to a maximum value p_max. If q becomes greater than a threshold theta_max, then p = 1. In the so-called gentle RED variant, p gradually increases from p_max to 1 when q is in the [theta_max, 2*theta_max] range.

Fig. 1. RED: drop probability function p(q), with thresholds theta_min and theta_max and the gentle RED variant up to 2*theta_max.

One important effect of RED is that it tends to break loss synchronization between TCP flows or, more precisely, it avoids causing global synchronization as much as possible [18]. This is because of the way RED drops packets in its congestion-avoidance mode, i.e., when q is in (theta_min, theta_max). First, the algorithm tries to uniformly spread losses over time. Besides, p_max should generally take fairly low values; hence, the probability of dropping consecutively-arriving packets would often be low. In other words, RED tries to prevent loss bursts from happening. (In fact, the computation of the actual drop probability is a little more involved, since the algorithm tries to avoid random loss bursts by spreading the losses over time; see [18] for details.) Finally, note that drop synchronization can lead to unfairness between TCP flows with different RTTs. In such cases, RED may help in mitigating RTT unfairness [5], [19], [20].
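The marking function of Fig. 1, including the gentle extension, can be written down directly as a short sketch; the thresholds and p_max are passed in as parameters, and the count-based spreading of drops mentioned in the parenthetical note above is deliberately omitted.

# Piecewise-linear RED marking probability as a function of the average queue.

def red_drop_prob(avg_q: float, th_min: float, th_max: float,
                  p_max: float, gentle: bool = True) -> float:
    """Probability of marking (dropping or tagging) an arriving packet."""
    if avg_q < th_min:
        return 0.0
    if avg_q < th_max:                        # RED's congestion-avoidance region
        return p_max * (avg_q - th_min) / (th_max - th_min)
    if gentle and avg_q < 2 * th_max:         # gentle RED: ramp from p_max up to 1
        return p_max + (1.0 - p_max) * (avg_q - th_max) / th_max
    return 1.0

# Example: with th_min = 100, th_max = 300 packets and p_max = 0.1, an average
# queue of 200 packets gives a marking probability of 0.05.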

D. Motivation and structure of the paper

As we found in [15], several high-speed TCP variants (HSTCP, H-TCP and BIC) are able to achieve very good performance with Drop-Tail buffers as small as 10% of the BDP, in spite of very high levels of synchronization. The increased aggressiveness of these protocols tends to compensate for the higher losses; however, enough buffering must be provided to cope with the high burstiness induced by both high-speed congestion control and synchronization.

In this paper we are interested in the relation between buffer sizes, active queue management and loss synchronization when high-speed transport protocols are used. In particular, since RED may alleviate loss synchronization between flows, one may wonder whether using RED may allow a further reduction of the buffer size in a bottleneck link (say, to a size well below 10% of the BDP), while maintaining high goodput and link utilization. We present some results of an ns-2 simulation study aimed at exploring the above conjecture. As in [15], we have focused on scenarios with moderate levels of statistical multiplexing (i.e., tens of long-lived, high-speed flows), while allowing the flows of interest to really operate in a high-speed regime.

The remainder of the paper is organized as follows. In Section II we briefly review some additional related work. The simulation scenarios and performance metrics used in this study are detailed in Section III. Simulation results are presented and discussed in Section IV. Finally, Section V concludes the paper.

II. RELATED WORK

As discussed above, much work has been devoted to improving the performance of TCP in high-speed networks, as well as to the buffer sizing problem. Also, several papers, such as [21]-[23], have studied fairness issues with high-speed TCPs (inter-protocol and/or intra-protocol fairness), but they have all focused on Drop-Tail queues. However, we are aware of few papers dealing with the interaction of high-speed flows with AQM-enabled buffers.

In [5], Xu et al. present a comparative study of several high-speed protocols, focusing on the RTT-unfairness problem and its relation with drop synchronization. Simulations are conducted to evaluate the performance of HSTCP, BIC and STCP [24] in terms of link utilization, RTT fairness, TCP-friendliness and convergence to fairness, both with Drop-Tail and RED queues. Note that they only consider buffers sized as per the C * RTT rule. They explain the unfairness characteristic of some protocols in terms of their increased aggressiveness. Such protocols adapt their increase/decrease parameters according to the current value of the congestion window, and the latter grows faster as the RTT gets smaller. As a consequence, the increase parameter will grow larger and larger, causing unfairness towards flows that have a larger RTT. Such unfairness may increase with high loss synchronization. This may explain the improved RTT fairness they observed with RED: indeed, in terms of fairness, they find that high-speed protocols converge faster to a fair share with RED queues than with Drop-Tail queues, due to the lower loss synchronization observed with RED.

Lee et al. [19] studied the effect of background traffic on drop synchronization between flows. They show that background traffic may accentuate RTT unfairness, in particular with the high-speed TCP versions they considered (HSTCP and STCP). They also found that random-drop queuing disciplines such as RED may reduce unfairness in such settings. The impact of background traffic on high-speed protocols has also been investigated by Ha et al. [25].
Barman et al. [26] focused on HSTCP performance under Drop-Tail and RED queues with two different buffer sizes. They presented an analytical model and simulated a dumbbell topology with HSTCP flows in the same direction, with buffer sizes of 10% and 20% of the BDP. They observed that, with such settings, the link utilization was always above 90%. Tokuda et al. [27] briefly discuss the impact that the settings of RED's thresholds may have on HSTCP in the slow-start phase.

In [20], Chen and Bensaou focused on TCP unfairness in scenarios with multiple congested links. They considered simulated scenarios with a low level of statistical multiplexing, and either Drop-Tail or RED queues with ECN enabled; bottleneck buffer sizes were set either to the BDP or to 2% of the BDP. They found that, with Drop-Tail queues, unfairness among flows is high, even when flows have similar RTTs. Moreover, unfairness with high-speed TCPs in high-speed networks seems to be higher than that with Reno or NewReno TCP in normal-speed networks. This seems to be due to the prevalence of synchronized loss patterns in the former case. They argue for the use of RED for reducing loss synchronization and, thus, for mitigating unfairness.

III. SIMULATION SCENARIOS

A simulation study using the ns-2 network simulator was performed in order to assess the impact of RED on the performance of high-speed flows, as a function of the buffer size.

A. Network and traffic characteristics

We considered a dumbbell topology, in which two routers are connected through a single 5 Gb/s bottleneck link. End-nodes are connected to one of the two routers through an access link whose one-way delay is chosen at random between 5 ms and 20 ms, to avoid phase effects. Buffers on the access links are large enough to avoid congestion in those links; such buffers are always of the Drop-Tail kind. Hence, all losses take place in the 5 Gb/s link.

A simulation scenario consisted of a choice of: a high-speed TCP variant, a bottleneck buffer size B, and a queue management method for such buffers. For every scenario, 3 independent simulation runs were performed. Each run corresponds to a fixed amount of simulated time; except for the fairness index, all measurements were done after a warm-up period.
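As a rough, illustrative check of the path characteristics implied by this setup, the snippet below samples the randomized access delays described above and derives the resulting mean round-trip time and bandwidth-delay product. The 100 ms bottleneck one-way delay and the 1500-byte packet size are assumptions made only for this example; they are not values given in the paper.

import random

BOTTLENECK_BPS = 5e9        # bottleneck rate, from the text
BOTTLENECK_OWD = 0.100      # assumed bottleneck one-way delay (s)
PKT_BITS = 1500 * 8         # assumed packet size

def sample_rtt(rng: random.Random) -> float:
    """Two-way propagation delay: source access + bottleneck + destination access."""
    acc_src = rng.uniform(0.005, 0.020)   # one-way access delays, as in the text
    acc_dst = rng.uniform(0.005, 0.020)
    return 2 * (acc_src + BOTTLENECK_OWD + acc_dst)

if __name__ == "__main__":
    rng = random.Random(1)
    rtts = [sample_rtt(rng) for _ in range(10_000)]
    mean_rtt = sum(rtts) / len(rtts)
    bdp_pkts = BOTTLENECK_BPS * mean_rtt / PKT_BITS
    print(f"mean RTT ~ {mean_rtt * 1e3:.0f} ms, BDP ~ {bdp_pkts:,.0f} packets")

Under these assumptions the mean RTT is about 250 ms and the bottleneck BDP is on the order of 10^5 packets, which is the scale against which the buffer sizes below are defined.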

Dimensioning of the bottleneck buffers followed one of these four rules: (a) the C * RTT rule-of-thumb [2]; (b) the 0.63 * C * RTT / sqrt(N) rule [8], with N equal to the number of (long-lived) high-speed flows sharing the buffer; (c) an intermediate size of roughly 2,000 packets, equivalent to 2% of the BDP; (d) a very small buffer, in the spirit of [11]. Rules (a) and (b) yield buffer sizes on the order of 10^5 and 10^4 packets, respectively. Note that not all these values are necessarily realistic; moreover, these rules are not necessarily optimal for all the protocols considered. Rather, they are used simply to cover a wide range of buffer sizes.

We considered two types of traffic, simultaneously sharing the bottleneck in both directions: long file transfers and web traffic. In each direction, N = 40 long-lived flows were simulated (there are thus N = 40 end-nodes and access links connected to each bottleneck router, so the aggregate input rate to the bottleneck buffers may exceed 5 Gb/s); in a given scenario, all these flows use the same high-speed TCP version (HSTCP, H-TCP, BIC, CUBIC or CTCP). After an initial interval, long-lived connections were started successively in each direction, with inter-start times uniformly distributed in [0.4, 2.4] s.

We also simulated small, web-like TCP connections in each direction as background traffic. For every long-lived flow, there are one or more background flows sharing the corresponding access links. Each background flow is modeled as an on-off process, with exponentially-distributed off (thinking-time) periods of mean 4 s alternating with on (activity) periods; the amount of data transferred during on periods follows a Pareto distribution with shape parameter 1.2, with one of three different mean sizes. The start time of each background flow was chosen uniformly at random over the simulation duration. The version of TCP used by the background traffic is NewReno. Loss events in which only background flows lose packets are not taken into account when computing the synchronization metrics.

1) Bottleneck queue management algorithm: For both directions of the bottleneck link, we simulated either Drop-Tail queues or RED queues. Correctly setting the parameters of RED is known to be a difficult problem (see e.g. [28]); besides, tuning RED for high-speed TCP versions is outside the scope of this work. Therefore, we chose to use the Adaptive RED (A-RED) algorithm [29] currently implemented in ns-2, with the gentle RED option on and operating in packet mode. A-RED aims at stabilizing the average queue occupation around some predefined value theta. For this purpose, A-RED dynamically adapts the slope of the drop probability function (i.e., the p_max parameter), to try to keep q in a small interval around theta.

RED parameters were adapted to the buffer size B in the following manner. The target mean queue theta was set as a fixed fraction of the buffer size, theta = B/3. Thus, following A-RED's rules, the RED thresholds were taken as theta_min = B/6 and theta_max = B/2; in other words, the target queue is centered between the two thresholds defining RED's congestion-avoidance mode. As in [29], the weight w_q of the mean-queue estimator used by RED is calculated as a function of the link speed C (in packets per second), as w_q = 1 - exp(-1/C). Remark that ECN was not used, so RED will actually discard packets whenever q exceeds theta_min. The study of ECN in this context is left for future work.

B. Metrics

Our evaluation focuses on the following metrics: loss synchronization, goodput, link utilization, packet loss rate, and convergence to fairness for high-speed flows.
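The mapping from buffer size to RED settings described above is straightforward to write down; in this sketch the 2,000-packet example buffer comes from rule (c), while the 1500-byte packet size used to convert the link rate into packets per second is an assumption made only for the example.

import math

def ared_params(buf_pkts: int, link_pps: float) -> dict:
    """Adaptive-RED settings for a buffer of buf_pkts packets on a link of link_pps pkt/s."""
    theta = buf_pkts / 3                     # target mean queue, theta = B/3
    th_min = buf_pkts / 6                    # theta_min = B/6
    th_max = buf_pkts / 2                    # theta_max = B/2 (theta sits midway)
    w_q = 1.0 - math.exp(-1.0 / link_pps)    # queue-averaging weight, as in A-RED
    return {"theta": theta, "theta_min": th_min, "theta_max": th_max, "w_q": w_q}

if __name__ == "__main__":
    link_pps = 5e9 / (1500 * 8)              # ~417,000 packets/s at 5 Gb/s
    print(ared_params(2000, link_pps))

Running this for the intermediate buffer gives theta_min of roughly 333 packets, theta_max of 1,000 packets, and a very small w_q, reflecting the high packet rate of the link.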
Regarding loss synchronization, we compute a global synchronization rate R as in [15]. R(l) measures the proportion of flows suffering packet drops in a loss event l:

    R(l) = (1/M) * sum over k = 1..M of d_{k,l},    (1)

where M denotes the number of flows that lose at least one packet during a long measuring interval tau, and d_{k,l} is an indicator variable such that d_{k,l} = 1 if flow k loses one or more packets at loss event l, and d_{k,l} = 0 otherwise. In our scenarios, M = N = 40. Note that R(l) lies in [1/M, 1]. A loss event is considered common to two flows if their respective losses take place within an interval of duration one RTT.

Fairness between high-speed flows is measured by means of the well-known Jain fairness index [30]. We start counting the bytes sent by each flow a few seconds after the last high-speed flow has started. At regularly spaced time instants t_j, j = 1, 2, ..., we measure the average throughput of each flow over the interval elapsed since counting began, and then compute a cumulative fairness index JFI_j as:

    JFI_j = (sum over i = 1..N of x_{i,j})^2 / (N * sum over i = 1..N of x_{i,j}^2),    (2)

where N = 40 is the number of long-lived, high-speed flows and x_{i,j} is the j-th measurement of the throughput of flow i.
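For concreteness, the two metrics just defined translate directly into code; the example figures in the closing comment are made up for illustration.

# Direct implementations of Eq. (1) (global synchronization rate) and
# Eq. (2) (cumulative Jain fairness index).

def sync_rate(num_losing_flows: int, num_flows: int) -> float:
    """R(l): fraction of the M monitored flows that lose at least one packet in event l."""
    return num_losing_flows / num_flows

def jain_fairness(throughputs: list[float]) -> float:
    """JFI = (sum x_i)^2 / (N * sum x_i^2); equals 1 for a perfectly fair share."""
    n = len(throughputs)
    total = sum(throughputs)
    return total ** 2 / (n * sum(x * x for x in throughputs))

# Example: a loss event hitting 10 of 40 flows gives R = 0.25, and per-flow
# throughputs of (120, 130, 125, 125) Mb/s give a JFI of about 0.999.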

IV. RESULTS

A. Loss synchronization

Fig. 2. CDF of the global synchronization rate R, for (a) BIC TCP, (b) HSTCP, (c) H-TCP, (d) CUBIC and (e) Compound TCP.

Figure 2 shows the distribution of the global synchronization rate R, for both RED and Drop-Tail queues, with the different high-speed TCPs. First, note that all loss-based protocols show a qualitatively similar behavior. For large buffers (i.e., B = C * RTT and B = 0.63 * C * RTT / sqrt(N)), RED strongly reduces the synchronization rate, as expected, whereas with Drop-Tail R often takes values close to 1. However, with intermediate and small buffers, the loss synchronization rate is roughly similar with both types of queues; in fact, note that, when RED is used, synchronization may even be higher with the intermediate buffer size than with the smallest buffer. This may be explained by the relatively small sizes of these two buffers with respect to the size of the packet bursts that arrive at the router. When they reach large window values, high-speed TCP flows may behave in a very bursty manner; such bursts can easily saturate the smaller-sized buffers, so that these buffers end up working most of the time in Drop-Tail mode. Besides, to compute the global synchronization rate we are looking at losses in a short observation window. With the smaller buffers using RED, we can detect losses caused by a buffer overflowing during this interval; however, even if the buffer empties immediately afterwards, the average queue q may still take a large value, so we will also observe random losses (note that the response time of the average-queue estimator, as given by w_q, is independent of the buffer size). These random drops will be seen as losses experienced at the same time as the losses due to actual buffer overflow and, thus, we will obtain a higher synchronization rate.

We can also see that, with Drop-Tail queues, the synchronization rate of CTCP flows is roughly independent of the buffer size. In fact, in the Drop-Tail case the distribution of R for CTCP is qualitatively similar to that for SACK TCP [15]. Note also that, with large RED buffers, there is almost no drop synchronization.

B. Goodput and link utilization

Figures 3 and 4 present the average goodput and the link utilization, respectively, with RED and Drop-Tail queues; 95% confidence intervals are also shown. In general, it can be said that the larger the buffers, the better both the goodput and the utilization, irrespective of the queue management algorithm used. Compared with the theoretical value of the per-flow fair bandwidth share (5 Gb/s shared by 40 flows, i.e., 125 Mb/s per flow), the goodput obtained with large buffers is quite high. The difference between the actual goodput and the fair share becomes significant when using the intermediate and small-sized buffers.

Fig. 3. Goodput of high-speed flows: (a) with RED queues; (b) with Drop-Tail queues.

We can notice that RED does not clearly improve the goodput of these protocols (for a given buffer size and protocol, confidence intervals for Drop-Tail and RED overlap most of the time). However, we can observe a drastic degradation of the goodput (and, thus, of the utilization) of H-TCP with RED and a buffer size equal to the BDP; this seemingly abnormal behavior can be explained in terms of the very high synchronization (Fig. 2c) and loss rate (see Section IV-C). Finally, remark that CTCP seems to be the most sensitive to the buffer size, in terms of both goodput and utilization.

Fig. 4. Bottleneck link utilization: (a) with RED queues; (b) with Drop-Tail queues.

C. Loss rate

Figure 5 shows the loss rate for the different scenarios, with 95% confidence intervals. Note first that the impact of RED on the loss rate may depend on the buffer size, as well as on the aggressiveness of the transport protocol.

Fig. 5. Loss rate of long-lived flows.

Remark that CTCP experiences a very low loss rate compared to the other protocols; this may be related to the behavior of its delay-based congestion control. In fact, a CTCP sender increases its sending window quickly (by a fast increase of the delay window dwnd) when buffers are almost empty, i.e., when the RTT is small. However, the sender slows down as soon as buffers start filling up, and keeps an almost constant value of the sending window (the delay component of the window stops increasing); thus, the increase of the window becomes as slow as in standard TCP, i.e., the window grows by roughly one segment per RTT.

We can also notice that, with RED queues and B = C * RTT, H-TCP experiences a high loss rate, which is consistent with the lower link utilization and goodput observed in the previous section for such a value of B. This could be explained as follows. The window decrease parameter beta of H-TCP is updated according to the following rule [4]:

    beta = 0.5,                  if |B(k+1) - B(k)| / B(k) > 0.2,
    beta = RTT_min / RTT_max,    otherwise,

where B(k) is the throughput measured just before the k-th loss event. As suggested by Leith and Shorten [4], beta should lie in the [0.5, 0.8] interval. With the RED algorithm, losses may occur either when the buffer actually overflows (possibly leading to a relatively high measured sending rate), or when q > theta_min and the buffer is not full (possibly leading to a lower measured sending rate). In this case, it is more likely that the throughput varies by more than 20% from one congestion event to the next and, consequently, beta will likely be set to 0.5, i.e., the window will likely suffer a larger reduction in size. On the other hand, with Drop-Tail queues losses occur only when the buffer overflows; in this case, it is more probable that, for the k-th loss event, B(k) and B(k-1) will be close to each other, so beta will depend on the ratio RTT_min/RTT_max. Indeed, in the case of H-TCP with RED, we observed that roughly 30% of cwnd reductions were done with a decrease parameter beta = 0.5. On the other hand, with Drop-Tail queues beta = 0.5 in less than 5% of the loss events; for most loss events, beta varied between 0.75 and 0.8. In addition, the number of loss events with RED was, on average, almost 1.5 times higher than with Drop-Tail.
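A small sketch of this backoff rule, with the clamping to the [0.5, 0.8] interval mentioned above; the example values in the closing comment are illustrative only.

def htcp_beta(b_prev: float, b_curr: float, rtt_min: float, rtt_max: float) -> float:
    """H-TCP window decrease factor applied at a loss event.

    b_prev and b_curr are the throughputs measured just before the two most
    recent loss events; a swing of more than 20% forces a conservative halving,
    otherwise beta follows RTT_min/RTT_max, kept within [0.5, 0.8].
    """
    if abs(b_curr - b_prev) / b_prev > 0.2:
        return 0.5
    return min(max(rtt_min / rtt_max, 0.5), 0.8)

# Example: with stable throughput and RTT_min/RTT_max = 0.9 the window is only
# reduced to 0.8 * cwnd, whereas a >20% throughput change forces a halving.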
Such degraded performance of H-TCP with RED does not appear with smaller buffers. Huge buffers allow cwnd to reach high values, hence allowing cwnd and the throughput to vary over a larger interval. With a smaller buffer, the throughput will tend to vary within a smaller interval, so the variation between the two measured throughputs may well be less than 20%. In that case, the value of beta will depend mostly on RTT_min/RTT_max. Furthermore, since buffers are smaller, the difference between RTT_min and RTT_max will be small; in such a case, beta will be set to min(RTT_min/RTT_max, 0.8).

In fact, we observed that almost all values of beta were very close or equal to 0.8, whether with Drop-Tail or with RED, for the smaller buffer sizes.

D. Convergence to fairness

Figure 6 shows the convergence to a fair share of the bottleneck bandwidth between all competing high-speed flows. The JFI of H-TCP is always over 0.9 for every buffer size. BIC, CUBIC and HSTCP converge more slowly than H-TCP to a fair share. Nevertheless, we can observe that, for the former three protocols, RED improves the convergence speed towards a JFI > 0.9, but the degree of improvement decreases as the buffer size decreases. We can even see that, for small buffers, these protocols converge faster to a fair share with Drop-Tail queues than with RED queues. To get a concrete idea of the improvement in JFI convergence time, consider the time needed for the JFI of CUBIC to exceed 0.9, which we will consider a fair share. With RED and B = C * RTT, CUBIC reaches this level considerably earlier than with Drop-Tail queues. For the smaller buffer sizes, the convergence seems to be slower than with the largest buffer; with Drop-Tail, the JFI of CUBIC takes about 65 seconds more than with RED to reach the fair share. Finally, note how the convergence of the JFI towards the fair share becomes slower with RED queues for medium and small-sized buffers.

In the case of CTCP, the smaller the buffers, the worse the JFI; the JFI is always under 0.85 in all configurations. We can observe that CTCP flows have trouble achieving a fair share of the bottleneck bandwidth with Drop-Tail, an issue that has been reported elsewhere [31]; further, remark that even RED could not resolve the fairness problem of CTCP. Note that the results in [31] also suggest that CTCP is not scalable with the number of flows, and that the default value of gamma may be a wrong estimate of the appropriate number of packets that should be backlogged in the bottleneck link queue.

V. CONCLUSIONS AND FUTURE WORK

Previous research has suggested that high-speed TCP versions may sustain high rates and link utilization in large-BDP networks, even when bottleneck buffers are sized well below the BDP. Values as low as 10% of the BDP seem to offer a good tradeoff between buffer memory size and TCP performance. The primary goal of this paper was to explore whether RED may help in further reducing buffer sizes, while avoiding a strong degradation in performance.

Our main results can be summarized as follows. First, for moderate-sized buffers (i.e., about 2,000 packets, or 2% of the BDP), the use of RED does not necessarily offer any gains. This result, which is somewhat counterintuitive, can be explained in terms of the burstiness of high-speed protocols. When the buffer gets too small, it cannot absorb the large bursts due to the faster window growth inherent to such protocols. Hence, most of the losses are due to actual buffer overflow; that is, the buffer mostly behaves as a Drop-Tail buffer. In this case, the effect of RED on loss synchronization is negligible. Second, the effect of RED is most salient for larger buffers, which can absorb packet bursts more easily. In this case, the use of RED has a significant impact on synchronization, as expected. Note that this lower synchronization does not necessarily translate into better goodput or link utilization. However, RED helps in improving the convergence to fairness; this is consistent with the results of some other studies (e.g., [5]).
Several issues deserve further study. For instance, loss synchronization can be highly dependent on real-world conditions that are hard to reproduce accurately in simulations, such as processing times [32]. Hence, it would be interesting to perform a similar study in an experimental testbed. Also, the impact of ECN in this context is worth investigating. We chose to focus the scope of the paper by restricting the choice of the AQM algorithm and its parameters; in light of the potential interactions between RED and some high-speed protocols (e.g., H-TCP), we intend to evaluate a wider range of RED configurations and their relation with the buffer size. Finally, we would like to explore the design of AQM mechanisms adapted to high-speed TCP versions.

REFERENCES

[1] S. Floyd, "HighSpeed TCP for large congestion windows," Experimental RFC 3649, IETF, Dec. 2003.
[2] C. Villamizar and C. Song, "High performance TCP in ANSNET," ACM SIGCOMM Computer Communications Review, vol. 24, no. 5, pp. 45-60, Oct. 1994.
[3] G. Appenzeller, I. Keslassy, and N. McKeown, "Sizing router buffers," in Proceedings of ACM SIGCOMM 2004, Portland (OR), USA, Sep. 2004.
[4] D. Leith and R. Shorten, "H-TCP: TCP for high-speed and long-distance networks," in Proceedings of PFLDnet 2004, Argonne (IL), USA, Feb. 2004.
[5] L. Xu, K. Harfoush, and I. Rhee, "Binary Increase Congestion control for fast long-distance networks," in Proceedings of IEEE INFOCOM 2004, Hong Kong, Mar. 2004.
[6] S. Ha, I. Rhee, and L. Xu, "CUBIC: A new TCP-friendly high-speed TCP variant," ACM SIGOPS Operating Systems Review, vol. 42, no. 5, pp. 64-74, Jul. 2008.
[7] K. Tan, J. Song, Q. Zhang, and M. Sridharan, "A Compound TCP approach for high-speed and long distance networks," in Proceedings of IEEE INFOCOM 2006, Barcelona, Spain, Apr. 2006.
[8] D. Wischik and N. McKeown, "Part I: Buffer sizes for core routers," ACM SIGCOMM Computer Communications Review, vol. 35, no. 2, Jul. 2005.
[9] S. Gorinsky, A. Kantawala, and J. Turner, "Link buffer sizing: A new look at the old problem," in Proceedings of IEEE ISCC 2005, Cartagena, Spain, Jun. 2005.
[10] G. Raina and D. Wischik, "Buffer sizes for large multiplexers: TCP queueing theory and instability analysis," in Proceedings of NGI 2005, Rome, Italy, Apr. 2005.
[11] M. Enachescu, Y. Ganjali, A. Goel, N. McKeown, and T. Roughgarden, "Part III: Routers with very small buffers," ACM SIGCOMM Computer Communications Review, vol. 35, no. 2, Jul. 2005.
[12] K. Avrachenkov, U. Ayesta, and A. Piunovskiy, "Optimal choice of the buffer size in the Internet routers," in Proceedings of IEEE CDC-ECC 2005, Seville, Spain, Dec. 2005.
[13] G. Vu-Brugier, R. Stanojević, D. Leith, and R. Shorten, "A critique of recently proposed buffer-sizing strategies," ACM SIGCOMM Computer Communications Review, vol. 37, no. 1, Jan. 2007.
[14] N. Beheshti, Y. Ganjali, M. Ghobadi, N. McKeown, and G. Salmon, "Experimental study of router buffer sizing," in Proceedings of ACM IMC 2008, Vouliagmeni, Greece, Oct. 2008.

Fig. 6. Evolution of the fairness index (JFI) over time, for (a) BIC TCP, (b) HSTCP, (c) H-TCP, (d) CUBIC and (e) Compound TCP, with Drop-Tail and RED queues and the four buffer sizes (rule of thumb, 0.63 * C * RTT / sqrt(N), intermediate, small).

[15] S. Hassayoun and D. Ros, "Loss synchronization, router buffer sizing and high-speed versions of TCP," in Proceedings of the IEEE INFOCOM High-Speed Networks Workshop (HSN 2008), Phoenix (AZ), USA, Apr. 2008.
[16] B. Braden et al., "Recommendations on queue management and congestion avoidance in the Internet," Informational RFC 2309, IETF, Apr. 1998.
[17] K. Ramakrishnan, S. Floyd, and D. Black, "The addition of Explicit Congestion Notification (ECN) to IP," Standards Track RFC 3168, IETF, Sep. 2001.
[18] S. Floyd and V. Jacobson, "Random early detection gateways for congestion avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, pp. 397-413, Aug. 1993.
[19] J. Lee, S. Bohacek, J.-P. Hespanha, and K. Obraczka, "A study of TCP fairness in high-speed networks," University of California, Santa Barbara, Technical report, Apr. 2005.
[20] S. Chen and B. Bensaou, "Can high-speed networks survive with DropTail queues management?" Computer Networks, vol. 51, no. 7, May 2007.
[21] Y. Li, D. Leith, and R. Shorten, "Experimental evaluation of TCP protocols for high-speed networks," IEEE/ACM Transactions on Networking, vol. 15, no. 5, Oct. 2007.
[22] M. Weigle, P. Sharma, and J. Freeman, "Performance of competing high-speed TCP flows," in Proceedings of NETWORKING 2006, ser. Lecture Notes in Computer Science. Coimbra, Portugal: Springer, May 2006.
[23] R. Shorten and D. Leith, "Impact of drop synchronisation on TCP fairness in high bandwidth-delay product networks," in Proceedings of PFLDnet 2006, Nara, Japan, Feb. 2006.
[24] T. Kelly, "Scalable TCP: Improving performance in highspeed wide area networks," in Proceedings of PFLDnet 2003, Geneva, Switzerland, Feb. 2003.
[25] S. Ha, L. Le, I. Rhee, and L. Xu, "Impact of background traffic on performance of high-speed TCP variant protocols," Computer Networks, vol. 51, no. 7, May 2007.
[26] D. Barman, G. Smaragdakis, and I. Matta, "The effect of router buffer size on HighSpeed TCP performance," in Proceedings of IEEE GLOBECOM 2004, 2004.
[27] K. Tokuda, G. Hasegawa, and M. Murata, "Performance analysis of HighSpeed TCP and its improvements for high throughput and fairness against TCP Reno connections," in Proceedings of the IEEE High-Speed Networking Workshop (HSN 2003), San Francisco (CA), USA, Mar. 2003.
[28] V. Firoiu and M. Borden, "A study of active queue management for congestion control," in Proceedings of IEEE INFOCOM 2000, Tel Aviv, Israel, Mar. 2000.
[29] S. Floyd, R. Gummadi, and S. Shenker, "Adaptive RED: An algorithm for increasing the robustness of RED's active queue management," ICIR, Technical report, Aug. 2001.
[30] R. Jain, D. Chiu, and W. Hawe, "A quantitative measure of fairness and discrimination for resource allocation in shared computer systems," DEC Research, Technical Report TR-301, Sep. 1984.
[31] X. Wu, "A simulation study of Compound TCP," School of Computing, National University of Singapore, Technical report, 2007.
[32] L. Qiu, Y. Zhang, and S. Keshav, "Understanding the performance of many TCP flows," Computer Networks, vol. 37, no. 3-4, 2001.


More information

CS268: Beyond TCP Congestion Control

CS268: Beyond TCP Congestion Control TCP Problems CS68: Beyond TCP Congestion Control Ion Stoica February 9, 004 When TCP congestion control was originally designed in 1988: - Key applications: FTP, E-mail - Maximum link bandwidth: 10Mb/s

More information

On queue provisioning, network efficiency and the Transmission Control Protocol

On queue provisioning, network efficiency and the Transmission Control Protocol 1 On queue provisioning, network efficiency and the Transmission Control Protocol R. N. Shorten, D.J.Leith, Hamilton Institute, NUI Maynooth, Ireland. {robert.shorten, doug.leith}@nuim.ie Abstract In this

More information

Implementation Experiments on HighSpeed and Parallel TCP

Implementation Experiments on HighSpeed and Parallel TCP Implementation Experiments on HighSpeed and TCP Zongsheng Zhang Go Hasegawa Masayuki Murata Osaka University Outline Introduction Background of and g Why to evaluate in a test-bed network A refined algorithm

More information

On the Effect of Router Buffer Sizes on Low-Rate Denial of Service Attacks

On the Effect of Router Buffer Sizes on Low-Rate Denial of Service Attacks On the Effect of Router Buffer Sizes on Low-Rate Denial of Service Attacks Sandeep Sarat Andreas Terzis sarat@cs.jhu.edu terzis@cs.jhu.edu Johns Hopkins University Abstract Router queues buffer packets

More information

The Variation in RTT of Smooth TCP

The Variation in RTT of Smooth TCP The Variation in RTT of Smooth TCP Elvis Vieira and Michael Bauer University of Western Ontario {elvis,bauer}@csd.uwo.ca Abstract Due to the way of Standard TCP is defined, it inherently provokes variation

More information

Congestion Control for High Bandwidth-delay Product Networks. Dina Katabi, Mark Handley, Charlie Rohrs

Congestion Control for High Bandwidth-delay Product Networks. Dina Katabi, Mark Handley, Charlie Rohrs Congestion Control for High Bandwidth-delay Product Networks Dina Katabi, Mark Handley, Charlie Rohrs Outline Introduction What s wrong with TCP? Idea of Efficiency vs. Fairness XCP, what is it? Is it

More information

A Critique of Recently Proposed Buffer-Sizing Strategies

A Critique of Recently Proposed Buffer-Sizing Strategies A Critique of Recently Proposed Buffer-Sizing Strategies G.Vu-Brugier, R. S. Stanojevic, D.J.Leith, R.N.Shorten Hamilton Institute, NUI Maynooth ABSTRACT Internet router buffers are used to accommodate

More information

Analyzing the Receiver Window Modification Scheme of TCP Queues

Analyzing the Receiver Window Modification Scheme of TCP Queues Analyzing the Receiver Window Modification Scheme of TCP Queues Visvasuresh Victor Govindaswamy University of Texas at Arlington Texas, USA victor@uta.edu Gergely Záruba University of Texas at Arlington

More information

image 3.8 KB Figure 1.6: Example Web Page

image 3.8 KB Figure 1.6: Example Web Page image. KB image 1 KB Figure 1.: Example Web Page and is buffered at a router, it must wait for all previously queued packets to be transmitted first. The longer the queue (i.e., the more packets in the

More information

A Framework For Managing Emergent Transmissions In IP Networks

A Framework For Managing Emergent Transmissions In IP Networks A Framework For Managing Emergent Transmissions In IP Networks Yen-Hung Hu Department of Computer Science Hampton University Hampton, Virginia 23668 Email: yenhung.hu@hamptonu.edu Robert Willis Department

More information

P-XCP: A transport layer protocol for satellite IP networks

P-XCP: A transport layer protocol for satellite IP networks Title P-XCP: A transport layer protocol for satellite IP networks Author(s) Zhou, K; Yeung, KL; Li, VOK Citation Globecom - Ieee Global Telecommunications Conference, 2004, v. 5, p. 2707-2711 Issued Date

More information

Investigating the Use of Synchronized Clocks in TCP Congestion Control

Investigating the Use of Synchronized Clocks in TCP Congestion Control Investigating the Use of Synchronized Clocks in TCP Congestion Control Michele Weigle (UNC-CH) November 16-17, 2001 Univ. of Maryland Symposium The Problem TCP Reno congestion control reacts only to packet

More information

15-744: Computer Networking TCP

15-744: Computer Networking TCP 15-744: Computer Networking TCP Congestion Control Congestion Control Assigned Reading [Jacobson and Karels] Congestion Avoidance and Control [TFRC] Equation-Based Congestion Control for Unicast Applications

More information

Congestion Control for Small Buffer High Speed Networks

Congestion Control for Small Buffer High Speed Networks Congestion Control for Small Buffer High Speed Networks Yu Gu, Don Towsley, Chris Hollot, Honggang Zhang 1 Abstract There is growing interest in designing high speed routers with small buffers storing

More information

Analytical Model for Congestion Control and Throughput with TCP CUBIC connections

Analytical Model for Congestion Control and Throughput with TCP CUBIC connections Analytical Model for Congestion Control and Throughput with TCP CUBIC connections Sudheer Poojary and Vinod Sharma Department of Electrical Communication Engineering Indian Institute of Science, Bangalore

More information

TCP Congestion Control : Computer Networking. Introduction to TCP. Key Things You Should Know Already. Congestion Control RED

TCP Congestion Control : Computer Networking. Introduction to TCP. Key Things You Should Know Already. Congestion Control RED TCP Congestion Control 15-744: Computer Networking L-4 TCP Congestion Control RED Assigned Reading [FJ93] Random Early Detection Gateways for Congestion Avoidance [TFRC] Equation-Based Congestion Control

More information

Open Box Protocol (OBP)

Open Box Protocol (OBP) Open Box Protocol (OBP) Paulo Loureiro 1, Saverio Mascolo 2, Edmundo Monteiro 3 1 Polytechnic Institute of Leiria, Leiria, Portugal, loureiro.pjg@gmail.pt 2 Politecnico di Bari, Bari, Italy, saverio.mascolo@gmail.com

More information

Denial of Service Attacks in Networks with Tiny Buffers

Denial of Service Attacks in Networks with Tiny Buffers Denial of Service Attacks in Networks with Tiny Buffers Veria Havary-Nassab, Agop Koulakezian, Department of Electrical and Computer Engineering University of Toronto {veria, agop}@comm.toronto.edu Yashar

More information

Router buffer re-sizing for short-lived TCP flows

Router buffer re-sizing for short-lived TCP flows Router buffer re-sizing for short-lived TCP flows Takeshi Tomioka Graduate School of Law Osaka University 1- Machikaneyama, Toyonaka, Osaka, 5-3, Japan Email: q5h@lawschool.osaka-u.ac.jp Go Hasegawa, Masayuki

More information

Transmission Control Protocol. ITS 413 Internet Technologies and Applications

Transmission Control Protocol. ITS 413 Internet Technologies and Applications Transmission Control Protocol ITS 413 Internet Technologies and Applications Contents Overview of TCP (Review) TCP and Congestion Control The Causes of Congestion Approaches to Congestion Control TCP Congestion

More information

Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes

Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes Zhili Zhao Dept. of Elec. Engg., 214 Zachry College Station, TX 77843-3128 A. L. Narasimha Reddy

More information

Computer Networking

Computer Networking 15-441 Computer Networking Lecture 17 TCP Performance & Future Eric Anderson Fall 2013 www.cs.cmu.edu/~prs/15-441-f13 Outline TCP modeling TCP details 2 TCP Performance Can TCP saturate a link? Congestion

More information

Reliable Transport II: TCP and Congestion Control

Reliable Transport II: TCP and Congestion Control Reliable Transport II: TCP and Congestion Control Stefano Vissicchio UCL Computer Science COMP0023 Recap: Last Lecture Transport Concepts Layering context Transport goals Transport mechanisms and design

More information

A Modification to RED AQM for CIOQ Switches

A Modification to RED AQM for CIOQ Switches A Modification to RED AQM for CIOQ Switches Jay Kumar Sundararajan, Fang Zhao, Pamela Youssef-Massaad, Muriel Médard {jaykumar, zhaof, pmassaad, medard}@mit.edu Laboratory for Information and Decision

More information

Increase-Decrease Congestion Control for Real-time Streaming: Scalability

Increase-Decrease Congestion Control for Real-time Streaming: Scalability Increase-Decrease Congestion Control for Real-time Streaming: Scalability Dmitri Loguinov City University of New York Hayder Radha Michigan State University 1 Motivation Current Internet video streaming

More information

Understanding Synchronization in TCP Cubic

Understanding Synchronization in TCP Cubic Understanding Synchronization in TCP Cubic Sonia Belhareth, Dino Lopez-Pacheco, Lucile Sassatelli, Denis Collange, Guillaume Urvoy-Keller Orange Labs, Sophia Antipolis, France Univ. Nice Sophia Antipolis,

More information

Enhancing TCP Throughput over Lossy Links Using ECN-Capable Capable RED Gateways

Enhancing TCP Throughput over Lossy Links Using ECN-Capable Capable RED Gateways Enhancing TCP Throughput over Lossy Links Using ECN-Capable Capable RED Gateways Haowei Bai Honeywell Aerospace Mohammed Atiquzzaman School of Computer Science University of Oklahoma 1 Outline Introduction

More information

Lecture 14: Congestion Control"

Lecture 14: Congestion Control Lecture 14: Congestion Control" CSE 222A: Computer Communication Networks Alex C. Snoeren Thanks: Amin Vahdat, Dina Katabi Lecture 14 Overview" TCP congestion control review XCP Overview 2 Congestion Control

More information

Congestion Control Without a Startup Phase

Congestion Control Without a Startup Phase Congestion Control Without a Startup Phase Dan Liu 1, Mark Allman 2, Shudong Jin 1, Limin Wang 3 1. Case Western Reserve University, 2. International Computer Science Institute, 3. Bell Labs PFLDnet 2007

More information

100 Mbps. 100 Mbps S1 G1 G2. 5 ms 40 ms. 5 ms

100 Mbps. 100 Mbps S1 G1 G2. 5 ms 40 ms. 5 ms The Influence of the Large Bandwidth-Delay Product on TCP Reno, NewReno, and SACK Haewon Lee Λ, Soo-hyeoung Lee, and Yanghee Choi School of Computer Science and Engineering Seoul National University San

More information

Performance Comparison of TFRC and TCP

Performance Comparison of TFRC and TCP ENSC 833-3: NETWORK PROTOCOLS AND PERFORMANCE CMPT 885-3: SPECIAL TOPICS: HIGH-PERFORMANCE NETWORKS FINAL PROJECT Performance Comparison of TFRC and TCP Spring 2002 Yi Zheng and Jian Wen {zyi,jwena}@cs.sfu.ca

More information

On Standardized Network Topologies For Network Research Λ

On Standardized Network Topologies For Network Research Λ On Standardized Network Topologies For Network Research Λ George F. Riley Department of Electrical and Computer Engineering Georgia Institute of Technology Atlanta, GA 3332-25 riley@ece.gatech.edu (44)894-4767,

More information

CUBIC: A New TCP-Friendly High-Speed TCP Variant

CUBIC: A New TCP-Friendly High-Speed TCP Variant CUBIC: A New TCP-Friendly High-Speed TCP Variant Sangtae Ha, Injong Rhee and Lisong Xu Presented by Shams Feyzabadi Introduction As Internet evolves the number of Long distance and High speed networks

More information

Edge versus Host Pacing of TCP Traffic in Small Buffer Networks

Edge versus Host Pacing of TCP Traffic in Small Buffer Networks Edge versus Host Pacing of TCP Traffic in Small Buffer Networks Hassan Habibi Gharakheili 1, Arun Vishwanath 2, Vijay Sivaraman 1 1 University of New South Wales (UNSW), Australia 2 University of Melbourne,

More information

RECHOKe: A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks

RECHOKe: A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks > REPLACE THIS LINE WITH YOUR PAPER IDENTIFICATION NUMBER (DOUBLE-CLICK HERE TO EDIT) < : A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks Visvasuresh Victor Govindaswamy,

More information

Congestion Avoidance

Congestion Avoidance COMP 631: NETWORKED & DISTRIBUTED SYSTEMS Congestion Avoidance Jasleen Kaur Fall 2016 1 Avoiding Congestion: Strategies TCP s strategy: congestion control Ø Control congestion once it occurs Repeatedly

More information

Compound TCP: A Scalable and TCP-Friendly Congestion Control for High-speed Networks

Compound TCP: A Scalable and TCP-Friendly Congestion Control for High-speed Networks Compound TCP: A Scalable and TCP-Friendly Congestion Control for High-speed Networks Kun Tan Jingmin Song Microsoft Research Asia Beijing, China {kuntan, jmsong}@microsoft.com Qian Zhang DCS, Hong Kong

More information

Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways

Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways Haowei Bai AES Technology Centers of Excellence Honeywell Aerospace 3660 Technology Drive, Minneapolis, MN 5548 E-mail: haowei.bai@honeywell.com

More information

The War Between Mice and Elephants

The War Between Mice and Elephants The War Between Mice and Elephants Liang Guo and Ibrahim Matta Computer Science Department Boston University 9th IEEE International Conference on Network Protocols (ICNP),, Riverside, CA, November 2001.

More information

Performance Analysis of TCP Variants

Performance Analysis of TCP Variants 102 Performance Analysis of TCP Variants Abhishek Sawarkar Northeastern University, MA 02115 Himanshu Saraswat PES MCOE,Pune-411005 Abstract The widely used TCP protocol was developed to provide reliable

More information

Design of Network Dependent Congestion Avoidance TCP (NDCA-TCP) for Performance Improvement in Broadband Networks

Design of Network Dependent Congestion Avoidance TCP (NDCA-TCP) for Performance Improvement in Broadband Networks International Journal of Principles and Applications of Information Science and Technology February 2010, Vol.3, No.1 Design of Network Dependent Congestion Avoidance TCP (NDCA-TCP) for Performance Improvement

More information