Experimental evaluation of Cubic-TCP

D.J. Leith, R.N. Shorten, G. McCullagh
Hamilton Institute, Ireland

Abstract: In this paper we present an initial experimental evaluation of the recently proposed Cubic-TCP algorithm. Results are presented using a suite of benchmark tests that have recently been proposed in the literature [12], and a number of issues of practical concern are highlighted.

I. INTRODUCTION

In this paper we present the results of experimental tests on the recently proposed high-speed TCP variant, Cubic-TCP [11]. Consideration of the Cubic algorithm is particularly topical in view of recent discussions regarding its adoption in Linux [8]. In summary, we find that:

1. Networks in which Cubic TCP is deployed suffer from slow convergence. The dependence of the cubic increase function on flow cwnd means that flows with higher congestion windows are initially more aggressive than flows with lower congestion windows. The resulting slow convergence behaviour yields poor network responsiveness and prolonged unfairness between flows.

2. In common with other high-speed protocols, Cubic TCP uses an aggressive additive increase action to maintain short congestion epochs on high bandwidth-delay product paths. We find that in unsynchronised environments the associated cost of missing a drop (whereby flows that are not informed of congestion rapidly increase their cwnd) is similar for both Cubic TCP and H-TCP.

3. At bandwidth-delay products above about 500 packets, Cubic TCP reverts to a linear increase function. This implies that in high-speed networks the congestion epoch duration eventually scales linearly with BDP (and so similarly to standard TCP).

4. At higher speeds, for buffer sizes below 30% BDP the link utilisation achieved by Cubic TCP collapses to around 50% of link capacity and is significantly lower than the link utilisation achieved by standard TCP. Although it requires further investigation, this behaviour appears to be associated with the generation of large packet bursts by the Cubic TCP algorithm.

5. For flows with different RTTs, Cubic exhibits unfairness that is strongly dependent on the start time of the flows. It is unclear at present why this non-convergence behaviour occurs; it may be due to a fundamental stability issue or perhaps associated with implementation issues.

II. EXPERIMENTAL SETUP

A. Hardware and Software

All tests were conducted on an experimental testbed. Commodity high-end PCs were connected to gigabit switches to form the branches of a dumbbell topology, see Figure 1.

Fig. 1. Experimental set-up: TCP1 and TCP2 senders connect through a GigE switch to a dummynet router, and on through a second GigE switch to the TCP1 and TCP2 receivers.

TABLE I. HARDWARE AND SOFTWARE CONFIGURATION
  CPU: Intel Xeon 3.0GHz, 1066 FSB
  Memory: 1 Gbyte
  Motherboard: Dell PowerEdge PE860
  Kernel: Linux, with Cubic bug fix [9]
  txqueuelen: 1000
  max backlog: 2500
  NIC: Intel PRO/1000 PT PCIe x4
  NIC driver: e1000 (k4)
  TX & RX descriptors: 4096

Although not all are shown in Figure 1, a total of 14 end hosts are available for traffic generation. All sender and receiver machines used in the tests have identical hardware and software configurations, as shown in Table I, and are connected to the switches at 1Gb/sec. The router, running the FreeBSD dummynet software, can be configured with various bottleneck queue sizes, capacities and round-trip propagation delays to emulate a wide range of network conditions. Apart from the router, all machines run an instrumented version of the Linux kernel.
We note that the implementation of Cubic-TCP included in the Linux kernel distribution (and earlier releases) is known [9] to be incorrect; this has subsequently been corrected, and in our tests we use a corrected implementation. To provide consistency, and to control for the influence of differences in implementation as opposed to differences in the congestion control algorithm itself, we use a common kernel for all tests. It is known that, at high bandwidth-delay products, SACK processing and related work in the Linux network stack can impose a sufficiently high burden on end hosts that it leads to significant performance degradation [1], [12]. We performed tests to confirm, on our hardware, appropriate network stack operation over the range of network conditions tested. The kernel is instrumented using custom RelayFS monitoring to allow measurement of TCP variables.

In order to minimise the effects of local host queues and flow interactions, unless otherwise stated we ran only one long-lived flow per PC, with flows injected into the testbed using iperf. Web traffic sessions are generated by dedicated client and server PCs, with exponentially distributed intervals between requests and Pareto distributed page sizes. This is implemented using a client-side script and a custom CGI script running on an Apache server. Following [10], unless otherwise stated, we used a mean time between requests of 10 seconds and a Pareto shape parameter of 1.2 with mean 60 packets. Each individual test was run for at least ten minutes. Tests collecting statistics on unsynchronised operation were run for at least one hour in order to ensure reliable statistics. In the case of tests on large bandwidth-delay product paths, we ran individual tests for up to an hour, as the congestion epoch duration becomes very long on such paths.
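For concreteness, the statistics of this web traffic can be sketched in a few lines of Python. This is illustrative only (the testbed uses a client-side script driving a custom CGI script on Apache, not this code), and it assumes the parameter values stated above: a 10-second mean inter-request time and Pareto page sizes with shape 1.2 and mean 60 packets.

import random

MEAN_GAP_S = 10.0       # mean time between requests (exponential)
PARETO_SHAPE = 1.2      # heavy-tailed page sizes
MEAN_SIZE_PKTS = 60.0   # desired mean page size, in packets
# A Pareto(shape a, scale xm) law has mean a*xm/(a-1), so pick xm to match.
X_MIN = MEAN_SIZE_PKTS * (PARETO_SHAPE - 1) / PARETO_SHAPE

def next_request():
    """Return (inter-request gap in seconds, page size in packets)."""
    gap = random.expovariate(1.0 / MEAN_GAP_S)
    size = X_MIN * random.paretovariate(PARETO_SHAPE)
    return gap, size
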
B. Comparative Testing

Our test setup corresponds to that in [12], and hence the Cubic TCP measurements reported here can be directly compared with previous measurements reported for standard TCP, High-Speed TCP, Scalable TCP, BIC-TCP, FAST-TCP and H-TCP.

C. Range of Network Conditions

Similarly to [12], in this paper we consider round-trip propagation delays in the range 16ms-200ms and bandwidths ranging from 1Mb/s-500Mb/s. We do not consider these values to be definitive; the upper value of bandwidth considered can, in particular, be expected to be subject to upwards pressure. We do, however, argue that these values are sufficient to capture an interesting range of network conditions that characterise current communication networks, and the behaviour of protocols across this range. In all of our tests we consider delay values of 16ms, 40ms, 80ms, 160ms and 200ms, and bandwidths of 1Mb/s, 10Mb/s, 250Mb/s and 500Mb/s. In addition, we perform each test with various levels of competing bidirectional web sessions. This defines a three-dimensional grid of measurement points where, for each value of delay and level of web traffic, performance is measured for each of the values of bandwidth. Owing to space restrictions, we cannot include the results of all our tests here. We therefore present results for a subset of network conditions that are representative of the full test results obtained. A more complete collection of test results will be posted on the Hamilton Institute website.

III. CUBIC TCP ALGORITHM IN LINUX

Before proceeding, we briefly describe the Cubic TCP algorithm used in Linux. Cubic-TCP combines the basic ideas first proposed in High-Speed TCP and H-TCP. Namely, the cwnd additive increase rate is a function of the time since the last notification of congestion (as in H-TCP) and of the window size at the last notification of congestion (similarly to HS-TCP). Pseudo-code for the main functionality of the Cubic algorithm is shown in Algorithm 1.

Algorithm 1: Pseudo-code of main functionality in Linux Cubic algorithm
1:  Initialise:
2:    last_max = 0; loss_cwnd = 0; epoch_start = 0; ssthresh = 100
3:    b = 2.5; c = 0.4
4:
5:  On each ACK:
6:    delay_min = min(rtt, delay_min)
7:    if cwnd < ssthresh then
8:      cwnd++  // slow start
9:    else
10:     if epoch_start = 0 then
11:       epoch_start = current_time
12:       K = max(0, (b*(last_max - cwnd))^(1/3))
13:       origin_point = max(cwnd, last_max)
14:     end if
15:     t = current_time + delay_min - epoch_start
16:     target = origin_point + c*(t - K)^3
17:     if target > cwnd then
18:       cnt = cwnd/(target - cwnd)
19:     else
20:       cnt = 100*cwnd
21:     end if
22:     if delay_min > 0 then
23:       cnt = max(cnt, 8*cwnd/(20*delay_min))  // max AI rate
24:     end if
25:     if loss_cwnd == 0 then
26:       cnt = 50  // continue exponential increase before first backoff
27:     end if
28:     if cwnd_cnt > cnt then
29:       cwnd++
30:       cwnd_cnt = 0
31:     else
32:       cwnd_cnt++
33:     end if
34:   end if
35:
36: On packet loss:
37:   epoch_start = 0
38:   if cwnd < last_max then
39:     last_max = 0.9*cwnd
40:   else
41:     last_max = cwnd
42:   end if
43:   loss_cwnd = cwnd
44:   cwnd = 0.8*cwnd  // backoff cwnd by 0.8

The features of this algorithm can be summarised as follows (a runnable sketch of the per-ACK update follows the list).

1) Modified slow start. A modified slow-start behaviour is employed at startup. Once cwnd rises above ssthresh (which is initialised to a value of 100 packets in Cubic), Cubic exits normal slow start and changes to a less aggressive exponential increase where cwnd is increased by one packet for every 50 ACKs received or, equivalently, cwnd doubles approximately every 35 round-trip times. See lines 25-27 in Algorithm 1.

2) Backoff factor 0.8. On packet loss, cwnd is decreased by a factor of 0.8 (compared with a factor of 0.5 in the standard TCP algorithm). See line 44 in Algorithm 1.

3) Clamp on maximum increase rate. The additive increase rate during AIMD operation is limited to be at most 20*delay_min packets per RTT, where delay_min is an estimate of the round-trip propagation delay of the flow. See line 23 in Algorithm 1 (the factor of 8 there reflects the kernel's fixed-point representation of delay_min). Converting from packets per RTT to packets per second, this clamp is roughly equivalent to a cap on the increase rate of 20 packets/s independent of RTT.

4) Cubic increase function. Subject to this clamp, the additive increase rate used is (target - cwnd) packets per RTT. Note that the effect of this increase is to adjust cwnd to be equal to target over the course of a single RTT. The value of target is calculated (see line 16 in the algorithm) from:

    target = W_max + c*(t - (b*(W_max - 0.8W))^(1/3))^3    (1)

where t is the elapsed time since the last backoff (the value of delay_min is, approximately, added to this value; see line 15), W_max is related to the cwnd at the last backoff and is denoted origin_point in the code, and W is the cwnd value immediately before the last backoff, so that 0.8W is the cwnd value just after backoff has occurred.

5) Adaptation of cubic function. The value of W_max is adjusted depending on whether the last backoff occurred before or after cwnd reached the previous W_max value. Let W denote the cwnd value immediately before backoff. Then W_max is set equal to W when W is larger than the previous value of W_max; otherwise W_max is set equal to 0.9W. See lines 38-42 in Algorithm 1.

The Linux Cubic algorithm also includes code which ensures that the Cubic algorithm is at least as aggressive as standard TCP, but this plays little role in our discussion and we refer the interested reader to the Linux code for further details.
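To make the update concrete, the following Python transliteration of Algorithm 1 is a minimal sketch of ours, not the kernel implementation: cwnd is in packets, time is in seconds, and delay_min is held directly in seconds, so the kernel's fixed-point factor of 8 drops out of the rate clamp.

class CubicCwnd:
    """Sketch of Algorithm 1; cwnd in packets, times in seconds."""
    B, C = 2.5, 0.4

    def __init__(self):
        self.last_max = 0.0; self.loss_cwnd = 0.0; self.epoch_start = 0.0
        self.ssthresh = 100; self.cwnd = 2.0
        self.cwnd_cnt = 0; self.delay_min = 0.0
        self.K = 0.0; self.origin = 0.0

    def on_ack(self, rtt, now):
        self.delay_min = rtt if self.delay_min == 0 else min(self.delay_min, rtt)
        if self.cwnd < self.ssthresh:
            self.cwnd += 1                       # slow start
            return
        if self.epoch_start == 0:                # start of a new congestion epoch
            self.epoch_start = now
            diff = self.B * (self.last_max - self.cwnd)
            self.K = diff ** (1 / 3.0) if diff > 0 else 0.0   # max(0, cube root)
            self.origin = max(self.cwnd, self.last_max)
        t = now + self.delay_min - self.epoch_start
        target = self.origin + self.C * (t - self.K) ** 3
        if target > self.cwnd:
            cnt = self.cwnd / (target - self.cwnd)   # ACKs per +1 packet
        else:
            cnt = 100 * self.cwnd                    # effectively hold cwnd
        if self.delay_min > 0:
            cnt = max(cnt, self.cwnd / (20 * self.delay_min))  # cap AI at ~20 pkt/s
        if self.loss_cwnd == 0:
            cnt = 50      # pre-first-loss: +2% per RTT, doubling in ~35 RTTs
        if self.cwnd_cnt > cnt:
            self.cwnd += 1; self.cwnd_cnt = 0
        else:
            self.cwnd_cnt += 1

    def on_loss(self):
        self.epoch_start = 0
        self.last_max = 0.9 * self.cwnd if self.cwnd < self.last_max else self.cwnd
        self.loss_cwnd = self.cwnd
        self.cwnd *= 0.8                          # backoff factor 0.8

Calling on_ack(rtt, now) once per ACK and on_loss() at each congestion notification reproduces the cwnd evolution described above.
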

A. Other Cubic Variants

It is important to note that a number of variants of the Cubic TCP algorithm exist. In this paper we focus on the algorithm contained in the standard Linux distribution, as this is both the most recent variant and the variant in production use. This Linux Cubic algorithm differs from that described in the original Cubic paper [11], and from algorithms used and documented in recent tests. Amongst other things, the original Cubic algorithm proposed in [11] lacks the clamp on increase rate and the mode switch based on the parameter W_max in Algorithm 1. The experimental results reported by the Cubic authors [3] make use of a custom patch to the Linux kernel(1) which differs from Algorithm 1.

(1) See bitcp/tiny release/installbic htm

IV. FAIRNESS WITH SAME RTT

We begin by considering the simplest case of flows sharing a single bottleneck with the same round-trip time. A basic requirement is for flows to be allocated bandwidth in a fair manner in this situation. Figure 2 shows typical measured time histories with Cubic TCP following startup of a second flow. A number of features are immediately evident.

Slow convergence, prolonged unfairness. It can be seen from Figure 2 that the cwnds converge very slowly. The rate of convergence decreases as the bandwidth-delay product (BDP) is increased, with convergence times in excess of 300s commonly observed in our tests.

Long-term fairness. Provided tests are run for a sufficiently long period, our measurements indicate that the flows do asymptotically converge to an approximately fair share of the bottleneck bandwidth. See also Figure 3.

Linear increase at high BDPs. For cwnds above about 500 packets, it can be seen that the cubic increase function is replaced by a simple linear AIMD increase. This is particularly evident for flow 1 in the lower plot in Figure 2. This implies that in high-speed networks the congestion epoch duration eventually scales linearly with BDP, similarly to standard TCP.

We note that although in the examples shown in Figure 2 the flows are nearly synchronised (i.e. both flows experience drops at most congestion events), we find that the slow convergence behaviour is not confined to such situations and that the same qualitative behaviour is evident in unsynchronised environments.
For example, Figure 4 shows the corresponding results when tests are carried out with 20 bi-directional web flows sharing the bottleneck link. Figure 4 plots the throughput time histories of the two long-lived flows, averaged over 25 runs; this is an ensemble average, i.e. each time point is the average over 25 runs. We observe that while de-synchronisation of drops sometimes speeds up convergence (e.g. when the newly started flow misses drops), it also sometimes slows convergence (when the incumbent flow misses drops), and on balance this yields similarly slow convergence to synchronised situations.

A. Source of slow convergence

We can gain some insight into the slow convergence exhibited by Cubic TCP by noting that the cubic increase function depends on the flow cwnd at the last congestion event. Generally, flows with larger cwnds are more aggressive than flows with smaller cwnds, and are consequently better able to acquire bandwidth as it becomes available. New flows are thus at a disadvantage and sustained unfairness can occur. We comment that a similar feature has previously been highlighted in other high-speed protocols, which are also known to exhibit slow convergence [5], [4], [6], [7]. The key here is that the convergence behaviour is determined by interactions between competing flows. Hence, "bath of noise" type mean-field analysis, such as that used in the popular Padhye fluid model, provides little insight.

Roughly speaking, the convergence rate of the network depends on two factors: (a) the rate at which individual flows release bandwidth when informed of congestion; and (b) the rate at which individual flows acquire available bandwidth. The first depends on the backoff factors employed in the network, the second on the additive increase strategy employed by the individual sources. Together they determine the net rate at which bandwidth can be redistributed in the network. In standard TCP, flows with the same RTT probe for available bandwidth at the same rate. Strategies such as H-TCP seek to mimic this behaviour by making all sources probe in the same manner (at least in the synchronised case). Other strategies, such as High-Speed TCP, BIC-TCP and Cubic-TCP, result in flows with larger window sizes probing more aggressively than those with smaller windows. This basic asymmetry between flow increase rates has the effect of slowing convergence, as the following simplified simulation illustrates.
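The sketch below is a simplified fluid-style simulation of ours, not testbed code: it iterates the cubic increase of equation (1) for two synchronised flows sharing a fixed pipe, with both flows backing off by 0.8 at each congestion event (the W_max adaptation and AI clamp are omitted for brevity). The flow with the larger window at backoff traces the more aggressive cubic curve, so the window ratio closes only slowly over many congestion epochs.

def epoch(last_max, pipe, rtt=0.2, b=2.5, c=0.4):
    """One synchronised congestion epoch. last_max holds each flow's cwnd at
    the previous loss; returns each flow's cwnd at the next loss event."""
    K = [(b * 0.2 * m) ** (1 / 3.0) for m in last_max]  # 0.2*m was shed at backoff
    t, w = 0.0, [0.8 * m for m in last_max]             # windows just after backoff
    while sum(w) < pipe:                                # grow until the pipe refills
        t += rtt
        w = [m + c * (t - k) ** 3 for m, k in zip(last_max, K)]
    return w

peaks = [2000.0, 200.0]   # incumbent vs newly-started flow (packets at last loss)
for k in range(1, 31):
    peaks = epoch(peaks, pipe=2500.0)
    if k % 5 == 0:
        print(f"epoch {k:2d}: window ratio = {min(peaks) / max(peaks):.2f}")
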

Fig. 2. Cubic TCP cwnd time histories following startup of a second flow. Bandwidth is 10Mbit/sec (top), 250Mbit/sec (middle) and 500Mbit/sec (bottom). RTT is 200ms, queue size 100% BDP, no web traffic.

Fig. 3. Ratio of throughputs of two Cubic TCP flows with the same RTT (also sharing the same bottleneck link and operating the same congestion control algorithm) as path propagation delay is varied. Flow throughputs are averaged over the last 200s of each test run and so approximate asymptotic behaviour, neglecting initial transients. Results are shown for 10Mbit/sec and 250Mbit/sec bottleneck bandwidths. The bottleneck queue size is 100% BDP, no web traffic.

Fig. 4. Impact of web traffic on convergence. Evolution of mean bandwidth (Mbps), averaged over 25 test runs, following startup of a second flow. 20 background web flows (10 in each direction). Link bandwidth is 250Mbit/sec, RTT is 200ms, queue size 100% BDP.

This effect is reinforced by changes to the AIMD backoff factor. In standard TCP, flows back off cwnd by 0.5 on detecting packet loss. Strategies such as BIC-TCP and Cubic-TCP instead use a backoff factor of 0.8. As a result, flows release bandwidth more slowly when informed of congestion, again having the effect of slowing convergence.
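The effect of the backoff factor alone can be quantified with a classic synchronised-AIMD argument: if two same-RTT flows increase at equal rates and both back off by a factor beta at each congestion event, the difference in their windows contracts by exactly beta per epoch, so the number of epochs to reach a given fairness level scales as log(eps)/log(beta). A quick calculation of ours, under these idealised assumptions:

import math

def epochs_to_fairness(beta, eps=0.05):
    """Epochs until the initial cwnd difference falls to a fraction eps,
    for synchronised AIMD flows with equal additive increase and backoff beta."""
    return math.ceil(math.log(eps) / math.log(beta))

print(epochs_to_fairness(0.5))   # standard TCP backoff: 5 epochs
print(epochs_to_fairness(0.8))   # Cubic/BIC backoff: 14 epochs, roughly 3x slower
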

B. Slow convergence implies prolonged unfairness

One consequence of slow convergence is that periods of extreme unfairness between flows may persist for long periods, even in situations where flows do eventually converge to fairness. Such situations are masked when fairness results are presented purely in terms of long-term averages. However, this behaviour is immediately evident in, for example, the time histories shown in Figure 2, and it seems clear that it has important practical implications. For example, two identical file transfers may have very different completion times depending on the order in which they are started. Also, long-lived flows can gain a substantial throughput advantage at the expense of shorter-lived flows. The latter seems particularly problematic as the majority of TCP flows are short to medium sized, and so a single long-lived flow may potentially penalise a large number of users (akin to a form of denial of service).

With regard to the last point, the impact of a long-lived flow on a short-lived flow is illustrated, for example, in Figure 5. Here, we measure the completion time for a download versus the size of the download. Measurements are shown (i) for the baseline case where no other flow shares the bottleneck link and (ii) for the case where a single long-lived flow shares the link and competes for bandwidth.

Fig. 5. Completion time vs connection size for (i) baseline case with no competing flows and (ii) with a competing long-lived flow. Measurements are shown for standard TCP, Cubic-TCP and H-TCP. Bandwidth is 250Mbit/sec, RTT is 5ms, queue size 100% BDP.

It can be seen that in the baseline situation, Cubic-TCP, standard TCP and H-TCP all exhibit similar completion times. It is perhaps initially surprising that standard TCP performs so well in this test, in view of concerns about its performance on high-speed paths. However, we note that the link in this example is provisioned with a BDP of buffering. A standard TCP flow slow-starts to fill the pipe and, when it then backs off cwnd, the buffer just empties and the link remains fully utilised, hence achieving low completion times.

More interesting is the performance when a long-lived flow is present. It can be seen that the completion time of the short-lived flow approximately doubles with H-TCP, as might be expected with two flows now sharing the link. In contrast, with Cubic-TCP the completion time for the short-lived flow increases by more than a factor of four. This arises because of the sluggishness with which the long-lived flow releases bandwidth to the short-lived flow, so that the short-lived flow is starved of bandwidth for a prolonged period. It is interesting to also contrast this behaviour with that of standard TCP. It can be seen that the completion time of the short-lived flow is essentially the same in both the baseline and competing long-lived flow cases. Further inspection reveals that the throughput of the long-lived standard TCP flow is much lower (by a factor of 5) compared with that of the long-lived flow when using H-TCP. The long-lived standard TCP flow releases bandwidth but is very slow to regain throughput following losses induced by startup of a short-lived flow.

The impact of slow convergence is not confined to situations where new flows start up but is of importance more generally. Figure 6 plots example cwnd time histories measured in conditions where packet drops are unsynchronised. While the long-term (measured over a period of an hour) throughputs of the flows are similar, it can be seen that sustained periods of unfairness, extending to hundreds of seconds, occur. What is happening here is that when a flow misses a drop, it is able to grab a larger share of the available bandwidth and, owing to the slow convergence behaviour of the congestion control algorithm, the resulting unfairness is able to persist for long periods.

Fig. 6. Two examples of Cubic TCP cwnd time histories. Three long-lived flows sharing a bottleneck link with 25 on-off sessions (mean time between requests of 10 seconds, connection sizes Pareto distributed with shape parameter 1.2 and mean 60 packets). Bandwidth is 250Mbit/sec, RTT is 200ms, queue size 100% BDP.

C. Impact of missing a drop

A common feature of loss-based high-speed protocols is their aggressive additive increase phase. This is perfectly natural, as it is the primary mechanism for ensuring that the time between congestion events remains small on high BDP paths. However, a consequence of this action is that flows are able to rapidly grab additional bandwidth as a result of missing a drop at a network congestion event. This issue has been previously discussed by a number of authors, e.g. see [2] and references therein.

We begin here by first noting that under synchronised conditions, where every flow sees packet loss at every congestion event, the issue of missing a drop does not arise. The increase function used by the original Cubic TCP algorithm [11] is tailored to synchronised conditions. Namely, the inflection point (centre point of the flat section) of the cubic increase curve is placed at the cwnd value at which the last backoff occurred.
Under synchronised conditions this leads to a concave cwnd evolution. The Cubic TCP algorithm as implemented in Linux is modified to adapt the inflection point based on whether the cwnd at backoff is increasing or decreasing compared to its value at the last backoff, see lines 38-42 in Algorithm 1. This adaptation improves the convergence rate of Cubic TCP over the original algorithm. Under synchronised conditions this adaptive action leads to a period-two cycle, illustrated in Figure 7 and also evident in many of the cwnd time histories shown elsewhere in the paper. Observe that in every second period the cubic increase is aggressively probing for bandwidth at the point of backoff.

Consider now the situation where flows are not synchronised. When a Cubic TCP flow misses a drop, the cubic increase function continues past the inflection point and probes for additional bandwidth at a much faster rate than flows that recently experienced a drop. This probing action is cubic in shape, i.e. cwnd increases as a cubic function of the time elapsed since the inflection point.
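The speed of this growth is easy to see numerically. The following short calculation of ours evaluates equation (1) beyond t = K, using the Linux parameters c = 0.4 and b = 2.5 and an assumed (illustrative) cwnd at last backoff of 1000 packets:

C, B = 0.4, 2.5
W_MAX = 1000.0                          # assumed cwnd at last backoff (packets)
K = (B * 0.2 * W_MAX) ** (1 / 3.0)      # time at which the curve returns to W_MAX

def target(t):
    """Cubic increase function of equation (1); t in seconds since backoff."""
    return W_MAX + C * (t - K) ** 3

for dt in (0, 5, 10, 15):               # seconds past the inflection point
    print(f"{dt:2d}s past inflection: target = {target(K + dt):6.0f} packets")
# Prints 1000, 1050, 1400, 2350 packets: a flow that misses a drop more than
# doubles its window within ~15s, and the growth keeps accelerating.
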

Fig. 7. Example of Cubic TCP cwnd evolution under synchronised conditions.

An example of the resulting cubic increase is shown in Figure 8. The curves marked Cubic W_max = 0.8W and Cubic W_max = 0.9W in Figure 8 correspond, respectively, to the two possible increase functions that can occur due to the adaptation of the inflection point noted previously. Note that, due to the period-two nature of the cubic cwnd evolution, missing a drop on the second period leads immediately to an aggressive cubic increase, as the inflection point occurs earlier. In both cases the potential for large excursions is evident, and is demonstrated experimentally in Figure 10.

Fig. 8. Cubic TCP increase function for the situation where the cwnd at last backoff is 1000 packets. The y-axis is normalised so that the origin lies at the cwnd immediately after backoff; the dashed line then marks the normalised cwnd at last backoff. It can be seen that the inflection point of the Cubic W_max = 0.8W curve is located at this value.

For comparison, also plotted in Figure 8 is the cwnd increase function used by H-TCP. This is also a cubic increase function, although lacking an inflection point. It can be seen that on the right-hand side of the plot, corresponding to the behaviour after missing a drop, the H-TCP cubic function is similar to that of Cubic TCP(2). To confirm the similar natures of the Cubic-TCP and H-TCP increase functions in unsynchronised environments, Figure 9 plots the measured cwnd distributions for both algorithms. To control for the differences in backoff factor used by Cubic and the standard H-TCP algorithm, measurements are taken using a backoff factor of 0.8 for H-TCP (but without any other change to the algorithm).

Fig. 9. Measured distribution of cwnd. Measurements are shown for both Cubic-TCP and H-TCP using a backoff factor of 0.8. Experimental setup as in Figure 6: three Cubic flows and 25 background sessions. Bandwidth is 250Mbit/sec, RTT 200ms, queue size 100% BDP.

It can be seen that the cwnd distributions are extremely similar, as might be expected from the foregoing discussion. Note that this is in line with previous simulation results reported in [2]. It appears that reported differences between the coefficient of variation of the cwnd distributions of Cubic-TCP and H-TCP may therefore be primarily(3) attributed to the differences in backoff factors used by the algorithms, rather than to the increase functions.

On a broader note, a reasonable question, not yet answered, is whether it in fact matters that high-speed TCP flows may exhibit large variations in cwnd under unsynchronised conditions. Firstly, we note that network paths contain extensive buffering, and so variations in cwnd need not translate into variations in throughput. Most applications also use receive buffers to further smooth out variations in arriving traffic.

(2) The H-TCP cubic function is slightly less aggressive than that of Cubic TCP. Since cwnd increases slowly on the flat section of the Cubic TCP increase curve, this must be compensated for by a more aggressive increase rate in order to maintain a low congestion epoch duration.

(3) But possibly not solely. For example, different algorithms can yield different patterns of packet drops under the same network conditions, and thereby affect the degree of synchronisation/unsynchronisation; this sort of issue is well known in the context of pacing in standard TCP.
Secondly, TCP is designed for best-effort traffic. Traffic that requires a nearly constant arrival rate, and which cannot use buffering to ensure this, should probably use a congestion control algorithm specifically targeted at this requirement, such as TFRC. With this in mind, it seems reasonable to argue that the most important features to consider when designing new TCP congestion control algorithms are (i) the ability to send packets quickly (short file transfer times), (ii) in a fair manner (avoiding prolonged unfairness), and (iii) without causing congestion collapse (maintaining the integrity of the Internet). In this context, issues such as convergence rate, fairness, the ability to fill the network pipe quickly, and how these properties scale with increasing bandwidth, are surely paramount, and fluctuations in flow cwnds are a minor secondary issue.

Fig. 10. Example of Cubic TCP cwnd time histories with three long-lived flows sharing a bottleneck link with 50 on-off sessions (mean time between requests of 10 seconds, connection sizes Pareto distributed with shape parameter 1.2 and mean 60 packets). Bandwidth is 250Mbit/sec, RTT is 200ms, queue size 100% BDP.

V. EFFICIENCY

To evaluate the link utilisation of Cubic TCP, we consider two flows having the same propagation delay and measure average throughput as the buffer provisioning is varied from 2.5% to 100% of the bandwidth-delay product, see Figure 11. Results are shown for a 10Mb/s and a 250Mb/s link. As a reference, the efficiency for standard TCP is also plotted in Figure 11.

Fig. 11. Aggregate throughput of two competing Cubic TCP flows versus queue size (as a fraction of BDP). Results are shown for 10Mbit/sec and 250Mbit/sec bottleneck bandwidths. Both flows have end-to-end round-trip propagation delays of 100ms.

In the case of a 10Mb/s link, it can be seen that for buffers sized above about 5% BDP, Cubic TCP achieves greater link utilisation than standard TCP. This is to be expected owing to the larger AIMD backoff factor of 0.8 used by Cubic, as opposed to the backoff factor of 0.5 used by standard TCP (so that Cubic decreases cwnd by less than standard TCP on detecting packet loss). At buffer sizes below 2.5%, the link utilisation achieved by both standard TCP and Cubic TCP falls substantially, presumably due to micro-scale packet bursts flooding the queue once it reaches such a small size.

Somewhat surprisingly, we observe quite different behaviour at 250Mb/s. It can be seen from the lower plot in Figure 11 that for buffer sizes below 30% BDP the link utilisation achieved by Cubic TCP falls to around 50% of link capacity and is significantly lower than the link utilisation achieved by standard TCP. Although it requires further investigation, this behaviour appears to be associated with the generation of large packet bursts by the Cubic TCP algorithm, and might warrant a modified implementation to mitigate this effect.
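The interaction between backoff factor and buffer size can be made precise for an idealised single flow: after backoff the window is beta*(BDP + B), where B is the buffer, so the link stays busy only if beta*(BDP + B) >= BDP, i.e. B >= (1/beta - 1)*BDP. For beta = 0.5 this requires a full BDP of buffering, while for Cubic's beta = 0.8 only 25% BDP suffices, consistent with Cubic's advantage at moderate buffer sizes on the 10Mb/s link (the 250Mb/s collapse is not captured by this model, consistent with it being a burstiness effect). A sketch of the calculation, under these idealised assumptions:

def min_buffer_fraction(beta):
    """Smallest buffer (as a fraction of BDP) that keeps an idealised
    single AIMD flow from draining the link after a backoff."""
    return 1.0 / beta - 1.0

for name, beta in (("standard TCP", 0.5), ("Cubic", 0.8)):
    print(f"{name}: backoff {beta} -> buffer >= {min_buffer_fraction(beta):.0%} of BDP")
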
VI. FAIRNESS WITH DIFFERENT RTTS

Figure 12 shows the ratio of measured throughputs when the propagation delay of the first flow is held constant at 162ms and the propagation delay of the second flow is varied. Results are shown for bottleneck link bandwidths of 10Mb/s and 250Mb/s. Results are shown with the queue sized at 100% BDP since, as discussed above, there appear to be additional issues when smaller queues are employed. Also marked on Figure 12, for reference, are the corresponding measurements obtained using standard TCP.

Fig. 12. Ratio of throughputs of two competing Cubic TCP flows as the propagation delay of the second flow is varied. Results are shown for 10Mbit/sec and 250Mbit/sec bottleneck bandwidths. Flow 1 has an RTT of 162ms; the RTT of flow 2 is marked on the x-axis of the plots. Queue size is 100% BDP, no web traffic.

It can be seen that Cubic TCP is generally significantly more fair than standard TCP, with the ratio of flow throughputs never falling below about 0.25. It is perhaps unexpected, however, that there is in fact any unfairness at all between Cubic TCP flows, as the Cubic TCP increase function does not depend on flow RTT. That is, the Cubic TCP increase function is defined as a function of time in seconds, in contrast to standard TCP where the increase is specified per RTT.

On closer inspection, we find that the degree of unfairness in Cubic TCP is strongly dependent on the start time of the flows; see for example Figure 13. It is unclear at present why this behaviour occurs. We note that at low cwnds Cubic TCP is observed to exhibit gross unfairness. This is illustrated, for example, in Figure 14, where it can be seen that one flow is essentially starved of bandwidth. This effect appears to be associated with quantisation of the Cubic TCP increase function and cwnd backoff.

VII. BACKWARD COMPATIBILITY

Figure 15 plots the ratio of measured throughputs of two flows with the same propagation delay and a shared bottleneck link. The first flow operates the standard TCP algorithm while the second flow operates the Cubic TCP variant. Results are shown for bottleneck link bandwidths of 10Mb/s and 250Mb/s.

Fig. 13. Cubic TCP cwnd time histories following startup of a second flow. Startup time of the second flow is 93s in the top plot and 98s in the lower plot. 250Mb/s link, flow 1 RTT is 16ms, flow 2 RTT is 162ms, 100% BDP queue.

Fig. 14. Two Cubic TCP flows. 1Mb/s link, flow 1 RTT 16ms, flow 2 RTT 162ms.

Fig. 15. Ratio of throughputs of competing Cubic TCP and standard TCP flows as path propagation delay is varied. Results are shown for 10Mbit/sec and 250Mbit/sec bottleneck bandwidths. Both flows have the same RTT. Queue size is 100% BDP, no web traffic. Results for two competing standard TCP flows are also shown for reference.

Fig. 16. Cubic TCP (flow 1) and standard TCP (flow 2) flows. 1Mb/s link, 100% BDP queue, flow 1 RTT 16ms, flow 2 RTT 16ms.

We note that at low cwnds, Cubic TCP can exhibit gross unfairness when competing with standard TCP flows; see, for example, Figure 16. As discussed previously, this appears to be due to quantisation issues in the Cubic TCP algorithm.

VIII. SUMMARY

In this paper we present an initial experimental evaluation of the recently proposed Cubic-TCP algorithm. Results are presented using a suite of benchmark tests that have recently been proposed in the literature [12], and a number of issues of practical concern are highlighted. While some of these issues may be considered minor (including questions surrounding transient cwnd fluctuations), a number are of significant concern. Amongst these, the issue of slow convergence, which is a feature of Cubic-TCP and of other algorithms, seems most worrying.

IX. ACKNOWLEDGEMENTS

This work was supported by Science Foundation Ireland grants 00/PI.1/C067 and 04/IN3/I460. Discussions and experimental tests carried out by Yee-Ting Lee are gratefully acknowledged.

REFERENCES

[1] D.J. Leith. Linux implementation issues in high-speed networks. Hamilton Institute Technical Report, LinuxHighSpeed.pdf, 2003.
[2] D.J. Leith and R.N. Shorten. Impact of drop synchronisation on TCP fairness in high bandwidth-delay product networks. In Proc. Workshop on Protocols for Fast Long Distance Networks, Nara, Japan, 2006.
[3] I. Rhee et al. convex-ordering/.
[4] C. King, R. Shorten, F. Wirth, and M. Akar. Growth conditions for the global stability of high-speed communication networks. Accepted for publication in IEEE Transactions on Automatic Control, 2007.
[5] R.N. Shorten, D.J. Leith, and F. Wirth. Products of random matrices and the Internet: asymptotic results. IEEE Transactions on Networking, 14(6), 2006.
[6] R.N. Shorten, C. King, D.J. Leith, and J. Foy. Modelling TCP in drop-tail and other environments. Accepted for publication in Automatica, 2007.

[7] U. Rothblum and R.N. Shorten. Analysis of high-speed congestion control protocols: a contraction mapping approach. Accepted for publication in SIAM Journal on Control and Optimization, 2007.
[8] S. Hemminger. vger.kernel.org/msg23215.html.
[9] S. Hemminger. 6/ChangeLog
[10] W. Willinger, M.S. Taqqu, R. Sherman, and D.V. Wilson. Self-similarity through high-variability: statistical analysis of Ethernet LAN traffic at the source level. IEEE/ACM Transactions on Networking, 5(1), 1997.
[11] L. Xu and I. Rhee. CUBIC: a new TCP-friendly high-speed TCP variant. In Proc. Workshop on Protocols for Fast Long Distance Networks, 2005.
[12] Y. Lee, D. Leith, and R.N. Shorten. Experimental evaluation of TCP protocols for high-speed networks. Accepted for publication in IEEE Transactions on Networking, 2007.


More information

Increase-Decrease Congestion Control for Real-time Streaming: Scalability

Increase-Decrease Congestion Control for Real-time Streaming: Scalability Increase-Decrease Congestion Control for Real-time Streaming: Scalability Dmitri Loguinov City University of New York Hayder Radha Michigan State University 1 Motivation Current Internet video streaming

More information

Network Management & Monitoring

Network Management & Monitoring Network Management & Monitoring Network Delay These materials are licensed under the Creative Commons Attribution-Noncommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/) End-to-end

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 1 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data?

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data? Congestion Control End Hosts CSE 51 Lecture 7, Spring. David Wetherall Today s question How fast should the sender transmit data? Not tooslow Not toofast Just right Should not be faster than the receiver

More information

ADVANCED COMPUTER NETWORKS

ADVANCED COMPUTER NETWORKS ADVANCED COMPUTER NETWORKS Congestion Control and Avoidance 1 Lecture-6 Instructor : Mazhar Hussain CONGESTION CONTROL When one part of the subnet (e.g. one or more routers in an area) becomes overloaded,

More information

Operating Systems and Networks. Network Lecture 10: Congestion Control. Adrian Perrig Network Security Group ETH Zürich

Operating Systems and Networks. Network Lecture 10: Congestion Control. Adrian Perrig Network Security Group ETH Zürich Operating Systems and Networks Network Lecture 10: Congestion Control Adrian Perrig Network Security Group ETH Zürich Where we are in the Course More fun in the Transport Layer! The mystery of congestion

More information

Where we are in the Course. Topic. Nature of Congestion. Nature of Congestion (3) Nature of Congestion (2) Operating Systems and Networks

Where we are in the Course. Topic. Nature of Congestion. Nature of Congestion (3) Nature of Congestion (2) Operating Systems and Networks Operating Systems and Networks Network Lecture 0: Congestion Control Adrian Perrig Network Security Group ETH Zürich Where we are in the Course More fun in the Transport Layer! The mystery of congestion

More information

Design of Network Dependent Congestion Avoidance TCP (NDCA-TCP) for Performance Improvement in Broadband Networks

Design of Network Dependent Congestion Avoidance TCP (NDCA-TCP) for Performance Improvement in Broadband Networks International Journal of Principles and Applications of Information Science and Technology February 2010, Vol.3, No.1 Design of Network Dependent Congestion Avoidance TCP (NDCA-TCP) for Performance Improvement

More information

COMPARISON OF HIGH SPEED CONGESTION CONTROL PROTOCOLS

COMPARISON OF HIGH SPEED CONGESTION CONTROL PROTOCOLS COMPARISON OF HIGH SPEED CONGESTION CONTROL PROTOCOLS Jawhar Ben Abed 1, Lâarif Sinda 2, Mohamed Ali Mani 3 and Rachid Mbarek 2 1 Polytech Sousse, 2 ISITCOM Hammam Sousse and 3 ISTLS Sousse, Tunisia ba.jawhar@gmail.com

More information

Exercises TCP/IP Networking With Solutions

Exercises TCP/IP Networking With Solutions Exercises TCP/IP Networking With Solutions Jean-Yves Le Boudec Fall 2009 3 Module 3: Congestion Control Exercise 3.2 1. Assume that a TCP sender, called S, does not implement fast retransmit, but does

More information

Title Problems of TCP in High Bandwidth-Delay Networks Syed Nusrat JJT University, Rajasthan, India Abstract:

Title Problems of TCP in High Bandwidth-Delay Networks Syed Nusrat JJT University, Rajasthan, India Abstract: Title Problems of TCP in High Bandwidth-Delay Networks Syed Nusrat JJT University, Rajasthan, India Abstract: The Transmission Control Protocol (TCP) [J88] is the most popular transport layer protocol

More information

ECE 610: Homework 4 Problems are taken from Kurose and Ross.

ECE 610: Homework 4 Problems are taken from Kurose and Ross. ECE 610: Homework 4 Problems are taken from Kurose and Ross. Problem 1: Host A and B are communicating over a TCP connection, and Host B has already received from A all bytes up through byte 248. Suppose

More information

Congestion Control Without a Startup Phase

Congestion Control Without a Startup Phase Congestion Control Without a Startup Phase Dan Liu 1, Mark Allman 2, Shudong Jin 1, Limin Wang 3 1. Case Western Reserve University, 2. International Computer Science Institute, 3. Bell Labs PFLDnet 2007

More information

Computer Networking

Computer Networking 15-441 Computer Networking Lecture 17 TCP Performance & Future Eric Anderson Fall 2013 www.cs.cmu.edu/~prs/15-441-f13 Outline TCP modeling TCP details 2 TCP Performance Can TCP saturate a link? Congestion

More information

CSE 461. TCP and network congestion

CSE 461. TCP and network congestion CSE 461 TCP and network congestion This Lecture Focus How should senders pace themselves to avoid stressing the network? Topics Application Presentation Session Transport Network congestion collapse Data

More information

Implementation Experiments on HighSpeed and Parallel TCP

Implementation Experiments on HighSpeed and Parallel TCP Implementation Experiments on HighSpeed and TCP Zongsheng Zhang Go Hasegawa Masayuki Murata Osaka University Outline Introduction Background of and g Why to evaluate in a test-bed network A refined algorithm

More information

An Enhanced Slow-Start Mechanism for TCP Vegas

An Enhanced Slow-Start Mechanism for TCP Vegas An Enhanced Slow-Start Mechanism for TCP Vegas Cheng-Yuan Ho a, Yi-Cheng Chan b, and Yaw-Chung Chen a a Department of Computer Science and Information Engineering National Chiao Tung University b Department

More information

Transmission Control Protocol. ITS 413 Internet Technologies and Applications

Transmission Control Protocol. ITS 413 Internet Technologies and Applications Transmission Control Protocol ITS 413 Internet Technologies and Applications Contents Overview of TCP (Review) TCP and Congestion Control The Causes of Congestion Approaches to Congestion Control TCP Congestion

More information

Performance Comparison of TFRC and TCP

Performance Comparison of TFRC and TCP ENSC 833-3: NETWORK PROTOCOLS AND PERFORMANCE CMPT 885-3: SPECIAL TOPICS: HIGH-PERFORMANCE NETWORKS FINAL PROJECT Performance Comparison of TFRC and TCP Spring 2002 Yi Zheng and Jian Wen {zyi,jwena}@cs.sfu.ca

More information

A Hybrid Systems Modeling Framework for Fast and Accurate Simulation of Data Communication Networks. Motivation

A Hybrid Systems Modeling Framework for Fast and Accurate Simulation of Data Communication Networks. Motivation A Hybrid Systems Modeling Framework for Fast and Accurate Simulation of Data Communication Networks Stephan Bohacek João P. Hespanha Junsoo Lee Katia Obraczka University of Delaware University of Calif.

More information

Master s Thesis. Congestion Control Mechanisms for Alleviating TCP Unfairness in Wireless LAN Environment

Master s Thesis. Congestion Control Mechanisms for Alleviating TCP Unfairness in Wireless LAN Environment Master s Thesis Title Congestion Control Mechanisms for Alleviating TCP Unfairness in Wireless LAN Environment Supervisor Professor Hirotaka Nakano Author Masafumi Hashimoto February 15th, 21 Department

More information

RED behavior with different packet sizes

RED behavior with different packet sizes RED behavior with different packet sizes Stefaan De Cnodder, Omar Elloumi *, Kenny Pauwels Traffic and Routing Technologies project Alcatel Corporate Research Center, Francis Wellesplein, 1-18 Antwerp,

More information

CS321: Computer Networks Congestion Control in TCP

CS321: Computer Networks Congestion Control in TCP CS321: Computer Networks Congestion Control in TCP Dr. Manas Khatua Assistant Professor Dept. of CSE IIT Jodhpur E-mail: manaskhatua@iitj.ac.in Causes and Cost of Congestion Scenario-1: Two Senders, a

More information

CS Networks and Distributed Systems. Lecture 10: Congestion Control

CS Networks and Distributed Systems. Lecture 10: Congestion Control CS 3700 Networks and Distributed Systems Lecture 10: Congestion Control Revised 2/9/2014 Transport Layer 2 Application Presentation Session Transport Network Data Link Physical Function:! Demultiplexing

More information

Media-Aware Rate Control

Media-Aware Rate Control Media-Aware Rate Control Zhiheng Wang zhihengw@eecs.umich.edu University of Michigan Ann Arbor, MI 4819 Sujata Banerjee sujata@hpl.hp.com Hewlett-Packard Laboratories Palo Alto, CA 9434 Sugih Jamin jamin@eecs.umich.edu

More information

TCP so far Computer Networking Outline. How Was TCP Able to Evolve

TCP so far Computer Networking Outline. How Was TCP Able to Evolve TCP so far 15-441 15-441 Computer Networking 15-641 Lecture 14: TCP Performance & Future Peter Steenkiste Fall 2016 www.cs.cmu.edu/~prs/15-441-f16 Reliable byte stream protocol Connection establishments

More information

CS Transport. Outline. Window Flow Control. Window Flow Control

CS Transport. Outline. Window Flow Control. Window Flow Control CS 54 Outline indow Flow Control (Very brief) Review of TCP TCP throughput modeling TCP variants/enhancements Transport Dr. Chan Mun Choon School of Computing, National University of Singapore Oct 6, 005

More information

image 3.8 KB Figure 1.6: Example Web Page

image 3.8 KB Figure 1.6: Example Web Page image. KB image 1 KB Figure 1.: Example Web Page and is buffered at a router, it must wait for all previously queued packets to be transmitted first. The longer the queue (i.e., the more packets in the

More information

Lecture 14: Congestion Control"

Lecture 14: Congestion Control Lecture 14: Congestion Control" CSE 222A: Computer Communication Networks Alex C. Snoeren Thanks: Amin Vahdat, Dina Katabi Lecture 14 Overview" TCP congestion control review XCP Overview 2 Congestion Control

More information