Impact of Short-lived TCP Flows on TCP Link Utilization over 10Gbps High-speed Networks

Lin Xue, Chui-hui Chiu, and Seung-Jong Park
Department of Computer Science, Center for Computation & Technology, Louisiana State University, LA, USA

Abstract: Short-lived TCP flows have been found to improve the link utilization of long-lived TCP flows. The randomized packet losses caused by short-lived TCP flows desynchronize long-lived TCP flows, and TCP link utilization is therefore improved. In this paper, we present for the first time the impact of short-lived TCP flows on TCP link utilization in a 10Gbps high-speed networking environment. We find that with Mbps short-lived TCP flows over 10Gbps high-speed networks, we get the highest link utilization for relatively unaggressive TCP variants, such as TCP-SACK and CUBIC. For relatively aggressive TCP variants, such as BIC and HSTCP, link utilization cannot be improved much in the presence of short-lived TCP flows.

I. INTRODUCTION

10Gbps high-speed networks, such as Internet2 [] and NLR (National LambdaRail) [], are widely used by application developers and networking researchers. Studies [], [] have examined the performance of TCP over 10Gbps high-speed networks. Among all the metrics of network performance, link utilization has received the most attention. Traditional AIMD (additive increase/multiplicative decrease) exhibits sawtooth behavior. In high-bandwidth, long-delay networks the sawtooth becomes larger, so maintaining high link utilization becomes a challenging issue. Several high-speed TCP variants have been developed to improve TCP link utilization, such as TCP-SACK [], HSTCP [], BIC [], and CUBIC []. A recent study on TCP congestion avoidance algorithm identification [] shows that these high-speed TCP variants now dominate computer networks. However, there is still room for improvement in terms of link utilization. As shown in [], even with many long-lived TCP flows, link utilization over 10Gbps high-speed networks is only around % to %.

Recent traffic statistics of high-speed networks [] show that the distribution of network traffic consists of elephants and mice. An elephant is a long-lived TCP flow that carries a large fraction of the traffic, such as bulk data transfers. A mouse is a short-lived TCP flow that consumes only a little bandwidth, such as web browsing and small file transfers. Interestingly, some studies [], [] show that short-lived TCP flows can improve link utilization: if many short-lived TCP flows are pushed into the network together with long-lived TCP flows, the long-lived TCP flows can utilize more network bandwidth. In general, short-lived TCP flows inflict more randomized drops on long-lived TCP flows, so the long-lived TCP flows become de-synchronized. The de-synchronization makes the aggregated congestion window of the long-lived TCP flows less sawtooth-like, which, in consequence, improves TCP link utilization.

However, some questions remain open. How and why do short-lived TCP flows impact TCP link utilization over 10Gbps high-speed networks? How many short-lived TCP flows are enough to reach the highest link utilization over 10Gbps high-speed networks? In this paper, we try to answer these questions. We focus on the impact of short-lived TCP flows on TCP link utilization over 10Gbps high-speed networks. For the first time, we present experimental results and discussion obtained on a 10Gbps high-speed network testbed, CRON [].
With different amounts of short-lived TCP flows pushed into the network, we examine popular high-speed TCP variants: TCP-SACK, CUBIC, BIC, and HSTCP. Based on our experimental results, we explain how and why short-lived TCP flows impact TCP link utilization, and we identify the amount of short-lived TCP flows that results in the highest TCP link utilization over 10Gbps high-speed networks.

We introduce related work in Section II. The impact of packet loss on TCP is presented in Section III. We then present our experimental design and results in Sections IV and V, respectively. We conclude our findings in Section VI.

II. RELATED WORK

In [], the authors evaluated TCP performance both with and without background traffic on a Mbps bottleneck link. Their background traffic consisted of short-lived and long-lived TCP flows. They found that most protocols obtain good utilization with background traffic. Then in [], the same authors focused on the impact of background traffic in the same Mbps environment and concluded that most high-speed protocols behave differently when mixed with background traffic. Although they used different types of background traffic, they did not vary the amount of short-lived TCP flows in their experiments. In this paper, we run all our experiments on a 10Gbps bottleneck link with different traffic volumes of short-lived TCP flows. Our recent study [] showed that short-lived TCP flows improve the link utilization of heterogeneous TCP flows in 10Gbps high-speed networks. However, how different amounts of short-lived TCP flows impact the link utilization of each high-speed TCP variant was still unknown.

III. PACKET LOSS IMPACT ON TCP

Because short-lived TCP flows add randomized losses to long-lived TCP flows, it is important that we first understand the impact of packet loss on long-lived TCP flows. For loss-based TCP, the Cwnd (congestion window) size at time t can be calculated as follows:

    W(t) = W_loss * β + α * t / RTT    (1)

where W_loss is the Cwnd size just before the last window reduction, which is caused by packet losses, RTT is the round-trip time, and α and β are the increase and decrease parameters, respectively. Generally, α is determined by the Cwnd growth function, and β is a value less than 1. Every TCP variant has its own α and β. From Equation (1) we can see that a TCP with larger α and β grows its Cwnd faster, which means it is more aggressive, while a TCP with smaller α and β grows its Cwnd more slowly, which means it is less aggressive. From Equation (1), we find the RT (recovery time) needed to recover the Cwnd to W_loss:

    RT = (1 - β) * W_loss * RTT / α    (2)

So for more aggressive TCP variants, which have larger α and β, RT is short; for less aggressive TCP variants, which have smaller α and β, RT is long. With this in mind, we present our experiments next.
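Before doing so, a minimal Python sketch of Equations (1) and (2) makes the effect of α and β concrete. The window size, RTT, and the (α, β) pair labeled "aggressive variant" below are hypothetical round numbers, not values from our testbed:

# Sketch of Equations (1) and (2): window growth after a loss and the
# recovery time RT. The (alpha, beta) pairs are illustrative only;
# high-speed variants adapt them to the operating regime.

def cwnd(t, w_loss, alpha, beta, rtt):
    """Equation (1): Cwnd t seconds after the last loss event."""
    return w_loss * beta + alpha * t / rtt

def recovery_time(w_loss, alpha, beta, rtt):
    """Equation (2): time to climb from beta*W_loss back to W_loss."""
    return (1 - beta) * w_loss * rtt / alpha

W_LOSS = 100_000   # packets: a large window, as on a 10Gbps long-delay path
RTT = 0.1          # seconds of round-trip time (hypothetical)

for name, alpha, beta in [("TCP-SACK", 1, 0.5),
                          ("aggressive variant", 50, 0.8)]:
    print(name, recovery_time(W_LOSS, alpha, beta, RTT), "s")
# TCP-SACK needs 5000 s to recover this window, the aggressive variant
# only 40 s: small alpha and beta mean a long RT and wasted bandwidth.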
IV. EXPERIMENTAL DESIGN

As shown in Fig. 1, we set up a dumbbell topology in CRON [], a 10Gbps high-speed networking testbed.

A. Experimental Testbed

In the dumbbell topology, long-lived TCP senders send long-lived TCP flows simultaneously to long-lived TCP receivers. A pair of short-lived TCP sender and receiver continuously pushes -way short-lived TCP flows into the network. There are two intermediate routers, and the link between them is the bottleneck link. The bottleneck queue forms at the router output queue; we set the queue size to % of the BDP (bandwidth-delay product) to exclude the impact of the router queue. A delay node running Dummynet sits in the middle to create ms of network delay. CRON consists of high-end Sun Fire X servers with 10Gbps NICs. In addition, we tuned the systems for the 10Gbps high-speed environment: kernel optimizations for Linux and FreeBSD, enlarged TCP buffers at senders and receivers, NIC driver optimizations, jumbo frames, and enlarged Dummynet queue size and bandwidth.

Fig. 1. Experimental topology: long-lived TCP senders and receivers, a short-lived TCP sender/receiver pair, two routers around the bottleneck link, and a delay node ( ms).

B. Generation of TCP Flows

We use two kinds of TCP flows, long-lived TCP flows and short-lived TCP flows. To generate long-lived TCP flows, we use zero-copy Iperf [], which generates high-speed TCP flows with less system overhead by avoiding copying data from user space to kernel space.

We run the Harpoon traffic generator [] to generate continuous -way short-lived TCP flows. Harpoon generates short-lived TCP requests from TCP clients to TCP servers. We set the Harpoon traffic parameters according to recent Internet2 traffic characteristics []. The inter-connection times from TCP client to TCP server follow an exponential distribution with a mean of second. The request file sizes follow a Pareto distribution with alpha=. and shape=.

In all our experiments, long-lived TCP flows share the bottleneck link with different amounts of -way short-lived TCP flows. We use six different scenarios of short-lived TCP flows:
1) without short-lived TCP flows
2) with Mbps short-lived TCP flows
3) with Mbps short-lived TCP flows
4) with Mbps short-lived TCP flows
5) with Mbps short-lived TCP flows
6) with Mbps short-lived TCP flows

We run each experiment five to ten times; each run takes minutes. All presented results are averaged over these runs.
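The short-lived workload can be sketched in a few lines of Python. This is only an illustration of the distributions involved, not Harpoon itself, and the mean inter-connection time, Pareto shape, and minimum file size are placeholders for values lost in transcription:

import random

# Toy generator for the short-lived flow workload: exponential
# inter-connection times and Pareto-distributed request sizes.
MEAN_INTERCONN_S = 1.0    # placeholder mean inter-connection time (s)
PARETO_SHAPE = 1.2        # placeholder Pareto shape (alpha)
MIN_FILE_BYTES = 16_000   # placeholder minimum request size

def short_lived_requests(n):
    t = 0.0
    for _ in range(n):
        t += random.expovariate(1.0 / MEAN_INTERCONN_S)
        size = MIN_FILE_BYTES * random.paretovariate(PARETO_SHAPE)
        yield t, size   # (request start time, request size in bytes)

for when, size in short_lived_requests(5):
    print(f"t={when:6.2f}s  request of {size / 1000:9.1f} kB")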

V. EXPERIMENTAL RESULTS AND DISCUSSION

In this section, we present our experimental results for each kind of TCP variant.

A. Long-lived TCP-SACK Flows

We first examine the impact of short-lived TCP flows on long-lived TCP-SACK flows. We simultaneously start long-lived TCP-SACK flows, each between a pair of long-lived TCP sender and receiver.

Fig. Link utilization of long-lived TCP-SACK flows with different amounts of short-lived TCP flows.

Figure shows TCP-SACK link utilization versus the traffic volume of short-lived TCP flows. TCP-SACK generally shows low link utilization. Without short-lived TCP flows, TCP-SACK utilizes less than half of the link capacity. This is due to the intrinsic behavior of traditional AIMD-based TCP, which is weak in high-bandwidth, long-delay networks. TCP-SACK has α = 1 and β = 0.5. Compared to the other TCP variants, TCP-SACK is unaggressive, and such unaggressive behavior wastes a lot of bandwidth, especially in 10Gbps high-speed networks.

We see that with Mbps short-lived TCP flows, link utilization improves by around %. With Mbps short-lived TCP flows, link utilization is not improved much over the case without short-lived TCP flows. With more than Mbps short-lived TCP flows, such as - Mbps, link utilization is not improved either. The error bars become longer because, with more short-lived TCP flows, the network starts to behave randomly: sometimes the TCP-SACK flows get high throughput, and sometimes they get low throughput.

To see why Mbps short-lived TCP flows yield the maximum link utilization, we plot the throughput of the long-lived TCP-SACK flows during - seconds in Figure. Figure (a) shows the case without short-lived TCP flows. We see that TCP-SACK is unfair between the two flows; one flow always outperforms the other. In Figure (b), Mbps short-lived TCP flows are added. They inflict randomized packet losses on the long-lived TCP-SACK flows. Moreover, with such a small amount of short-lived TCP flows, the faster flow has a higher chance of packet drops than the slower flow. According to Equation (2), the RT of TCP-SACK is relatively long, which gives the slower flow more time to grow. Thus, the two flows become de-synchronized and the aggregated throughput improves. Figure (c) shows the case with Mbps short-lived TCP flows. Both the fast flow and the slow flow now suffer drops caused by short-lived TCP flows at different times (e.g., during - seconds the slower flow gets drops). As a result, neither flow gains much Cwnd, and the aggregated throughput is not improved.

Fig. Instantaneous throughput of long-lived TCP-SACK flows: (a) without short-lived TCP flows; (b) with Mbps short-lived TCP flows; (c) with Mbps short-lived TCP flows.

B. Long-lived CUBIC Flows

We then start long-lived CUBIC flows. In Figure, CUBIC utilizes the network quite well. Without short-lived flows, CUBIC reaches almost % link utilization. The reason lies in CUBIC's decrease parameter β and its growth function:

    W_CUBIC = C * (t - (W_loss * β / C)^(1/3))^3 + W_loss    (3)

where W_CUBIC is the Cwnd size of CUBIC, C is a scaling factor, and t is the elapsed time since the last Cwnd reduction. According to this growth function, CUBIC grows its window in big but smooth steps because of the cubic shape. Its α and β are larger than TCP-SACK's, so it is more aggressive and has a shorter RT.

Fig. Link utilization of long-lived CUBIC flows with different amounts of short-lived TCP flows.

With Mbps short-lived flows in the network, link utilization is % higher than without short-lived TCP flows. When the amount of short-lived TCP flows exceeds Mbps, again we do not see much improvement in link utilization. The error bars are very small in all cases.

We plot the instantaneous throughput of the long-lived CUBIC flows in Figure. In Figure (a), without short-lived TCP flows, CUBIC keeps the two flows quite fair; each flow consumes almost .Gbps of bandwidth. Figure (b) shows the case with Mbps short-lived TCP flows. It differs little from Figure (a), but randomized drops still de-synchronize the flows and yield a % improvement in link utilization. The improvement is small because CUBIC has a shorter RT: whenever random losses happen, the CUBIC flows quickly recover their throughput. In Figure (c), we have Mbps short-lived TCP flows. With this amount of short-lived TCP flows, the CUBIC flows suffer more randomized packet losses. Because of CUBIC's aggressiveness, CUBIC starts to behave less fairly: the short-lived TCP flows make one flow outperform the other, and the slower flow suffers more packet losses than in Figures (a) and (b). Thus, link utilization cannot be improved.

Fig. Instantaneous throughput of long-lived CUBIC flows: (a) without short-lived TCP flows; (b) with Mbps short-lived TCP flows; (c) with Mbps short-lived TCP flows.
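The plateau that makes CUBIC's growth "big but smooth" is easy to see numerically. Below is a small sketch of Equation (3) with hypothetical C, β, and W_loss; C = 0.4 mirrors a common default, and β is used here as the decrease fraction, following the form of Equation (3):

# Sketch of Equation (3): CUBIC window growth after a loss.
C = 0.4          # scaling factor (hypothetical; 0.4 is a common default)
BETA = 0.2       # decrease fraction (hypothetical)
W_LOSS = 1000.0  # window at the last loss, in packets (hypothetical)

def w_cubic(t):
    """W_CUBIC(t) = C*(t - K)^3 + W_loss with K = (W_loss*BETA/C)^(1/3)."""
    k = (W_LOSS * BETA / C) ** (1.0 / 3.0)
    return C * (t - k) ** 3 + W_LOSS

for t in range(0, 17, 2):
    print(f"t={t:2d}s  W={w_cubic(t):7.1f} packets")
# The window climbs steeply from 0.8*W_loss, flattens around W_loss near
# t=K (~8 s here), then turns convex again to probe for spare bandwidth.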

C. Long-lived BIC Flows

Fig. Link utilization of long-lived BIC flows with different amounts of short-lived TCP flows.

Figure shows the results for long-lived BIC flows. Without short-lived TCP flows, BIC reaches around % link utilization. With Mbps short-lived TCP flows, link utilization is almost the same as without short-lived TCP flows. With more than Mbps of short-lived flows, such as - Mbps, the link utilization of BIC decreases slightly. The error bars for Mbps and Mbps are relatively long, which is caused by the randomization brought by the larger amount of short-lived TCP flows.

BIC has a Cwnd threshold: if W_loss is above the threshold, it uses its own β and binary search increase as its growth function; if W_loss is below the threshold, it behaves the same as AIMD. From this we can see that BIC is even more aggressive than CUBIC because of its binary search increase, and BIC's RT is even shorter. Thus, throughput lost to randomized packet losses is recovered very quickly, and short-lived TCP flows cannot contribute much to link utilization.

Figure shows the instantaneous throughput of the long-lived BIC flows. Figure (a) shows that without short-lived TCP flows, the throughput lines of the BIC flows almost never overlap; one BIC flow always has higher throughput than the other for the entire run. In Figure (b), we add Mbps short-lived TCP flows. The faster flow has more chances of being randomly dropped (e.g., the drops on the faster flow during - seconds), which gives the slower flow a chance to grow. However, because of BIC's aggressiveness, the faster flow recovers quickly, and the slower flow does not have much room to increase its Cwnd, so the aggregated throughput is not improved. With Mbps short-lived TCP flows in Figure (c), both BIC flows are impacted by short-lived TCP flows at different times. Consequently, link utilization is not improved.

Fig. Instantaneous throughput of long-lived BIC flows: (a) without short-lived TCP flows; (b) with Mbps short-lived TCP flows; (c) with Mbps short-lived TCP flows.
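A minimal sketch of the binary search increase idea follows; it is an illustration rather than the kernel implementation, the window values are hypothetical, and refinements such as BIC's slow-start and increment clamps and max probing are omitted:

# Sketch of binary search increase: each RTT the window jumps to the
# midpoint between the current window and the last known maximum,
# halving the remaining distance and recovering very quickly.

def binary_search_increase(w_start, w_max, rtts):
    w = w_start
    for i in range(1, rtts + 1):
        w = (w + w_max) / 2.0
        yield i, w

# Hypothetical numbers: last maximum 1000 packets, window cut to 800.
for rtt, w in binary_search_increase(800.0, 1000.0, 8):
    print(f"RTT {rtt}: W = {w:7.2f}")
# After 8 RTTs the window is within a packet of w_max, illustrating why
# BIC's RT is so short and random losses barely dent its throughput.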

D. Long-lived HSTCP Flows

Fig. Link utilization of long-lived HSTCP flows with different amounts of short-lived TCP flows.

Figure shows the results for long-lived HSTCP flows. We see that short-lived TCP flows cannot increase link utilization in any case; even with only Mbps short-lived TCP flows, link utilization decreases. Also, all error bars are around %.

HSTCP also has a Cwnd threshold: if W_loss is below the threshold, it behaves the same as AIMD; if W_loss is above the threshold, it uses a table-driven α ranging from 1 to 72 and β ranging from 0.5 to 0.9. From this we know that HSTCP is also an aggressive TCP variant, sometimes even more aggressive than BIC and CUBIC. Also, the RT of HSTCP is short, which means HSTCP cannot gain much bandwidth when there are randomized packet losses.

We also plot the instantaneous throughput of the HSTCP flows in Figure. Without short-lived TCP flows, Figure (a) shows the two HSTCP flows converging at around seconds. With Mbps short-lived TCP flows, Figure (b) shows that randomized packet losses make one HSTCP flow outperform the other. In Figure (c), which shows the case with Mbps short-lived TCP flows, the HSTCP flows still perform unfairly; the slower flow gets packets dropped more often (e.g., during the entire run). In all, HSTCP cannot gain more link utilization in the presence of short-lived TCP flows.

Fig. Instantaneous throughput of long-lived HSTCP flows: (a) without short-lived TCP flows; (b) with Mbps short-lived TCP flows; (c) with Mbps short-lived TCP flows.
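The table-driven behavior can be sketched as follows; the three rows are a toy excerpt in the spirit of RFC 3649 rather than the real table, and β is expressed as the remaining fraction after a loss so that it matches Equation (1):

# Sketch of HSTCP's table-driven AIMD: alpha and beta depend on the
# current window. Toy rows only; RFC 3649 tabulates many more, with
# alpha growing to about 72 and the remaining fraction to about 0.9.

HSTCP_TABLE = [   # (window threshold, alpha, beta as remaining fraction)
    (38, 1, 0.50),
    (1058, 5, 0.65),
    (21013, 33, 0.82),
]

def hstcp_params(w):
    """Pick the parameters of the largest threshold not exceeding w."""
    alpha, beta = 1, 0.5           # plain AIMD below the first threshold
    for w_min, a, b in HSTCP_TABLE:
        if w >= w_min:
            alpha, beta = a, b
    return alpha, beta

for w in (100, 5_000, 50_000):
    print(f"w={w:6d}  (alpha, beta) = {hstcp_params(w)}")
# At large windows HSTCP gets a big alpha and beta, hence a short RT:
# random losses from short-lived flows cannot desynchronize such flows.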
VI. CONCLUSION

In this paper, we presented the impact of short-lived TCP flows on the link utilization of popular high-speed TCP variants, namely TCP-SACK, CUBIC, BIC, and HSTCP. All experiments were done in a 10Gbps high-speed networking environment. Because of randomized packet drops, long-lived TCP flows generally become de-synchronized, and link utilization is therefore improved. Our findings suggest that relatively unaggressive TCP variants, such as TCP-SACK and CUBIC, achieve the highest TCP link utilization with only Mbps short-lived TCP flows in 10Gbps networks, while relatively aggressive TCP variants, such as BIC and HSTCP, cannot gain more link utilization in the presence of short-lived TCP flows.

Acknowledgement: This work has been supported in part by NSF MRI Grant # (CRON project) and DEPSCoR project N---.

REFERENCES

[] Internet2, http://www.internet2.edu/.
[] National LambdaRail, http://nlr.net/.
[] Y. Wu, S. Kumar, and S. Park, "Measurement and performance issues of transport protocols over 10Gbps high-speed optical networks," Computer Networks.
[] L. Xue, C. Cui, S. Kumar, and S. Park, "Experimental evaluation of the effect of queue management schemes on the performance of high speed TCPs in 10Gbps network environment," in Proc. International Conference on Computing, Networking and Communications (ICNC). IEEE, 2012.
[] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP selective acknowledgment options," RFC 2018, 1996.
[] S. Floyd, "HighSpeed TCP for large congestion windows," RFC 3649, 2003.
[] L. Xu, K. Harfoush, and I. Rhee, "Binary increase congestion control (BIC) for fast long-distance networks," in Proc. IEEE INFOCOM, 2004.
[] S. Ha, I. Rhee, and L. Xu, "CUBIC: A new TCP-friendly high-speed TCP variant," ACM SIGOPS Operating Systems Review, vol. 42, no. 5, pp. 64-74, 2008.
[] P. Yang, W. Luo, L. Xu, J. Deogun, and Y. Lu, "TCP congestion avoidance algorithm identification," in Proc. 31st International Conference on Distributed Computing Systems (ICDCS). IEEE, 2011.
[] Internet2 NetFlow Data, http://netflow.internet2.edu.
[] S. Ha, L. Le, I. Rhee, and L. Xu, "Impact of background traffic on performance of high-speed TCP variant protocols," Computer Networks.
[] L. Xue, S. Kumar, C. Cui, and S.-J. Park, "An evaluation of fairness among heterogeneous TCP variants over 10Gbps high-speed networks," in Proc. IEEE Conference on Local Computer Networks (LCN).
[] CRON, "CRON Project: Cyberinfrastructure for Reconfigurable Optical Networking Environment," http://www.cron.loni.org/.
[] S. Ha, Y. Kim, L. Le, I. Rhee, and L. Xu, "A step toward realistic performance evaluation of high-speed TCP variants," in Fourth International Workshop on Protocols for Fast Long-Distance Networks (PFLDNet), 2006.
[] T. Yoshino, Y. Sugawara, K. Inagami, J. Tamatsukuri, M. Inaba, and K. Hiraki, "Performance optimization of TCP/IP over 10 gigabit Ethernet by precise instrumentation," in Proc. ACM/IEEE Conference on Supercomputing. IEEE Press, 2008.
[] J. Sommers, H. Kim, and P. Barford, "Harpoon: a flow-level traffic generator for router and network tests," ACM SIGMETRICS Performance Evaluation Review, vol. 32, no. 1, 2004.