A study on fairness and latency issues over high speed networks and data center networks


Louisiana State University, LSU Digital Commons, LSU Doctoral Dissertations, Graduate School, 2014

A study on fairness and latency issues over high speed networks and data center networks
Lin Xue, Louisiana State University and Agricultural and Mechanical College, lxue2@tigers.lsu.edu
Follow this and additional works at LSU Digital Commons. Part of the Computer Sciences Commons.

Recommended Citation: Xue, Lin, "A study on fairness and latency issues over high speed networks and data center networks" (2014). LSU Doctoral Dissertations. This Dissertation is brought to you for free and open access by the Graduate School at LSU Digital Commons. It has been accepted for inclusion in LSU Doctoral Dissertations by an authorized graduate school editor of LSU Digital Commons. For more information, please contact gradetd@lsu.edu.

A STUDY ON FAIRNESS AND LATENCY ISSUES OVER HIGH SPEED NETWORKS AND DATA CENTER NETWORKS

A Dissertation Submitted to the Graduate Faculty of the Louisiana State University and Agricultural and Mechanical College in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science and Engineering

by
Lin Xue
B.S., China Agricultural University, 2005
M.S., Beijing University of Posts and Telecommunications, 2008
May 2014

ACKNOWLEDGMENTS

My most sincere gratitude and appreciation go to Dr. Seung-Jong Park as my major professor and committee chair. I thank him for introducing me to this fantastic world of scientific research. I thank him for jumpstarting my knowledge, and for his guidance, encouragement and support throughout my Ph.D. studies. This dissertation would not have been possible without him. My deepest appreciation also extends to Dr. Costas Busch and Dr. Feng Chen for taking their time to serve as my committee members. Their valuable feedback and comments have improved the quality of this research. I thank Dr. Mostafa Elseifi from the Department of Civil and Environmental Engineering for serving on my committee as Dean's Representative. His suggestions and comments are highly appreciated. I also would like to thank all my lab mates and colleagues: Dr. Cheng Cui, Dr. Suman Kumar, Praveenkumar Kondikoppa, Chui-hui Chiu, Georgi Stoyanov, Richard Platania, and all people who helped me with testbed setup and paper writing and constantly supported me through all the stress. I deeply appreciate their time and help during my Ph.D. study. Finally, I will be forever grateful to my parents, my wife Yi Liang and all my college friends. Without their support and help, this journey would not have been possible. This work has been supported in part by NSF MRI and NSF CC-NIE grants, a GENI grant, and a DEPSCoR project.

TABLE OF CONTENTS

Acknowledgments
List of Tables
List of Figures
Abstract
Chapter 1: Introduction
    Background
        High Speed Networks
        Data Center Networks
    Summary of Contribution
    Outline of Dissertation
Chapter 2: Experimental Evaluation of the Performance of 10Gbps High Speed Networks
    An Experimental Study of the Impact of Queue Management Schemes and TCP Variants on 10Gbps High Speed Networks
        Overview
        Related Works
        Network Performance Consideration
        Experimental Preparation
        Results of Experimental Evaluation
        Summary
    A Study of Fairness among Heterogeneous TCP Variants over 10Gbps High-speed Optical Networks
        Overview
        Related Works
        Experimental Design
        Evaluation Results and Discussion
        Summary
Chapter 3: AFCD: an Approximated-Fair and Controlled-Delay Queuing Scheme for High Speed Networks
    Overview
    Related Works
    Design of AFCD
        Design Goals
        Architecture Design
        Algorithm
    Experimental Evaluation
        1 CUBIC Flow and 1 TCP-SACK Flow
        1 CUBIC Flow and 1 UDP Flow
        3 Flows Case: 1 CUBIC, 1 HSTCP, and 1 TCP-SACK Flow
        Many Multiplexed Heterogeneous TCP Flows
        Reduced RTT
        With Short-lived TCP Flows
        CPU and Memory Usage
    Conclusion of AFCD over High Speed Networks
Chapter 4: FaLL: a Fair and Low Latency Queuing Scheme for Data Center Networks
    Overview
    Related Works
    End Servers and Switches
        End Servers
        Switches
    Design Overview of FaLL
    Algorithm
        Enqueue Function
        Efficiency Module
        Fairness Module
    Experimental Evaluation and Discussion
        Experimental Setup
        Results of Y-shape Topology
        Results of Tree Topology
    Conclusion of FaLL over Data Center Networks
Chapter 5: Conclusion and Future Works
References
Vita

LIST OF TABLES

2.1 Parameter setup for 4 queue management schemes
2.2 Fairness: Homogeneous TCP vs Heterogeneous TCP (buffer size = 20% of BDP, RTT = 120ms, heterogeneous TCP variants include CUBIC, HSTCP, and TCP-SACK)
3.1 Parameter setup for 6 queue management schemes
4.1 Experimental setup for all solutions

LIST OF FIGURES

1.1 10Gbps high speed campus networks connected by Internet2 for large scale scientific computing, distance learning, etc.
1.2 A data center network showing aggregation and top of rack (TOR) switches
2.1 CRON system architecture consisting of routers, delay links, and high-end workstations operating up to 10Gbps bandwidth
2.2 Experimental topology: dumbbell topology
2.3 Link utilization as a function of buffer size in each TCP variant
2.4 Fairness among 20 flows as a function of buffer size in each TCP variant (RTT = 120ms)
2.5 RTT fairness as a function of RTT in each TCP variant
2.6 Delay as a function of buffer size in each TCP variant
2.7 Memory consumption as a function of buffer size in each TCP variant
2.8 Experimental topology
2.9 Fairness for 1 TCP-SACK, 1 CUBIC, and 1 HSTCP flow (RTT = 120ms)
2.10 Instant Cwnd for 3 TCP flows (Buffer Size = 20% BDP, RTT = 120ms)
2.11 Instant Cwnd for 3 TCP flows (Buffer Size = 1% BDP, RTT = 120ms)
2.12 Fairness for multiple long-lived flows: 10 TCP-SACK, 10 CUBIC, and 10 HSTCP flows (RTT = 120ms)
2.13 Fairness for multiple long-lived flows with short-lived flows: 10 TCP-SACK, 10 CUBIC, 10 HSTCP flows, and short-lived flows (RTT = 120ms)
2.14 RTT fairness for 2 homogeneous TCP flows (Buffer Size = 20% BDP, RTT of one flow is fixed to 120ms, RTT of the other flow is changed from 30ms to 240ms, shown on the x-axis)
2.15 RTT fairness for heterogeneous TCP flows without short-lived flows (two kinds of TCP have RTTs fixed to 120ms, the other TCP's RTT is changed from 30ms to 240ms, shown on the x-axis; Buffer Size = 20% BDP)
2.16 RTT fairness for heterogeneous TCP flows with short-lived flows (two kinds of TCP have RTTs fixed to 120ms, the other TCP's RTT is changed from 30ms to 240ms, shown on the x-axis; Buffer Size = 20% BDP)
2.17 Link utilization for heterogeneous TCP flows
2.18 A topology for a large size high-speed optical network
2.19 Fairness for a large size high-speed optical network: 24 CUBIC, 24 HSTCP, 24 TCP-SACK flows, and short-lived flows (RTT = 120ms)
3.1 Functional block diagram of AFCD queuing
3.2 Dumbbell experimental topology with 10Gbps environment
3.3 1 CUBIC and 1 TCP-SACK: performance of QM schemes when 1 CUBIC flow and 1 TCP-SACK flow are competing at the 10Gbps bottleneck (Propagation Delay = 120ms, Queue Size = 100% BDP)
3.4 1 CUBIC and 1 TCP-SACK: instant congestion window size and delay for different QM schemes; 1 CUBIC flow and 1 TCP-SACK flow are competing at the 10Gbps bottleneck (Propagation Delay = 120ms, Queue Size = 100% BDP)
3.5 1 CUBIC and 1 UDP: performance of QM schemes when 1 CUBIC flow and 1 UDP flow are competing at the 10Gbps bottleneck (Propagation Delay = 120ms, Queue Size = 100% BDP)
3.6 3 flows case: performance of queue management schemes in 10Gbps high speed networks; 1 CUBIC, 1 HSTCP, and 1 TCP-SACK flow (Propagation Delay = 120ms, Queue Size = 100% BDP)
3.7 Many multiplexed heterogeneous TCP flows: performance of queue management schemes in 10Gbps high speed networks; 10 CUBIC, 10 HSTCP, and 10 TCP-SACK flows (Propagation Delay = 120ms, Queue Size = 100% BDP)
3.8 Reduced RTT: performance of queue management schemes; 10 CUBIC, 10 HSTCP, and 10 TCP-SACK flows (RTT = 60ms, Queue Size = 100% BDP, Bottleneck = 10Gbps)
3.9 With short-lived TCP flows: performance of queue management schemes in 10Gbps high speed networks; 10 CUBIC, 10 HSTCP, 10 TCP-SACK flows, and short-lived TCP flows (Propagation Delay = 120ms, Queue Size = 100% BDP)
3.10 CPU and memory usage of queue management schemes: 10 CUBIC, 10 HSTCP, 10 TCP-SACK flows, and short-lived TCP flows (Propagation Delay = 120ms, Queue Size = 100% BDP)
4.1 Functional block diagram of FaLL
4.2 Experimental topologies: Y-shape (top) and Tree topology (bottom); all nodes, switches, and links have 10Gbps capacity
4.3 TCP outcast performance of 7 flows (Y-shape topology, Flow1 to Flow6 are from s1 and Flow7 is from s2)
4.4 Queuing delay and throughput (Y-shape topology, 6 long-lived TCP flows from s1 and 1 long-lived TCP flow from s2)
4.5 Without short-lived flows: Tree topology, 9 long-lived flows competing for 10Gbps bandwidth (s1 - s9 each send 1 flow to r1)
4.6 With short-lived flows: Tree topology with short-lived flows, 8 long-lived flows competing for 10Gbps bandwidth (s8 sends many short-lived flows to r1, the other 8 senders each send 1 long-lived flow to r1)
4.7 Multiplexed traffic: Tree topology with short-lived flows, 40 long-lived flows competing for 10Gbps bandwidth (s8 sends many short-lived flows to r1, the other 8 senders each send 5 long-lived flows to r1)
4.8 Different start time: Tree topology with short-lived flows, 40 long-lived flows competing for 10Gbps bandwidth; 20 long-lived flows start 10 seconds earlier than the other 20 long-lived flows (s1 - s4 first each send 5 flows to r1; after 10 seconds, s5, s6, s7 and s9 each send 5 flows to r1; s8 sends many short-lived flows to r1)

ABSTRACT

Newly emerging computer networks, such as high speed networks and data center networks, have characteristics of high bandwidth and high burstiness which make it difficult to address issues such as fairness, queuing latency and link utilization. In this study, we first conduct an extensive experimental evaluation of the performance of 10Gbps high speed networks. We found that inter-protocol unfairness and large queuing latency are two outstanding issues in high speed networks and data center networks. There have been several proposals to address fairness and latency issues at the switch level via queuing schemes. These queuing schemes have been fairly successful in addressing either the fairness issue or large latency, but not both at the same time. We propose a new queuing scheme called Approximated-Fair and Controlled-Delay (AFCD) queuing that meets the following goals for high speed networks: approximated fairness, controlled low queuing delay, high link utilization and simple implementation. The design of AFCD utilizes a novel synergistic approach by forming an alliance between approximated fair queuing and controlled delay queuing. AFCD maintains a very small amount of state information in estimating the sending rates of flows and makes drop decisions based on a target delay of individual flows. We then present FaLL, a Fair and Low Latency queuing scheme that meets stringent performance requirements of data center networks: fair share of bandwidth, low queuing latency, high throughput, and ease of deployment. FaLL uses an efficiency module, a fairness module and a target-delay based dropping scheme to meet these goals. Through rigorous experiments on a real testbed, we show that FaLL outperforms various peer solutions in a variety of network conditions over data center networks.

CHAPTER 1
INTRODUCTION

1.1 Background
In this section, we introduce the background of this work: high speed networks and data center networks.

1.1.1 High Speed Networks
Advances in high speed networking technology coincide with the need for network infrastructure development to support scientific computing, distance learning, e-commerce, health, and many other unforeseen future applications. Consequently, 10Gbps high speed networks, such as Internet2 [36], NLR (National LambdaRail) [51], and LONI (Louisiana Optical Network Initiative) [48], have been the first ones developed to connect a wide range of academic institutes. Figure 1.1 shows a 10Gbps network infrastructure encompassing the campus network of LSU (Louisiana State University) and MAX (Mid-Atlantic Crossroads). Network operators use Internet2 network stitching [27] to federate all campus networks together, and core routers are responsible for connections across the various facilities located on campus. The 10Gbps high speed network enables resource sharing among different departments at different institutes. As gigabit connectivity is easily available for gigabit-based PCs, servers, data center storage and high performance computing, gigabit networking technology is preferred by many organizations. Therefore, various organizations are increasingly migrating to gigabit links in order to grow their networks to support new applications and traffic types. The availability of multi-gigabit switches/routers provides the opportunity to build high-performance, high-reliability networks, but only if correct design approaches are followed at both the hardware and software levels.

FIGURE 1.1. 10Gbps high speed campus networks connected by Internet2 for large scale scientific computing, distance learning, etc.

To meet the demand of high speed networks, several TCP variants have been proposed. Extensive performance evaluations of these protocols convinced network users to adopt several of these seemingly promising proposals. Furthermore, open sourcing of these protocols gives users the flexibility to select the protocol of their choice. Therefore, current computer networks are ruled by heterogeneous TCP variants [77] such as TCP-SACK, HighSpeed TCP (HSTCP) [22], CUBIC TCP (CUBIC) [31], etc. TCP flows are dominant in Internet2 traffic [37], accounting for around 85% of flows, while the rest are UDP flows, with most of the network usage being long-lived bulk data transfer. One of the key design goals of the proposed TCP variants is fair bandwidth sharing with other competing TCP flows. However, our study [76] discloses severe inter-protocol unfairness among heterogeneous long-lived TCP flows in high speed networks, where faster TCP flows consume most of the network bandwidth, whereas slow ones starve. Packet delay is an equally key high speed network performance measure, alongside throughput and fairness. Although high speed networks do not have the Bufferbloat

problem [28], queuing delay in high speed networks needs careful consideration. For example, in a typical setting of a 10Gbps router [15] in high speed networks, the output buffer of a router could be as large as 89MB; draining such a full buffer at 10Gbps alone takes roughly 70ms, which may create large queuing delay in high speed networks. The large queuing delay may create a bad user experience for live concerts [35], video streaming, etc., over high speed networks like Internet2. Also, popular applications such as online shopping, Voice over IP, HDTV, banking, and gaming require not only high throughput but also low delay. In fact, the importance of packet delay is growing with the emergence of a new class of high performance computing applications, such as algorithmic trading and various data center applications with soft real-time deadlines, that demand extremely low end-to-end latency (ranging from microseconds to orders of milliseconds). It is clear that large latency may result in poor performance of the networks. Therefore, predictable and controllable queuing delay is highly desired in contemporary high speed networks. Performance of heterogeneous TCP flows depends on router parameters [66], but the high bandwidth, high latency, and bursty nature of high speed networks make it a challenging task to design queue management (QM) schemes that ensure minimal latency while maintaining a fair share of bandwidth among heterogeneous flows. There have been considerable efforts to address these challenges through various QM proposals [61, 63, 52, 50]. The QM scheme in [61] uses stateful information of the flows to classify incoming packets and puts the classified packets into different queues, which is not suitable for large scale networks. The QM scheme in [63] is stateless in the core layer, but it needs stateful information in the edge layer. In [52], the authors proposed a stateless active queue management (AQM) solution that achieves approximate fairness for heterogeneous flows. Although the fairness problem has been addressed in these QM schemes, controllable and predictable queuing delay has been overlooked. The recently proposed controlled delay (CoDel) AQM scheme [50] focuses on providing

extremely low queuing delay, and CoDel works well in a wide range of scenarios. While CoDel provides extremely low queuing delay, the fairness issue is left unaddressed. Only very recently has there been an attempt to bring fair queuing into CoDel: a variation of CoDel has been implemented in the Linux kernel [19] that provides fairness by classifying different flows into different CoDel queues. However, the history of research on classification-based QM suggests that any approach requiring a huge amount of state information is not practical for large scale networks.

1.1.2 Data Center Networks
Data centers are growing in capacity to host diverse Internet-scale web applications such as social networking, web search, video streaming, advertisement, and so on. Large IT enterprises such as Google, Facebook, Amazon, and Microsoft build their own data center networks to provide large scale online services. As more of society relies on the growing use of pervasive devices, creating new demand for storage and computing facilities, businesses are increasingly moving to a wholesale data center model to take advantage of economies of scale [6]. Therefore, data center networks are becoming the cornerstone of ever-evolving web applications. Web technology trends suggest that low latency, fair share of bandwidth among applications, and high throughput are the major performance requirements that data center networks are expected to meet. In fact, low latency is becoming a stringent performance requirement for data center networks because of the real-time nature of many applications that have become popular, whose performance is critical to service revenue. For example, in algorithmic high frequency trading, where market data must arrive with minimal latency (of the order of microseconds), financial service providers rely on high speed networks that must provide low latency to carry these computational transactions [67]. Likewise, in retail web services, a single page request may require calling more than 100 services

and because these services can be interdependent, low latency is a critical factor for user experience [18]. Many data center applications require task completion within their deadlines. Missing the deadlines may create a bad user experience or even failure of the job, resulting in lost revenue [17]. Fair bandwidth sharing is another critical performance metric for data center networks. Network traffic from diverse applications is multiplexed at network switches in data center networks. This traffic may come from different data center tenants [42]. Enforcing fair share of bandwidth among these non-cooperating applications is considered to be an issue [59] that data center networks must address. Also, the multi-rooted tree (Figure 1.2), a natural topology of data centers, is exposed to severe flow unfairness known as TCP outcast [57]. When a large set of flows and a small set of flows come in at two input ports of a switch and come out at one common output port, the small set of flows suffers significantly from throughput starvation. TCP outcast mainly occurs at drop-tail queues with a smaller TCP minimum retransmission timeout, which is a common case in commodity data center networks. Modern data center networks are complex network systems that contain hundreds to thousands of servers, making it challenging to address the above issues. DCTCP [2], an ECN (Explicit Congestion Notification) based server-side congestion control mechanism, has been successful in solving TCP incast, queue buildup, and buffer pressure in data center networks. Although DCTCP has been publicly available as a kernel patch for Linux, regular TCP variants are still the default options in widely used open source operating systems. To utilize the infrastructure of data center networks, network operators and users may have their own choice of transport protocols on servers. For example, there are heterogeneous regular TCP variants running in the Internet [77]. We argue that the queuing mechanism at the layer-2 switch is the key to improvement in fairness and latency while achieving high throughput. Figure 1.2 shows a data center

network with switches connected to each other and to servers.

FIGURE 1.2. A data center network showing aggregation and top of rack (TOR) switches

Our study is based on the observation that in data center networks, aggregate queuing delays caused by layer-2 switches are a major contributor to in-network latencies [68, 2]. Although a data center usually resides in a single building with very low propagation delay, the high bandwidth and high burstiness still create high queuing delay, resulting in high latency in traditional switches [2, 3]. Moreover, since non-cooperating applications and multiple tenants coexist in data center networks, the fairness issue is better tackled at the switch level through the queuing scheme [66, 75].

1.2 Summary of Contribution
The main contributions of this study fall into the following three parts:
1. Through extensive experimental evaluation of 10Gbps high speed networks, we present for the first time the interplay of queue management schemes, high speed TCP variants, and a variety of buffer sizes over a 10Gbps high speed networking environment. We found it difficult to address issues such as fairness, low queuing delay and high link utilization. Current high speed networks carry heterogeneous TCP flows, which makes it even more challenging to address these issues. We identify two critical issues that high speed networks and data center networks now face: inter-protocol unfairness and large queuing latency.

2. We propose a new queuing scheme called Approximated-Fair and Controlled-Delay (AFCD) queuing for high speed networks that aims to meet the following design goals: approximated fairness, controlled low queuing delay, high link utilization and simple implementation. The design of AFCD utilizes a novel synergistic approach by forming an alliance between approximated fair queuing and controlled delay queuing. It uses a very small amount of state information in estimating the sending rates of flows and makes drop decisions based on a target delay of individual flows. Through experimental evaluation in a 10Gbps high speed networking environment, we show that AFCD meets our design goals.
3. We present FaLL, a Fair and Low Latency queuing scheme for data center networks that meets the following performance requirements: fair share of bandwidth, low queuing latency, high throughput, and ease of deployment. FaLL uses an efficiency module, a fairness module and a target-delay based dropping scheme to meet these goals. FaLL requires a very small amount of state information. Through rigorous experiments on a real testbed, we show that FaLL outperforms various peer solutions in a variety of network conditions over data center networks.

1.3 Outline of Dissertation
The rest of this dissertation is organized as follows: Chapter 2 is the performance evaluation of 10Gbps high speed networks. We first set up a real high speed networking testbed. We then present the experimental evaluation of the interrelationship among queue management schemes, high speed TCP variants, and various buffer sizes. The fairness issue and the latency issue are found to be the most outstanding issues over 10Gbps high speed networks. In Chapter 3, we propose the AFCD (Approximated-Fair and Controlled-Delay) queuing for high speed networks. AFCD addresses the fairness issue and the latency

issue at the same time. We explain the detailed design and algorithm of AFCD. Then we conduct an experimental evaluation of AFCD in a high speed networking environment. We present the FaLL (Fair and Low Latency) queuing scheme for data center networks in Chapter 4. FaLL enforces fair share of bandwidth and low queuing latency with minimal state information. We present the design, algorithm, and experimental evaluation of FaLL. We draw our conclusion in Chapter 5.

CHAPTER 2
EXPERIMENTAL EVALUATION OF THE PERFORMANCE OF 10GBPS HIGH SPEED NETWORKS

We first conduct an experimental evaluation of the performance of 10Gbps high speed networks in this chapter. Performance results are presented for important metrics of interest such as link utilization, intra-protocol fairness, inter-protocol fairness, RTT fairness, queuing delay and computational complexity. The results of the evaluation suggest two outstanding issues in contemporary networks: fairness and latency.

2.1 An Experimental Study of the Impact of Queue Management Schemes and TCP Variants on 10Gbps High Speed Networks
In this section, we conduct a comprehensive experimental study of the impact of queue management schemes and TCP variants on the performance of 10Gbps high speed networks.

2.1.1 Overview
Over the years, the Drop-tail queue mechanism has been under scrutiny and has been found to be an unsuitable choice to address issues such as TCP (transmission control protocol) global synchronization, underutilization of link bandwidth, high packet drop rate, high transmission delay, and high queuing delay. To address these issues, Random Early Detection (RED) [24] was proposed as an active queue management (AQM) solution in 1993. Thereafter, several AQMs, such as CHOKe (CHOose and Keep for responsive flows, CHOose and Kill for unresponsive flows) [53], SFB (Stochastic Fair Blue) [21], etc., have been proposed. In spite of so many AQM proposals, there has been a significant lack of comparative performance evaluation studies on real production

networks to permit any conclusion on the merits of these QM schemes. There are two major consequences of this lack of comparative studies. Firstly, although these AQMs are theoretically superior to Drop-tail, they are still scarce in production networks; secondly, there is not much support from testing for the development of future QM schemes. To address the challenges in the design and development of QM schemes for future networks, our focus is to compare the performance of competing QM proposals in a systematic and repeatable manner in a 10Gbps high speed network environment. Simulation and experiment are the two main methods to perform such studies. Most of the evaluation research on AQM schemes relies on simulation models, such as Network Simulator 2 (ns-2) or the OPNET modeler. However, a linear increase in bandwidth demands an exponential increase in CPU time and memory usage for these discrete simulation methods [46], which makes it very difficult to finish the simulation of high speed links in a reasonable time. Besides, addressing issues in the design and real network deployment of QM schemes demands a real experimental network environment [25]. Since it could be expensive for large businesses or ISP networks to deploy AQM schemes in the Internet, recent network research [49] tries to check whether it is beneficial to deploy AQM schemes in core routers. The existing research focuses on the nature of the AQM scheme itself, but overlooks the effect of the AQM scheme on transport layer congestion control. Often, the core of the network is a playground for transport protocols, and therefore it is difficult for a network service provider to address issues related to the performance of their network. Due to the feedback nature of AQM schemes, each TCP will behave differently according to its own congestion control algorithm. Especially in production networks (or data centers), the pairing of a TCP variant with an appropriate QM scheme to complement that TCP is highly

desirable. In other words, it is highly desirable to know the impact of QM schemes on transport protocols. Therefore, in this study, the performance metrics for QM schemes are chosen to be very TCP specific. We also consider metrics such as memory usage and CPU requirement, which we find to be in direct correlation with ease of deployment and operational costs. We choose popular QM schemes for evaluation, including Drop-tail, RED, CHOKe, and SFB. Among high speed TCP variants, we select CUBIC (CUBIC TCP) [31], HSTCP (HighSpeed TCP) [22], RENO (TCP-Reno), and VEGAS (TCP-Vegas) [10]. It has been noted that CUBIC, HSTCP and RENO account for 2/3 of all TCP variants used on the Internet [77]. VEGAS represents delay-based TCP, and is the only delay-based TCP supported in the current Linux kernel. We present the results in terms of link utilization, intra-protocol fairness, RTT fairness, delay, and computational complexity.

2.1.2 Related Works
To evaluate the performance of TCP variants, the authors in [29] and [30] evaluated high speed TCP protocols in a realistic high speed networking environment. The high speed TCP protocols were evaluated against themselves in terms of several TCP performance metrics. However, when evaluating all TCP protocols, the authors did not consider the impact of queue management schemes in the router. In [8], the authors presented a framework to evaluate AQM schemes. Five metrics were chosen to characterize the overall network performance of AQM schemes. The authors suggested simulation environments and scenarios, including ns-2 interfaces, traffic models and network topologies. As continuing work [9], the authors gave a simulation-based evaluation and comparison of a subset of AQM schemes. Their framework was based on ns-2 simulation, which is different from a real-world experiment, especially for 10Gbps high speed networks.

The authors in [14] evaluated newly proposed AQM schemes with some specific network scenarios. They proposed a common testbed for the evaluation of AQM schemes, which includes a specification of the network topology, link bandwidths and delays, traffic patterns, and metrics for the performance evaluation. Also, the authors realized that AQM schemes need to cooperate closely with TCP. However, they evaluated AQM schemes over regular Internet speeds rather than 10Gbps, and only presented results for TCP-RENO in their evaluation. Moreover, the authors in [40] considered router buffer sizing in the evaluation of high speed TCP protocols. They conducted an experimental evaluation of CUBIC TCP with small router buffers (e.g., a few tens of packets). Their work highlighted the need for a thorough investigation of the performance of high speed TCP variants with small router buffers for newly emerging high speed networks. In a recent work [47], the authors found a tradeoff between throughput and fairness in high speed networks. The network performance was evaluated by a model-based simulation method, which shows some bottlenecks in evaluating high speed networks. In [74], the authors evaluated the impact of queue management schemes on the performance of TCP over 10Gbps high speed networks. However, the detailed setup of a 10Gbps environment was not unveiled, and some research results were not presented, such as the TCP-VEGAS result, the intra-protocol fairness result, memory consumption, etc. In [76], fairness was evaluated thoroughly among heterogeneous high speed TCP variants by using different queue management schemes with varying buffer sizes, but other metrics have not been fully evaluated yet. The authors in [75] extensively evaluated both fairness and latency of AQM schemes over 10Gbps high speed networks. They proposed a new AQM scheme which works well in terms of fairness and latency over 10Gbps high speed networks.

In this work, we consider the most important metrics which need to be evaluated for the interrelationship between TCP variants and queue management schemes. Performance metrics have been defined previously in [23]. The authors discussed the metrics to be considered to evaluate congestion control mechanisms for the Internet. They brought up 11 metrics in total, which could be used for evaluating new or modified transport layer protocols.

2.1.3 Network Performance Consideration
Previous experimental studies treated the evaluation of TCP, the evaluation of AQMs, and the effect of router parameters running a particular AQM on TCP separately. Also, these studies do not include 10Gbps bandwidth consideration. In this study, we propose a complementary approach of combining TCP, router parameters and a high speed network on the order of 10Gbps. The following equation summarizes the dynamics of the TCP congestion window W(t) at time t:

    W(t) = W_max * β + α * (t / RTT)                    (2.1)

where W_max is the congestion window size just before the last window reduction, RTT is the round trip delay of the flow, and α and β are the increase and decrease parameters respectively. W_max depends on router parameters, such as the penalty signal P of the queue management scheme, the router buffer size Q, and the bottleneck capacity C:

    W_max ∝ (Q * C) / P                                 (2.2)

From the equation above, it is clear that the performance of a network depends on the combination of TCP variants, queue management schemes and router buffer sizes. Our argument is consistent with [64], which concluded that the TCP sending rate depends on both the congestion control algorithms and the queue management

schemes in the links. A real network measurement on high speed networks shows that burstiness increases with bandwidth, because packets degenerate into extremely bursty flows with data rates going beyond the available bandwidth for short periods of time [26]. It is clear that such surges can easily generate severe penalty signals and further degrade the overall network performance. The penalty signals for different queue management schemes are also different. When the router buffer is full, a Drop-tail queue drops all incoming packets. A RED queue drops packets early and randomly according to the queue length. CHOKe extends RED to compare random packets and drop packets from fast flows. SFB uses a Bloom filter to identify fast flows and drops packets from them. Buffer size at the routers also impacts network performance. Router buffers cause queuing delay and delay variance, and in the case of underflow, throughput degradation is observed. Appropriate sizing of buffers has been considered a difficult task for router or switch manufacturers. Given the importance of router parameters, different high speed TCP variants will differ in performance for the same router parameters. We elaborate on this point by considering the impact of penalty signals on the congestion windows of the TCP variants in consideration, as below:
1) Traditional TCP's AIMD algorithm has the increase parameter α and decrease parameter β set to 1 and 0.5 respectively.
2) HSTCP's increase parameter α and decrease parameter β are functions of the current window size, namely α(W) and β(W). The range of α(W) could be from 1 to 73 packets, and β(W) from 0.5 to 0.1.
3) CUBIC updates the congestion window according to a cubic function as shown below:

    W_CUBIC = C * (t - (W_max * β / C)^(1/3))^3 + W_max    (2.3)

where C is a scaling factor, t is the elapsed time since the last window reduction, W_max is the window size just before the last window reduction, and β is the decrease parameter.
4) VEGAS is a delay-based TCP variant. It has two thresholds, α and β, to control the amount of extra data in flight, i.e., T_extra = T_expected - T_actual, where T_expected is an estimate of the expected throughput calculated as T_expected = window size / smallest measured RTT. The window size of VEGAS is updated as follows: if T_extra < α, the window size is increased by 1; if α < T_extra < β, the window size is not changed; if T_extra > β, the window size is decreased by 1.
A detailed understanding of the many facets of network parameters is critical for evaluating the performance of networking protocols, for assessing the effectiveness of proposed protocols, and for developing the next generation of high speed networks. In this work, we narrow down our focus to three key components that affect the performance of 10Gbps networks: TCP variants, queue management schemes, and router buffer size.

2.1.4 Experimental Preparation
1. CRON Setup
CRON [16] is an emulation-based 10Gbps high speed testbed, which is a cyberinfrastructure of reconfigurable optical networking environment that provides multiple networking testbeds operating up to 10Gbps bandwidth. As shown in Figure 2.1, CRON provides users with automatic configuration of arbitrary network topologies with 10Gbps bandwidth. Also, CRON can be federated with other 10Gbps high speed networks, such as Internet2, NLR, and LONI. Demonstrations of how to use the CRON testbed can be found in [1].
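Before moving on to the testbed details, a minimal sketch ties together the window models above. It simply evaluates Equations 2.1 and 2.3 as functions of the time since the last window reduction; the parameter values below (W_max, RTT, and CUBIC's C and β) are illustrative defaults drawn from common conventions, not values measured in the experiments of this chapter.

```python
# Illustrative sketch of the window models in Equations 2.1 and 2.3.
# Parameter values are illustrative defaults, not testbed measurements.

def aimd_window(t, w_max, rtt, alpha=1.0, beta=0.5):
    """Equation 2.1: linear growth from beta*W_max at rate alpha per RTT."""
    return w_max * beta + alpha * (t / rtt)

def cubic_window(t, w_max, beta=0.2, c_cubic=0.4):
    """Equation 2.3: cubic growth around the previous maximum W_max.
    beta and C here are the CUBIC paper's default decrease parameter and
    scaling factor."""
    k = (w_max * beta / c_cubic) ** (1.0 / 3.0)   # time to return to W_max
    return c_cubic * (t - k) ** 3 + w_max

if __name__ == "__main__":
    w_max, rtt = 1000.0, 0.120        # packets, seconds (illustrative)
    for t in (0.0, 1.0, 5.0, 10.0):   # seconds since the last loss event
        print(f"t={t:5.1f}s  AIMD={aimd_window(t, w_max, rtt):8.1f}  "
              f"CUBIC={cubic_window(t, w_max):8.1f}")
```

The sketch makes the qualitative difference visible: the AIMD window grows by only a few packets per RTT after halving, while the CUBIC window climbs back toward W_max within a few seconds regardless of RTT.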

FIGURE 2.1. CRON system architecture consisting of routers, delay links, and high-end workstations operating up to 10Gbps bandwidth

As the dumbbell topology is a widely accepted network topology, we create a dumbbell topology in CRON as shown in Figure 2.2. In the topology, all nodes are Sun Fire X4240 servers, which have two quad-core 2.7GHz AMD Opteron 2384 processors, an 8 GB/s bus, 8GB RAM and 10GE network interface cards. From a software perspective, the two pairs of senders and receivers run a modified Linux kernel, which supports the TCP variants CUBIC, HSTCP, RENO, and VEGAS. The routers run a modified Linux kernel, which supports the queuing disciplines Drop-tail, RED, CHOKe, and SFB. The delay node runs a modified version of FreeBSD 8.1, which supports a 10Gbps version of Dummynet [11] with 10Gbps bandwidth and an enlarged queue size. We set the RTT on the delay node to 120ms. All the links have a 10Gbps capacity, and the bottleneck link is the one between Router1 and Router2. We set the queue disciplines at the output queue of Router1, where the congestion happens. By default, we send 10 flows from Sender1 to Receiver1 and 10 flows from Sender2 to Receiver2. We choose the number of flows to be 10 according to recent statistics of network flows on Internet2 [37], which suggested that the number of long-lived TCP flows is always several tens of flows in 10Gbps high speed networks such as Internet2.
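As a rough illustration of how this default 10+10 long-lived flow workload can be driven, the sketch below prints the Iperf invocations one run would use (a zero-copy Iperf build is described in the tuning steps below). The hostnames, run length and congestion-control choice here are placeholders, not the exact commands used on CRON.

```python
# Hedged sketch: drive the default 10+10 long-lived TCP workload with iperf.
# Hostnames and settings are placeholders; the testbed uses its own addresses
# and a zero-copy iperf build.

SENDERS = {
    "sender1": "receiver1.example.net",   # placeholder hostnames
    "sender2": "receiver2.example.net",
}
FLOWS_PER_SENDER = 10          # 10 flows from each sender, as in the text
DURATION_S = 1200              # 20-minute runs, as in the text
CONGESTION_CONTROL = "cubic"   # e.g. cubic, highspeed, reno, vegas

def commands():
    """Yield (sender, command) pairs for one experiment run."""
    for sender, receiver in SENDERS.items():
        # Select the TCP variant system-wide on the sender, then start
        # FLOWS_PER_SENDER parallel iperf streams toward its receiver.
        yield sender, ["sysctl", "-w",
                       f"net.ipv4.tcp_congestion_control={CONGESTION_CONTROL}"]
        yield sender, ["iperf", "-c", receiver, "-P", str(FLOWS_PER_SENDER),
                       "-t", str(DURATION_S), "-i", "10"]

if __name__ == "__main__":
    for host, cmd in commands():
        # In practice these would be dispatched to each sender (e.g. via ssh);
        # here we only print them.
        print(f"{host}: {' '.join(cmd)}")
```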

FIGURE 2.2. Experimental topology: dumbbell topology

The duration of each emulation test is 20 minutes, and all tests are run 7 to 10 times. We obtain the final result based on the average value.
2. System Tuning for 10Gbps
To get a systematic and repeatable 10Gbps network environment, we perform system tuning and software patching in the CRON testbed. Firstly, on the senders and receivers, which run a Linux kernel, we enlarge the default TCP buffer size for high speed TCP transmission. Following [78], we implement zero-copy Iperf to avoid the overhead of data copy from user space to kernel space, and we enable large receive offload (LRO) and TCP segmentation offload on the NICs. We also set the MTU to 9000 bytes [72]. Secondly, on the routers, the Linux default queuing discipline controller, traffic control (tc), does not support CHOKe and SFB in user space, so we patch tc(8) to support CHOKe and SFB. In kernel space, we find that for RED, the scaled parameter of the maximum queue threshold only supports a 24-bit value, which means up to only 16MB, so we change it to a 56-bit value to support a higher maximum queue threshold. In addition, we use the standard skbuff to forward packets and disable LRO on the router NICs. Thirdly, on the delay node, which runs FreeBSD 8.1, we optimize memory utilization by creating a contiguous memory space for received packets to overcome the memory fragmentation of Mbuf allocation in FreeBSD. In Dummynet, we

change the type of the bandwidth variable from int to long so that its bandwidth capacity improves from 2Gbps to 10Gbps. We also increase the value of the Dummynet hardware interrupt storm threshold.
3. Queue Parameter Setup
The queue management schemes under consideration, along with their parameters, are shown in Table 2.1. In [4], the authors suggest that a link needs only a buffer of size O(C/√N), where C is the capacity of the link and N is the number of flows sharing the link. In addition, we vary the router buffer size to examine all the combinations of TCP variants and queue management schemes. In [20], the authors suggest that buffers can be reduced even further, to a few tens of packets. Given the significance of their role, we vary the buffer size as 1%, 5%, 10%, 20%, 40%, and 100% of BDP to find out the impact of router buffer sizing on AQM schemes in 10Gbps high speed networks.

TABLE 2.1. Parameter setup for 4 queue management schemes
Queue     | Parameter Setup
Drop-tail | queue length limit: from 1% to 100% BDP
RED       | queue length limit: from 1% to 100% BDP
          | minimum threshold qth_min: 0.1 × limit
          | maximum threshold qth_max: 0.9 × limit
          | average packet size avpkt: 9000
          | maximum probability max_p: 0.02
CHOKe     | queue length limit: from 1% to 100% BDP
          | minimum threshold qth_min: 0.1 × limit
          | maximum threshold qth_max: 0.9 × limit
SFB       | queue length limit: from 1% to 100% BDP
          | increment of dropping probability (increment)
          | decrement of dropping probability (decrement)
          | Bloom filter: two 8 × 16 bins
          | target per-flow queue size: 1.5/N of total buffer size (N: number of flows)
          | maximum packets queued: 1.2 × target
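To put these buffer fractions in absolute terms for the 10Gbps, 120ms bottleneck used here, the short calculation below derives the bandwidth-delay product and, for each fraction, the buffer size and the worst-case queuing delay a completely full buffer would add. It is plain arithmetic from BDP = C × RTT, not measured output.

```python
# Buffer sizes as fractions of BDP for the 10Gbps / 120ms bottleneck,
# plus the worst-case queuing delay each buffer adds when completely full.
# Pure arithmetic (BDP = C x RTT); numbers are derived, not measured.

LINK_RATE_BPS = 10e9        # 10Gbps bottleneck
RTT_S = 0.120               # 120ms round-trip propagation delay
MTU_BYTES = 9000            # jumbo frames used on the testbed

bdp_bytes = LINK_RATE_BPS * RTT_S / 8
print(f"BDP = {bdp_bytes / 1e6:.0f} MB "
      f"(about {bdp_bytes / MTU_BYTES:.0f} jumbo packets)")

for fraction in (0.01, 0.05, 0.10, 0.20, 0.40, 1.00):
    buf_bytes = fraction * bdp_bytes
    max_queue_delay_ms = buf_bytes * 8 / LINK_RATE_BPS * 1e3
    print(f"{fraction:>5.0%} BDP: {buf_bytes / 1e6:7.1f} MB, "
          f"max queuing delay {max_queue_delay_ms:6.1f} ms")
```

Note that the 100% BDP case corresponds to up to 120ms of added queuing delay, which is consistent with the later observation that some schemes exhibit queuing delays comparable to the propagation delay.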

4. Performance Metrics
(1) Link Utilization. Link utilization is the percentage of the total bottleneck capacity utilized during an experiment run.
(2) Intra-protocol Fairness. Intra-protocol fairness represents the fairness among all TCP flows. Long-term flow throughput is used for computing fairness according to Jain's fairness index [39].
(3) RTT Fairness. RTT fairness is the fairness among TCP flows with different RTTs. Longer-RTT TCP flows suffer from lower throughput, while shorter-RTT flows get more throughput.
(4) Delay. Delay is the average end-to-end delay experienced by the flows, also known as the round-trip delay (RTT) of packets across the bottleneck paths. It includes the queuing delay created by the queue at the router.
(5) Computational Complexity. Computational complexity is the space and time complexity of the queue management scheme's algorithm, measured as memory consumption and CPU usage on the servers in our experiments.

2.1.5 Results of Experimental Evaluation
1. Link Utilization
We vary the queue buffer size to observe the link utilization. Figures 2.3(a), 2.3(b), 2.3(c), and 2.3(d) show link utilization of the queue management schemes as a function of buffer size for CUBIC, HSTCP, RENO with SACK, and VEGAS respectively. With only 1% BDP buffer size on the bottleneck link, almost all of the queue management schemes for all TCP variants show more than 85% link utilization, which is consistent with previous router buffer sizing research [4], [45]. And if the buffer size reaches 10% of BDP, almost all the queue management schemes under all TCPs get more than 90% link utilization, except for TCP-VEGAS.
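Jain's fairness index, used for the intra-protocol and RTT fairness metrics above, is computed from per-flow long-term throughputs as J = (Σx_i)² / (n · Σx_i²), which ranges from 1/n (one flow takes everything) to 1 (perfect fairness). A minimal sketch, with made-up throughput values for illustration:

```python
# Jain's fairness index over per-flow long-term throughputs:
#   J(x1..xn) = (sum xi)^2 / (n * sum xi^2), with 1/n <= J <= 1.
# The throughput lists below are illustrative, not measured data.

def jains_index(throughputs):
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

if __name__ == "__main__":
    # 20 flows sharing a 10Gbps bottleneck: perfectly fair vs. skewed share.
    fair = [0.5] * 20                         # Gbps each
    skewed = [1.5] * 5 + [0.1] * 15           # a few fast flows dominate
    print(f"fair share  : J = {jains_index(fair):.3f}")
    print(f"skewed share: J = {jains_index(skewed):.3f}")
```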

FIGURE 2.3. Link utilization as a function of buffer size in each TCP variant: (a) CUBIC, (b) HSTCP, (c) RENO, (d) VEGAS

Figure 2.3(a) shows the link utilization of the queue management schemes for CUBIC. When the buffer size is as small as 1% BDP, Drop-tail performs worst, while SFB gets the highest link utilization. If the buffer size increases up to 100% BDP, CHOKe and SFB get higher link utilization. Figure 2.3(b) shows the link utilization of the queue management schemes for HSTCP; SFB almost always outperforms the other queuing schemes, while RED almost always gets lower link utilization than the others. Figure 2.3(c) shows the link utilization of the queue management schemes for RENO with SACK; Drop-tail performs best in this case, SFB is still better than the other two, and RED almost always gets the lowest link utilization. In Figure 2.3(d), VEGAS shows different link utilization behavior because of its delay-based nature, which depends on the queue size. In general, when the buffer size

becomes larger, all queue management schemes get higher link utilization. In the case of less than 10% BDP, Drop-tail almost always gets the highest link utilization. In the case of more than 10% BDP, the AQM schemes almost always get higher link utilization. The reason is that when the queue size becomes larger, the AQM schemes do not have early drops, because VEGAS keeps the queue size in a limited range, and therefore the link utilization of the AQM schemes improves.
2. Intra-protocol Fairness
Intra-protocol fairness is the fairness among TCP flows of the same kind of TCP. In our evaluation, Jain's fairness index is calculated for intra-protocol fairness among 20 flows with the same TCP variant and the same RTT of 120ms. Figure 2.4 shows the intra-protocol fairness for these 20 flows. In general, CUBIC shows the highest fairness index, in the range of 0.97 to 1. HSTCP is second, with a fairness index starting around 0.94. RENO is third. VEGAS is last, with a fairness index higher than 0.8.
Figure 2.4(a) shows the case for CUBIC: RED and CHOKe have very high intra-protocol fairness, around a 0.99 fairness index. SFB generally shows lower intra-protocol fairness than the other queue management schemes. According to our observation, although SFB has bucket drops to limit the fast flows, SFB always has more tail drops than other queue management schemes in high speed networks because of its more complex queuing mechanism. That is the reason SFB cannot perform as fairly as the other AQM schemes. Figure 2.4(b) shows the result for HSTCP, which is similar to the case of CUBIC. RED and CHOKe still show higher intra-protocol fairness, while Drop-tail and SFB show lower intra-protocol fairness.

FIGURE 2.4. Fairness among 20 flows as a function of buffer size in each TCP variant (RTT = 120ms): (a) 20 CUBIC flows, (b) 20 HSTCP flows, (c) 20 RENO flows, (d) 20 VEGAS flows

In the case of RENO, Figure 2.4(c) shows that RED always gets the highest intra-protocol fairness. Drop-tail is second, and CHOKe is third. The slow instinct of RENO makes the AQM schemes perform similarly to Drop-tail in terms of fairness: whenever RENO flows suffer early drops from the AQM schemes, it takes some time for the flows to recover, which in consequence degrades the fairness of the AQM schemes. SFB still almost always shows the lowest intra-protocol fairness because it has more tail drops than the others. Figure 2.4(d) shows the case for VEGAS. Since VEGAS is a delay-based TCP variant, it keeps the queue size as small as possible. The AQM schemes all get better fairness than Drop-tail. Drop-tail makes relatively large changes to the queue size, and therefore it performs relatively unfairly.

3. RTT Fairness
We measure RTT fairness by Jain's fairness index for 20 competing flows of the same TCP variant but different RTTs. Among these 20 flows, 10 have a fixed RTT, while the other 10 flows have a different RTT. The RTT of the 10 flows from Sender1 is fixed at 120ms, while the RTT of the other 10 from Sender2 is changed to 30ms, 60ms, 120ms, and 240ms respectively. We set the bottleneck buffer to 10% BDP because link utilization of the bottleneck link shows good performance with a 10% BDP buffer in general. Also, in this case queuing delay can be neglected. Figure 2.5(a) shows CUBIC RTT fairness for the four queue management schemes. In our measurement, CUBIC shows very good RTT fairness behavior. Even under 240ms RTT, every queue management scheme shows more than 90% RTT fairness.

FIGURE 2.5. RTT fairness as a function of RTT in each TCP variant: (a) CUBIC, (b) HSTCP, (c) RENO, (d) VEGAS

SFB performs better than the others, while RED gets the least fairness. Figures 2.5(b) and 2.5(c) show HSTCP's and RENO's RTT fairness cases. HSTCP's RTT fairness is better than RENO's for all queue management schemes, and both HSTCP's and RENO's RTT fairness are quite a bit lower than CUBIC's. We can still see that SFB performs best for all the TCP variants under the different RTT scenarios, and when one RTT increases to 240ms, we get low RTT fairness in both cases. Figure 2.5(d) shows VEGAS's RTT fairness. Every queuing scheme gets a fairness index of around 0.8 to 0.9, except Drop-tail, which gets low RTT fairness in the case of 240ms RTT.
4. Queuing Delay
We observe the delay performance by varying the buffer size. Since this measurement is based on a round-trip propagation delay of 120ms, the average RTT will be 120ms plus a queuing delay in the range [0, max queuing delay]. Figure 2.6(a) shows the CUBIC result: Drop-tail always has more queuing delay than the others. SFB is second, with more queuing delay than RED and CHOKe. Figure 2.6(b) shows the result for HSTCP; we can still see that Drop-tail and SFB show more queuing delay than the others, and SFB shows an oscillation and exhibits queuing delays of more than double the propagation delay. Figure 2.6(c) is for RENO: Drop-tail and SFB queuing delays grow faster than the others. RED and CHOKe show nearly the same behavior, and both grow more smoothly with the increase in buffer size compared to Drop-tail and SFB. Figure 2.6(d) shows the results for VEGAS. For all queue management schemes, VEGAS creates almost no queuing delay. In all cases, the average RTT for VEGAS is only 120ms, which is the propagation delay we set. This is because VEGAS itself maintains a very small queue, of only several packets. The results confirm that the delay-based TCP variant maintains a stable and small queue size in 10Gbps high speed networks.

FIGURE 2.6. Delay as a function of buffer size in each TCP variant: (a) CUBIC, (b) HSTCP, (c) RENO, (d) VEGAS

5. Computational Complexity
Figure 2.7 shows the average memory consumption as a function of buffer size on the bottleneck router. We can see that for CUBIC, HSTCP and RENO, the general trend is that as the buffer size increases, the memory consumption increases. When the buffer size is 100% BDP, the memory consumption reaches more than 500MB. Drop-tail generally needs more memory than the AQM schemes. Figure 2.7(d) shows that VEGAS creates almost no additional memory consumption for the queuing mechanisms, because it maintains a very small queue. The CPU usage results reveal less than 10% of total CPU usage across all CPU cores in all of our experiments, and therefore we do not list the detailed CPU usage results here.

FIGURE 2.7. Memory consumption as a function of buffer size in each TCP variant: (a) CUBIC, (b) HSTCP, (c) RENO, (d) VEGAS

2.1.6 Summary
We present the experimental study of the interplay of queue management schemes and high speed TCP variants over a 10Gbps high speed networking environment. TCP-specific performance metrics such as link utilization, fairness, delay, and computational complexity are chosen to compare the impact of the queuing schemes on the performance of TCP-RENO, CUBIC, HSTCP and VEGAS. Our tests reveal that Drop-tail is most suited for TCP-RENO and is observed to be worst for CUBIC and HSTCP. In our experiment scenarios, we observe that at least 10% BDP of buffer size is required for more than 90% link utilization. RED exhibits higher fairness compared to the other QMs for all the TCP variants. SFB is shown to be effective in RTT fairness improvement. TCP VEGAS shows very low queuing delay and memory consumption.

In summary, we observe differences in the performance of queue management schemes for different TCP variants. We hope that even a preliminary understanding of the key factors, when combined with critical performance metrics, can provide a perspective that is easily understood and can serve as a guideline for network designers of 10Gbps high speed networks. Also, the results of the study address the current need for research on the impact of queue management schemes on the performance of high speed TCP variants. It is also desirable to observe the same impacts in a more realistic experimental environment by considering background traffic. For our future work, it is interesting to observe the impact of these queue management schemes in a wide range of different network topologies. In this study, although our focus has been on homogeneous TCP flows, we expect a different behavior in the case of heterogeneous TCP flows. The presented work supports further research on the design and deployment issues of queue management schemes for high speed networks.

2.2 A Study of Fairness among Heterogeneous TCP Variants over 10Gbps High-speed Optical Networks
In this section, we conduct an experimental study of fairness among heterogeneous TCP variants over 10Gbps high-speed optical networks. We choose the most popular TCP variants and present the fairness behavior of these heterogeneous TCP variants.

2.2.1 Overview
High-speed optical networks such as National LambdaRail (NLR) [51], Internet2 [36], the Louisiana Optical Network Initiative (LONI) [48], etc., with their capacity to deliver a data transfer rate in excess of 10Gbps, have been developed to serve the demand of end users. The efficiency and fairness of these high-speed optical networks among their end users have been a concern among researchers. The traditional congestion control mechanism, Additive Increase Multiplicative Decrease (AIMD), has been outstanding

in fulfilling the requirements of efficiency and fairness, and has been the first choice for large networks. However, it is not scalable in high-bandwidth, large-delay networks. Therefore, researchers proposed several TCP variants, such as HighSpeed TCP (HSTCP) [22], Scalable TCP (STCP) [44], FAST TCP (FastTCP) [41], BIC TCP (BIC) [73], CUBIC TCP (CUBIC) [31], etc. Open sourcing of these protocols enables Linux/BSD users to choose their own transport layer protocols. Also, a recent study [77] on the 5000 most popular web servers shows that AIMD, BIC/CUBIC, and HSTCP/CTCP are each used by a substantial share of the web servers, with BIC/CUBIC alone accounting for 44.5%. Active use of these high-speed TCP variants by web servers suggests that our traditional homogeneous network is rapidly evolving into a heterogeneous one. Because every TCP employs its own congestion control mechanism differently, it is hard to predict the behavior of a network where these different TCP variants interact at a bottleneck link. Since these TCP variants are designed for high-speed networks, we believe they are all able to provide high throughput, and we confirm the high link utilization for heterogeneous TCP flows in the evaluation results later in this section. On the other hand, these TCP variants react differently to packet losses and use different mechanisms to quickly adapt to the available bandwidth. Fairness among heterogeneous TCP variants becomes a hard problem, and mainly depends on router parameters such as queue management schemes and buffer size [65]. Most TCP variants have been evaluated for fairness against themselves or against traditional AIMD [29]. However, it is clear that future high-speed optical networks will be heterogeneous in nature, which means they will consist of flows of many different variants of TCP. Therefore, it is critical for us to evaluate fairness behavior among heterogeneous TCP flows in newly emerging high-speed optical networks.

To address fairness, it is important to understand how these TCP variants interact in a high-speed optical network environment. However, due to the high cost and limited availability of high-speed optical networks, it is hard for network researchers or operators to obtain the access needed to evaluate network performance while varying parameters such as the mix of TCP variants or the router configuration. In this work, we evaluate fairness among heterogeneous TCP flows under various network parameters on CRON (a cyberinfrastructure of reconfigurable optical networking environment) [16], an emulation-based, reconfigurable 10Gbps high-speed optical network testbed. Moreover, this study examines fairness issues among heterogeneous TCP variants that could arise in all-optical routers, which are designed with very limited buffer sizes. One of the challenges in deploying high-speed optical networks is making all-optical routers with very small buffers cooperate with the various TCP variants [12, 60, 70]. This study brings these fairness issues to the table and addresses the need for preliminary research on deploying all-optical routers in high-speed optical networks.

Our heterogeneous TCP flows consist of TCP-SACK, HSTCP, and CUBIC flows; these three protocols have a substantial presence in the current Internet [77]. The experimental scenarios presented in this study have traffic characteristics similar to those observed in high-speed optical networks such as Internet2 [37], where the number of high speed flows (long-lived TCP flows, bulk data transfers, etc.) ranges from a few to a few tens. Thus, we first show fairness behavior for the single long-lived TCP flow case, then the case of many long-lived TCP flows, and finally the case with both long-lived and short-lived TCP flows. We also show RTT fairness behavior with and without short-lived TCP flows.

Among router parameters, we choose the queue management schemes Drop-tail, RED (Random Early Detection) [24], CHOKe (CHOose and Keep for responsive flows, CHOose and Kill for unresponsive flows) [53], and AFD (Approximated Fair Dropping) [52]. RED, CHOKe, and AFD are active queue management (AQM) schemes that have long been studied as means of avoiding network congestion and improving fairness; a simplified sketch of their drop logic is given below.
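As a rough illustration of how these AQM schemes differ from Drop-tail, the following sketch models RED's probabilistic early drop and the extra flow-matching drop that CHOKe layers on top of it. It is a simplified model with assumed thresholds and EWMA weight, not the patched Linux queuing disciplines used in our experiments, and it omits details such as RED's drop-spreading counter and idle-time handling.

```python
import random

# Simplified RED/CHOKe admission logic (illustrative only). The thresholds,
# maximum drop probability, and EWMA weight are assumed values, not the
# settings used on the testbed; draining of the queue happens elsewhere.
MIN_TH, MAX_TH, MAX_P, W_Q = 50, 150, 0.1, 0.002

queue = []        # enqueued packets, represented here by their flow id
avg_qlen = 0.0    # EWMA estimate of the queue length

def enqueue(flow_id, use_choke=False):
    """Return True if the arriving packet is admitted, False if dropped."""
    global avg_qlen
    avg_qlen = (1 - W_Q) * avg_qlen + W_Q * len(queue)

    if avg_qlen < MIN_TH:                    # light load: always admit
        queue.append(flow_id)
        return True

    if use_choke and queue:                  # CHOKe: compare with a random victim
        victim = random.randrange(len(queue))
        if queue[victim] == flow_id:         # same flow: drop victim and arrival
            del queue[victim]
            return False

    if avg_qlen >= MAX_TH:                   # heavy congestion: drop the arrival
        return False

    # RED region: drop probability rises linearly from 0 to MAX_P
    p_drop = MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
    if random.random() < p_drop:
        return False
    queue.append(flow_id)
    return True
```

Drop-tail, by contrast, admits every packet until the buffer is physically full, which is why aggressive flows tend to crowd out conservative ones under it; the random flow-matching step is what gives CHOKe (and, with per-flow accounting, AFD) its approximate fairness.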
Furthermore, fairness behavior is presented for buffer sizes ranging from 1% to 100% of the Bandwidth-Delay Product (BDP), where BDP = C × RTT, with C the data rate of the link and RTT the round trip time. Researchers are currently looking to decrease router buffer sizes to the smallest point at which network performance can still be maintained. In [4], the authors suggested that a link with N flows requires a buffer of no more than BDP/√N for long-lived or short-lived TCP flows, and the authors of [20] argued that a buffer of only a few dozen packets can be sufficient. We therefore examine a range of router buffer sizes in our experiments to see the impact of buffer sizing on fairness among heterogeneous high-speed TCP flows; the calculation below gives a sense of the buffer scales involved.
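For concreteness, the back-of-the-envelope calculation below evaluates these buffer-sizing rules for a 10Gbps bottleneck. The 120 ms round-trip time and the flow counts are example assumptions chosen for illustration, not necessarily the values emulated in our experiments.

```python
from math import sqrt

# Back-of-the-envelope buffer sizing for a 10 Gbps bottleneck.
# The RTT, packet size, and flow counts are illustrative assumptions.
LINK_RATE_BPS = 10e9   # C: bottleneck capacity in bits per second
RTT_S = 0.120          # example round-trip time
PKT_BYTES = 9000       # jumbo frames, as enabled on the testbed hosts

bdp_bytes = LINK_RATE_BPS * RTT_S / 8        # BDP = C * RTT
bdp_pkts = bdp_bytes / PKT_BYTES

print(f"BDP = {bdp_bytes / 1e6:.0f} MB  (~{bdp_pkts:,.0f} jumbo packets)")
for n_flows in (1, 10, 100):
    small = bdp_bytes / sqrt(n_flows)        # BDP / sqrt(N) rule from [4]
    print(f"N = {n_flows:>3}: BDP/sqrt(N) = {small / 1e6:6.1f} MB")
```

Even the BDP/√N rule still calls for tens of megabytes of buffering at this scale, far more than all-optical routers with very small buffers can offer, which is why our experiments sweep buffer sizes from 1% of the BDP upward.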
Our findings suggest that fairness becomes poor when heterogeneous TCP flows are mixed at the bottleneck link. AQM schemes such as RED, CHOKe, and AFD can improve fairness to some extent when the router buffer size is more than 10% of the BDP; for buffer sizes below 10% of the BDP, the AQM schemes lose their fairness advantage. Furthermore, multiplexing many long-lived flows together with short-lived flows improves fairness. To the best of our knowledge, this work is the first to present a comprehensive study of the fairness behavior of heterogeneous TCP flows over a 10Gbps high-speed optical network testbed.

2.2.2 Related Works

In [64, 66], the authors showed that bandwidth allocation among heterogeneous flows is coupled with router parameters such as the queue management scheme and the buffer size. Their solution for fairness is source-based, meaning that every source must change its algorithm, and they conducted only a pairwise comparison between FastTCP and TCP-Reno through conventional network simulations. In [52], the authors proposed a queue management scheme called Approximate Fair Dropping (AFD) to achieve reasonably fair bandwidth allocation and evaluated it by mixing heterogeneous TCP flows at the bottleneck. Their work assumed that the router buffer is sufficient; although they considered different shadow buffer sizes b, they overlooked the impact of router buffer sizing on fairness behavior.

The effect of buffer sizing on fairness was investigated in [71], where the authors studied the interrelationship among fairness, small buffer sizes, and desynchronization of long-lived TCP flows. Their analysis, however, is based on the pairing of TCP-Reno with Drop-tail and does not reflect the heterogeneous nature of current high-speed optical networks. In [32], the authors added RED to the picture and evaluated the impact of loss synchronization and buffer sizing on fairness, but only intra-protocol fairness for homogeneous TCP flows was considered. Likewise, the authors of [72] raised the intra-protocol fairness issue for 10Gbps high-speed optical networks but did not consider the influence of router parameters. In [70], the authors investigated bandwidth sharing when heterogeneous traffic multiplexes at an optical router with very small buffers and explored buffer allocation strategies for optical packet switched networks; their study focused only on the interplay between traditional TCP and UDP under different buffer sizes. The authors of [74] presented experimental results on intra-protocol fairness under different router parameters and left the evaluation of inter-protocol fairness for heterogeneous TCP flows as future work. As a continuation of that work, the authors of [76] evaluated fairness among heterogeneous high-speed TCP variants, but results for the AFD queue and for RTT fairness among heterogeneous TCP variants were not presented.

In this study, we evaluate the fairness behavior of heterogeneous TCP flows mixed at the bottleneck link. Different queue management schemes, as well as different router buffer sizes, are considered in our experiments. Moreover, to our knowledge this study is the first to be carried out on a 10Gbps high-speed optical networking testbed.
2.2.3 Experimental Design

1. Testbed Setup

As shown in Figure 2.8, we create a dumbbell topology in CRON in which multiple TCP sessions share a bottleneck link between two routers. Three separate senders, each running a modified Linux kernel, are used to initiate TCP flows. The two routers also run a modified Linux kernel that supports various Linux queuing disciplines, and the delay node runs a modified FreeBSD 8.1 with a 10Gbps-capable version of Dummynet as a software network emulator. The bottleneck is the link between Router1 and Router2, so the bottleneck queue is Router1's output queue. All presented results are averaged over five experiments, and each run lasts 20 minutes. All links in the testbed support 10Gbps data transfer. We perform system tuning for the 10Gbps network environment, including kernel optimizations for Linux and FreeBSD, TCP parameter tuning, NIC driver optimizations, jumbo frames, patches for 10Gbps versions of the RED and CHOKe queuing disciplines in both Linux user space and kernel space, and patches that enlarge the queue size and bandwidth limits supported by Dummynet.

FIGURE 2.8. Experimental Topology
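To give a flavor of the host-side tuning listed above, the sketch below applies a few standard Linux sysctls that raise the socket-buffer ceilings enough to keep a full 10Gbps BDP in flight and that select the congestion control variant on a sender. The sysctl keys are standard Linux parameters, but the specific values, and the use of a small Python wrapper rather than a sysctl.conf file, are illustrative assumptions and not the exact CRON configuration; device-level steps such as enabling 9000-byte jumbo frames are done separately.

```python
import subprocess

# Illustrative host tuning for a 10 Gbps path (requires root; values are
# assumptions, not the exact CRON settings): raise socket-buffer limits so
# one flow can keep a ~150 MB BDP in flight, and pick the congestion
# control variant under test on this sender.
TUNING = {
    "net.core.rmem_max": "268435456",               # 256 MB receive-buffer cap
    "net.core.wmem_max": "268435456",               # 256 MB send-buffer cap
    "net.ipv4.tcp_rmem": "4096 87380 268435456",    # min / default / max
    "net.ipv4.tcp_wmem": "4096 65536 268435456",
    "net.ipv4.tcp_congestion_control": "cubic",     # or reno, highspeed, ...
}

for key, value in TUNING.items():
    subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)
```

In a setup like ours, each sender would typically be pinned to a different variant through the congestion-control knob, which is one straightforward way to generate the heterogeneous flow mixes studied in the following experiments.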
