
UC Santa Cruz Electronic Theses and Dissertations

Title: Re-Thinking the TCP Congestion Control Design in Network Simulators
Author: Hume, Allison
Peer reviewed | Thesis/dissertation

escholarship.org | Powered by the California Digital Library, University of California

UNIVERSITY OF CALIFORNIA SANTA CRUZ

RE-THINKING THE TCP CONGESTION CONTROL DESIGN IN NETWORK SIMULATORS

A thesis submitted in partial satisfaction of the requirements for the degree of

MASTER OF SCIENCE

in

COMPUTER SCIENCE

by

Allison Hume

June 2016

The Thesis of Allison Hume is approved:

Professor Katia Obraczka, Chair
Professor Luca de Alfaro
Professor Carlos Maltzahn

Tyrus Miller
Vice Provost and Dean of Graduate Studies

Copyright © 2016 by Allison Hume

Table of Contents

List of Figures
List of Tables
Abstract
Acknowledgments

1 Introduction
   1.1 Motivation
   1.2 Background
      1.2.1 TCP Congestion Control
      1.2.2 Survey of TCP Congestion Control Algorithms
      1.2.3 History of TCP Design in ns-3

2 Results
   2.1 Motivating Results: TCP Inigo
   2.2 Validating ns-3 Version 3.25
   2.3 Proposed Design
   2.4 Design Validation
      2.4.1 TCP New Reno
      2.4.2 TCP Westwood
   2.5 Implementing a New Algorithm: TCP Inigo
   Discussion
   Related Work
   Conclusions and Future Work

Bibliography

List of Figures

1.1 A 600 second trace of the congestion window (in bytes) of TCP Westwood. Experiment from Gangadhar et al. [10] run on ns-3 release version 3.24 (left) and version 3.25 (right).
1.2 Block diagram showing the implementation of TCP congestion control for ns-3 version 3.24.
1.3 Block diagram showing the implementation of TCP congestion control for ns-3 version 3.25.
2.1 Log of congestion window and slow start threshold (both in Segments) for TCP Inigo, run on Mininet. Plot courtesy of Andrew Shewmaker.
2.2 Log of congestion window and slow start threshold (both in Segments) for TCP Reno, run on Mininet. Plot courtesy of Andrew Shewmaker.
2.3 Log of congestion window and slow start threshold (both in Segments) for TCP Westwood, run on Mininet. Plot courtesy of Andrew Shewmaker.
2.4 Bottleneck queue length (in packets) for TCP Inigo, Reno, and Westwood, run on Mininet. Plot courtesy of Andrew Shewmaker.
2.5 Congestion window and slow start threshold (both in Segments) for TCP Inigo, NewReno, and Westwood, run on NS3.
2.6 Bottleneck queue length (in packets) for TCP Inigo, NewReno, and Westwood, run on NS3.
2.7 Congestion window (in Segments) for TCP Inigo and NewReno, run on NS3, with RTT of 2ms, 4ms, and 8ms, clockwise from top left.
2.8 Congestion window and slow start threshold for New Reno in ns-3.24 (top row) and 3.25 (bottom row), with packet loss rates set to 10^-3 and with fast recovery enabled (left column) and disabled (right column). 40 second trace taken from a 600 second experiment.
2.9 Congestion window and slow start threshold for New Reno in ns-3.24 (top row) and 3.25 (bottom row), with packet loss rates set to 0 (left column) and 10^-3 (right column). Full trace from a 1000 second experiment.
2.10 Congestion window and slow start threshold for New Reno in ns-3.24 (top row) and 3.25 (bottom row), with packet loss rates set to 0 (left column) and 10^-3 (right column). Full trace from a 100 second experiment.

2.11 Proposed design for a modular and extensible TCP congestion control implementation.
2.12 Comparison between simulation results obtained with our design and implementation of New Reno and ns-3.25's New Reno. The left-side graph can be compared with Figure 2.8's lower left graph, and the right-side graph can be compared with Figure 2.10's lower left graph.
2.13 (Duplicate of Figure 1.1): A 600 second trace of the congestion window (in bytes) of TCP Westwood. Experiment from Gangadhar et al. [10] run on ns-3 release version 3.24 (left) and version 3.25 (right).
2.14 A 600 second trace of the congestion window of TCP Westwood. Experiment from Gangadhar et al. [10] run on ns-3.25 after design implementation (left) and after artificially increasing the bandwidth estimate by a factor of 40 (right).
2.15 Proposed design for a modular and extensible TCP congestion control implementation, applied to TCP Inigo, a new experimental TCP congestion control algorithm.

List of Tables

2.1 Summary of experiment parameters
2.2 Summary of Design Changes

Abstract

Re-Thinking the TCP Congestion Control Design in Network Simulators

by

Allison Hume

In contrast to live hardware testbeds, which are evaluated largely on performance, network simulators are primarily used to test and validate new protocols and algorithms. As such, a network simulator should be designed to be modular, extensible, re-usable, and robust. This work examines the design of TCP congestion control in the ns-3 network simulator and presents a new design, not specific to ns-3, that will be more modular and extensible. We also demonstrate that the implementation in the current ns-3 release (version 3.25) is not able to duplicate results presented in previous literature. These findings, along with the improved design, are the main contributions of this paper. To validate our design, we show how its modular approach can be used to implement existing TCP congestion control variants, e.g., TCP New Reno and TCP Westwood, as well as new ones, e.g., TCP Inigo.

Acknowledgments

Thank you to Andrew Shewmaker for the initial research direction, information about TCP refactoring in NS3, and help with interpreting results.

Chapter 1

Introduction

1.1 Motivation

Network simulators are an essential component of the network protocol and application development cycle. They complement live hardware testbeds, as they enable experimentation under a wide variety of networking environments and conditions. They also allow the reproduction of experiments, which is a fundamental principle of scientific methodology. Because network simulators are used primarily for testing and validation, the primary goal of simulators does not need to be performance, as it is in hardware. Instead, the main goal should be to provide a robust environment that can be easily modified and extended so that new protocols can be added to the existing code base with minimal effort. As such, we believe that modularity, extensibility, re-usability, and robustness should be the main focus of network simulator code design.

One active area of research in the networking community is introducing and comparing different TCP congestion control algorithms. Network simulation platforms have played a crucial role in advancing the TCP congestion control state-of-the-art, and ns-3 and its previous incarnation, ns-2, have a long track record as essential tools for TCP congestion control research [1], [11]. ns-3 has been evolving, however, and there has not been enough of an effort to continue to validate and understand the simulator itself.

Figure 1.1: A 600 second trace of the congestion window (in bytes) of TCP Westwood. Experiment from Gangadhar et al. [10] run on ns-3 release version 3.24 (left) and version 3.25 (right).

The work in this paper was motivated by our experience in working on a new variant of TCP congestion control, TCP Inigo [27]. We were planning to use ns-3 as the simulation platform to validate TCP Inigo and to complement experiments run on a test-bed. It became clear, however, that the existing TCP implementation in ns-3, while potentially sufficient for comparing aggregate metrics such as throughput, or for supporting higher level protocols, was not robust enough to support the detailed analysis that is necessary to understand and compare congestion control algorithms.

To illustrate the previous claim, Figure 1.1 shows a trace of the congestion window in bytes (see Section 1.2 for a review of congestion control in TCP) for TCP Westwood [16], using an experiment found in Gangadhar et al. [10]. The left plot shows the experiment run on ns-3 release 3.24, while the plot on the right shows the same experiment run on ns-3 release 3.25. These results show that the implementation of TCP Westwood has changed dramatically between the two releases. How can researchers trust simulators to compare new versions of TCP when established algorithms are not consistent between releases? The work we present in this paper has been motivated by these findings, and by the difficulties encountered in adding a new congestion control algorithm to ns-3.

The main contribution of this work is a new design of the TCP congestion control implementation for network simulators that targets modularity, extensibility, re-usability, and robustness. The design will be presented after a brief background on TCP congestion control and the history of TCP code design in the ns-3 simulator. This work also presents results that call into question the existing TCP implementation in ns-3, lending weight to the argument that a comprehensive design evaluation, as well as a formal validation process, is very much needed.

1.2 Background

1.2.1 TCP Congestion Control

The Transmission Control Protocol (TCP) is probably the most widely used end-to-end Internet protocol, supporting Internet services such as the Web, email, and file transfer. TCP's main goal is to provide to applications reliable, in-order data delivery atop IP's unreliable, best-effort service. To this end, TCP provides error control, flow control, and congestion control, which are implemented based on mechanisms such as acknowledgments (ACKs), re-transmissions, sequence numbers, and timers.

TCP segments include sequence numbers, which are included in ACKs sent by the receiver to signify to the sender that data was received. Re-transmission occurs when an ACK for a certain range of sequence numbers has not been received within a specified time period. Because TCP's control functions use the same mechanisms, their operation is tightly coupled. In this paper, however, we focus our attention primarily on TCP congestion control.

In its early days, TCP congestion control was defined by two algorithms: slow start and congestion avoidance. Slow start is invoked after a TCP connection has been established or after a timeout event. It increases the TCP sender's congestion window cwnd from one maximum segment size (MSS) exponentially until it reaches the slow start threshold ssth. The congestion avoidance algorithm then increases cwnd linearly until a timeout occurs. At that point, slow start is invoked.

TCP congestion control has continuously evolved over the years. Fast retransmit and fast recovery were among the early modifications to the original TCP congestion control mechanism. The fast retransmit algorithm is used to detect and repair loss in order to try to prevent timeouts. It employs incoming duplicate ACKs as an indication of loss and responds by re-transmitting the potentially lost segment right away. Fast recovery is used to adjust cwnd until a non-duplicate ACK arrives. RFC 5681 [3] specifies the basic rules defining these four algorithms, but many variations have been defined and used over the years. TCP variants such as TCP Reno [15], TCP New Reno [14], and TCP Westwood [16] refer to specific implementations of those four algorithms. They potentially include additional components such as Selective Acknowledgments (SACK), which make it possible for the receiver to acknowledge scattered blocks of data, not just a continuous block.
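To make the two phases concrete, the classic per-ACK window update can be sketched as follows (illustrative C++ only, not taken from any particular implementation; the function and variable names are ours):

    #include <algorithm>
    #include <cstdint>

    // Per-ACK window update in the two classic phases. All quantities are
    // in bytes; segSize is one maximum segment size (MSS).
    void OnNewAck(uint64_t& cwnd, uint64_t ssth, uint64_t segSize) {
      if (cwnd < ssth) {
        cwnd += segSize;  // slow start: +1 MSS per ACK, doubling per RTT
      } else {
        // congestion avoidance: roughly +1 MSS per RTT overall
        cwnd += std::max<uint64_t>(1, segSize * segSize / cwnd);
      }
    }

    void OnTimeout(uint64_t& cwnd, uint64_t& ssth, uint64_t segSize) {
      ssth = std::max(cwnd / 2, 2 * segSize);  // halve the threshold
      cwnd = segSize;                          // re-enter slow start at 1 MSS
    }

The variants discussed next differ mainly in how these few lines, and the loss responses around them, are modified.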

Newer variants often build upon older variants. For example, the TCP Westwood [16] congestion control algorithm is a modification of TCP Reno's congestion control that changes only how cwnd and ssth are modified after a congestion event, which is triggered by n duplicate ACKs or a timeout. While TCP Reno halves the congestion window after three duplicate ACKs, Westwood uses a bandwidth estimator to try to adjust the congestion window based on the currently available bandwidth [16]. Both Reno's and Westwood's congestion control algorithms, however, have the same implementation of slow start and congestion avoidance.

1.2.2 Survey of TCP Congestion Control Algorithms

This section gives a summary of some of the more well-known variants of TCP congestion control to show the wide range of algorithms that a simulator may have to support. It should be noted how many of these algorithms are minor modifications of an existing algorithm. This observation gives weight to the argument that a TCP congestion control implementation in a simulator should prioritize modularity and extensibility to best support the research goals of the community.

Early Algorithms

The earliest versions of congestion control, TCP Tahoe and TCP Reno, were named after the BSD releases in which they appeared: 4.3BSD-Tahoe and 4.3BSD-Reno. The main improvements from Tahoe to Reno are described in Jacobson [15]. Both algorithms specify that slow start begins after a timeout, that at the beginning of slow start the congestion window is one Maximum Segment Size (MSS), and that during slow start the congestion window increases by one for each ACK. Tahoe and Reno behave differently after three duplicate ACKs, however. After three duplicate ACKs, Tahoe sets the slow start threshold to half the current congestion window and begins slow start, essentially treating three duplicate ACKs in the same way as a timeout. Reno, instead, includes a separate algorithm for how to behave after three duplicate ACKs. This algorithm is called Fast Recovery; it causes both the slow start threshold and the congestion window to be set to half the current congestion window, and requires all re-transmitted data to be acknowledged before the congestion window is increased again. Both Tahoe and Reno have a Fast Retransmit algorithm, which re-transmits all unacknowledged data after three duplicate ACKs, as opposed to waiting for a timeout.

Variations of TCP Reno

Reno uses an acknowledgment number field that contains a cumulative acknowledgment, indicating that the TCP receiver has received all of the data up to the byte indicated in the field [8]. This achieves poor performance when multiple packets are dropped from a window of data, because Reno waits for a re-transmission timer expiration before initiating re-transmission. TCP NewReno [14] and SACK TCP [17] were both developed to attempt to improve this performance issue encountered in TCP Reno.

Selective Acknowledgement (SACK) [17] is an option in TCP that allows specific segments to be acknowledged, as opposed to cumulative acknowledgments, with which a sender can only learn about a single lost segment per round trip time. Selective acknowledgments can be either positive or negative, meaning that the receiver can acknowledge segments that were or were not received. This prevents the sender from having to choose between preemptively re-transmitting segments that may have been lost, which can waste bandwidth, or waiting to learn of lost segments, which can cause increased latency.

16 with SACK is not necessarily tied with any specific congestion control algorithm, although it was originally presented as a modification of TCP Reno, but is often used on wireless and satellite links, which tend to have higher error rates, to prevent packet loss from being mistakenly interpreted as congestion. TCP NewReno attempts to solve the same performance issue seen in Reno without using SACK. TCP NewReno [14] only differs from TCP Reno in the Fast Recovery specification. In Reno, partial ACKs end Fast Recovery. Instead, NewReno attempts to keep the transmission window full by sending a new packet for each ACK, or partial ACK that is at least one MSS, received during Fast Recovery. NewReno remains in Fast Recovery until all of the data that was outstanding at the beginning of Fast Recovery has been acknowledged. Therefore, when multiple packets are lost from a single window, NewReno re-transmits one lost packet per RTT instead of waiting for a re-transmission timeout. Comparisons of TCP NewReno and TCP with SACK found that NewReno does not perform as well as TCP with SACK when a large number of packets are dropped from a window of data. It is, however, comparable for smaller loss rates [17]. TCP NewReno is one of the most common forms of TCP in use today. TCP Vegas [5] is also very similar to TCP Reno, but is not as widely used as TCP NewReno. Vegas extends Reno s re-transmission mechanism by using the system clock to record the RTT for each segment. Vegas then uses the RTT estimate to re-transmit based on only one duplicate ACK if the RTT for that segment is larger than the timeout value. Vegas also preemptively re-transmits the first or second segment after a re-transmission if the RTT is larger than the timeout value, to avoid waiting for a duplicate ACK. Although 7
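A sketch of that recovery logic, in the spirit of the NewReno RFC (the structure and names below are illustrative, not ns-3's code):

    #include <cstdint>

    // Recovery state: 'recover' is the highest sequence number outstanding
    // when loss was detected.
    struct NewRenoRecovery {
      uint32_t recover = 0;
      bool active = false;
    };

    void OnThirdDupAck(NewRenoRecovery& st, uint32_t highestSent,
                       uint64_t& cwnd, uint64_t& ssth) {
      st.recover = highestSent;
      st.active = true;
      ssth = cwnd / 2;   // halve on three duplicate ACKs
      cwnd = ssth;
      // ... fast retransmit of the first unacknowledged segment ...
    }

    void OnAckDuringRecovery(NewRenoRecovery& st, uint32_t ackedSeq,
                             uint64_t& cwnd, uint64_t ssth) {
      if (!st.active) return;
      if (ackedSeq >= st.recover) {
        st.active = false;  // full ACK: everything outstanding at the start
        cwnd = ssth;        // of recovery is now acknowledged
      } else {
        // Partial ACK: retransmit the next presumed-lost segment and stay
        // in recovery, repairing one loss per RTT without a timeout.
      }
    }

Reno, by contrast, would exit recovery at the first partial ACK and eventually stall on a retransmission timeout.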

TCP Vegas [5] is also very similar to TCP Reno, but is not as widely used as TCP NewReno. Vegas extends Reno's re-transmission mechanism by using the system clock to record the RTT for each segment. Vegas then uses the RTT estimate to re-transmit based on only one duplicate ACK if the RTT for that segment is larger than the timeout value. Vegas also preemptively re-transmits the first or second segment after a re-transmission if the RTT is larger than the timeout value, to avoid waiting for a duplicate ACK. Although Vegas is not widely used, similar RTT estimates have been used in many later algorithms, and Vegas has been seen to be a very smooth version of TCP [5], [11].

High Latency or High BDP Links

The original TCP congestion control algorithms do not perform well on links with very high propagation delays. The growth of the congestion window in all of those algorithms happens as a response to ACK arrivals, the frequency of which is determined by the RTT, and therefore by the propagation delay. Long RTTs reduce the congestion window growth rate and cause throughput degradation, as well as unfair sharing of bandwidth resources in situations with links of mixed propagation delay [6]. TCP Hybla [6] is a version of TCP designed to solve this challenge, referred to as the RTT disparity problem. Hybla requires an RTT estimate similar to the one introduced in Vegas, but the estimate is performed both for the high latency link and for a lower latency link with which the high latency link should be achieving equivalent performance. Congestion window calculations are then scaled by the ratio of those two estimates, so that the congestion window on the high latency link is no longer penalized by the long RTT.
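The scaling just described can be condensed as follows (illustrative C++; the normalized RTT rho and the reference RTT follow the Hybla paper, but the code itself is ours, not ns-3's):

    #include <algorithm>
    #include <cmath>

    // Hybla's normalized RTT: rho = RTT / RTT0, where RTT0 is the reference
    // round-trip time (the Hybla paper suggests 25 ms). cwnd is in segments.
    double HyblaIncrementPerAck(double cwnd, double ssth, double rho) {
      rho = std::max(rho, 1.0);            // never penalize short-RTT flows
      if (cwnd < ssth) {
        return std::pow(2.0, rho) - 1.0;   // slow start
      }
      return rho * rho / cwnd;             // congestion avoidance
    }

For rho = 1, i.e., an RTT equal to the reference, this reduces exactly to the classic per-ACK rules, which is the intended "equivalent performance" property.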

A similar challenge is presented by high bandwidth-delay product (BDP) links. The BDP of a link describes the number of packets in flight when the bandwidth is fully utilized. This number is ideally also the size of the congestion window. Again, because traditional congestion control algorithms grow the congestion window at a rate of one per RTT, it can take too long for the congestion window to reach its maximum size if the maximum size is very large. This is an especially large problem for short-lived flows.

Many congestion control algorithms have been developed in response to this problem, because of how important high BDP links are in modern networking. Some of the most well-known algorithms are BIC-TCP [33], Cubic [13], and Compound [28]. BIC-TCP [33] uses a binary search algorithm to adjust the congestion window size between the maximum window size that has been seen and the last window size where there was no loss for one RTT period (the perceived minimum window size). This allows the congestion window to change quickly when it is far from the true capacity of the network, and slowly when it is close to the available capacity. BIC-TCP has been seen to work well on high BDP links and to be very stable, and it has therefore been used in the Linux kernel [13]. Cubic [13] is a slight modification of BIC-TCP that uses a cubic function to simplify the window adjustment algorithm presented in BIC-TCP. The window growth in Cubic depends only on the time between congestion events and is not dependent on RTT. This allows better RTT-fairness, similar to what is provided by TCP Hybla. Compound TCP [28] is a modification of TCP Vegas in which the congestion avoidance phase tracks a second moving window, the delay-based window. The sending window is then taken as the minimum of the congestion window and the delay-based window. The delay-based window is scalable and can rapidly increase the sending rate when the network is under-utilized. Like BIC-TCP and Cubic, Compound TCP offers both bandwidth scalability and RTT fairness. Comparisons of Cubic and Compound find that Cubic outperforms Compound on wired links with reverse traffic, but that their performance is comparable on wireless links [1].
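Cubic's window curve is compact enough to state directly (a sketch of the window function from the CUBIC paper; the constants are the paper's defaults, and the code is our condensation):

    #include <cmath>

    // CUBIC's window curve: after a loss at window wMax the window is cut
    // to wMax*(1 - beta) and then grows as a cubic of the elapsed time t
    // since the loss, independent of the RTT. c and beta are the paper's
    // default constants.
    double CubicWindow(double t, double wMax, double c = 0.4,
                       double beta = 0.2) {
      double k = std::cbrt(wMax * beta / c);  // time at which W(t) == wMax
      return c * std::pow(t - k, 3.0) + wMax;
    }

Setting t = k recovers W = wMax, which is why growth is fast far from the previous maximum and nearly flat near it, the same qualitative behavior as BIC's binary search.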

Although TCP Westwood [16] was originally developed as a wireless congestion control algorithm, it has also been regarded as a successful algorithm for links with high BDP [13]. Westwood relies on a bandwidth estimator to set the congestion window and slow start threshold to the correct values for the network, rather than simply reducing them at every congestion event. The bandwidth estimate is maintained with every received ACK and is used to set the congestion window and slow start threshold after a congestion event. TCP Westwood+ [12] is a slight modification of Westwood designed to improve the correctness of the bandwidth estimation algorithm in the presence of ACK compression. Comparisons of Westwood(+), NewReno, and Vegas show that Westwood(+) improves utilization of low-congestion wireless links as well as fairness [11], [16].

Data Center Algorithms

Lastly, data centers are one of the most important networking scenarios. Data centers often have to accommodate diverse applications with low latency and high throughput requirements. Traditional TCP protocols, when used in data centers, cause flows with high bandwidth requirements to build up queues and harm the performance of latency-sensitive flows [2]. Data Center TCP (DCTCP) [2] was developed to address this issue. DCTCP uses Explicit Congestion Notification (ECN), an option that allows switches in data centers to set certain bits in packet headers when congestion is observed by the switch. Congestion in this sense is measured by buffer occupancy exceeding a fixed small threshold. The DCTCP source reduces the congestion window according to the fraction of packets that are marked as having encountered congestion in the network. This allows DCTCP to achieve high burst tolerance, low latency, and high throughput without requiring switches to contain large buffers [2].
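The marking-fraction update can be sketched as follows (our condensation of the DCTCP paper's update rule; the variable names are illustrative):

    // DCTCP's window adjustment: f is the fraction of bytes ECN-marked in
    // the last window of data, g a small gain (the paper uses 1/16), and
    // alpha a running estimate of the congestion level.
    void DctcpOnWindowEnd(double& alpha, double f, double g, double& cwnd) {
      alpha = (1.0 - g) * alpha + g * f;  // updated once per window/RTT
      if (f > 0.0) {
        // A fully marked window (alpha = 1) halves cwnd like classic TCP;
        // light marking reduces it only slightly.
        cwnd *= (1.0 - alpha / 2.0);
      }
    }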

Deadline Aware Datacenter TCP (D2TCP) [29] was developed to support a specific, important class of data center applications called Online Data Intensive applications. These applications are real-time, which imposes a 300ms latency constraint. In addition, these applications often use tree-based algorithms, which create bursts of child-to-parent traffic. DCTCP is not aware of deadlines and therefore does not support these applications very well. D2TCP uses ECN feedback in conjunction with deadlines to scale the congestion window. This allows D2TCP to reduce the fraction of missed deadlines relative to DCTCP by 75%. D3 [31] is a full control protocol designed to support data center applications. Like D2TCP, D3 was also designed for the purpose of supporting deadline-sensitive applications. D3, however, is not compatible with legacy TCP, making it unlikely to become a widely adopted protocol.

Incast Congestion Control for TCP (ICTCP) [32] is also a specialized TCP congestion control algorithm, but it supports applications such as MapReduce and Search that have a many-to-one traffic pattern. Traditional versions of TCP do not work well for that many-to-one traffic pattern on high-bandwidth, low latency networks such as data center networks. When many synchronized servers simultaneously send data to one receiver, a type of congestion can occur called incast congestion. The performance of this type of workload is determined by the slowest TCP connection. Previous approaches to mitigating incast congestion have attempted to reduce timeout values, but ICTCP adjusts the TCP receive window proactively, before packet loss occurs. ICTCP is able to achieve almost zero timeouts and high goodput for TCP incast.

Multipath TCP (MPTCP) [25] has also been proposed for use in data centers. MPTCP establishes multiple sub-flows on different paths between the same pair of endpoints for a single TCP connection. This provides many benefits for a data center network, such as improved bandwidth utilization and fairness. The MPTCP congestion control algorithm moves traffic off more congested paths and onto less congested ones, which benefits not just the MPTCP flow, but any competing flows as well.

TCP Inigo [27] was developed to achieve results similar to those achieved by DCTCP, without requiring ECN. ECN is not always available in networking hardware, and it can therefore be problematic to rely on a protocol that assumes that ECN (or any specific hardware support) exists. TCP Inigo is a protocol that does not require any specific hardware support or switch configuration. Instead, Inigo uses RTT measurements in the sender to calculate the congestion ratio and adjust the congestion window accordingly. Preliminary results for TCP Inigo are promising in terms of both network utilization and fairness [27].

1.2.3 History of TCP Design in ns-3

The previous discussion helps to illuminate some of the challenges that arise when attempting to implement TCP congestion control in a way that is both efficient and extensible. In particular, extensibility in a simulation platform is quite critical, as new versions of TCP congestion control are frequently proposed and added to network simulators for validation and testing. If the TCP implementation in Linux favors efficiency over extensibility, that may be acceptable, since new TCP variants are not added to the Linux kernel very often. In a network simulator, however, lack of extensibility goes directly against the goals of the software.

Figure 1.2: Block diagram showing the implementation of TCP congestion control for ns-3 version 3.24. [Diagram: a minimal shared code base, tcp-socket-base, beneath per-variant modules (tcp-tahoe, tcp-newreno, tcp-westwood(+)), each of which defines all TCP functionality.]

This section discusses the recent history of TCP congestion control design choices in the ns-3 network simulator [20], which motivate our new design as described in Section 2.3. We should point out that even though our work was motivated by ns-3's TCP congestion control implementation, our design is applicable to network simulation platforms in general.

As shown in Figure 1.2, the second most recent ns-3 release, version 3.24, is organized as follows: (1) it has a module called tcp-socket-base that contains the code base shared by all versions of TCP implemented in the simulator, and (2) there is one main module for each TCP variant containing most of the code required for that specific congestion control algorithm [21]. The tcp-socket-base module contains very little code. It has a function named ReceivedAck, called upon the receipt of a new acknowledgment, which serves mostly to call the NewAck and DupAck functions, which are defined for each TCP variant in its respective module.
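Schematically, that layout amounts to the following (a deliberately simplified C++ caricature; the real ns-3 classes carry far more state and logic):

    // Caricature of the ns-3.24 layout. Nearly all behavior lives in the
    // per-variant hooks, so each variant re-implements logic that is not
    // congestion control at all.
    class TcpSocketBase {
    public:
      void ReceivedAck(bool isDuplicate) {
        // Thin dispatcher: almost no shared logic here.
        if (isDuplicate) { DupAck(); } else { NewAck(); }
      }
    protected:
      virtual void NewAck() = 0;  // per variant: ACK bookkeeping, timers,
      virtual void DupAck() = 0;  // window updates, fast retransmit, ...
    };

    class TcpNewRenoSocket : public TcpSocketBase {
    protected:
      void NewAck() override { /* duplicated bookkeeping + Reno logic */ }
      void DupAck() override { /* duplicated bookkeeping + Reno logic */ }
    };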

NewAck and DupAck do not just contain congestion control functionality, but include all TCP functionality that should be invoked upon receipt of a new or duplicate acknowledgment, respectively. Consequently, although the code is organized by congestion control algorithm, a significant portion of the code in each module is in no way specific to the corresponding congestion control algorithm. The main downsides of this design are the significant code duplication and the high overhead required to maintain and extend it, e.g., to add a new congestion control algorithm.

Recently, ns-3's TCP implementation has undergone a refactoring effort. As shown in Figure 1.3, in the refactored code the majority of the TCP code is shared, and only specific congestion control functions need to be implemented to define a new version of congestion control [24]. The result of this effort is the recently released ns-3 version 3.25 [22].

Figure 1.3: Block diagram showing the implementation of TCP congestion control for ns-3 version 3.25. [Diagram: a refactored shared code base including more common functionality (all ACK handling, error/flow control, fast retransmit and fast recovery) beneath tcp-congestion-ops, which defines PktsAcked(), GetSsThresh(), and IncreaseWindow() and also includes TcpNewReno; variant modules such as tcp-new-version, tcp-hybla, and tcp-westwood(+) build on it.]

In this version, the ReceivedAck function in tcp-socket-base contains much more shared functionality and calls more specific, well-defined congestion control functions such as PktsAcked, GetSsThresh, and IncreaseWindow.
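Abbreviated, the refactored contract looks like this (signatures simplified from ns-3.25's congestion-ops base class; treat it as a schematic rather than the exact header):

    #include <cstdint>

    // Schematic of the ns-3.25 interface (simplified; in the real base
    // class each hook also receives the connection state).
    class TcpCongestionOps {
    public:
      virtual ~TcpCongestionOps() {}
      // Threshold to fall back to after a loss event.
      virtual uint32_t GetSsThresh(/* state, bytes in flight */) = 0;
      // Window growth on ACKs: slow start and congestion avoidance.
      virtual void IncreaseWindow(/* state, segments acked */) = 0;
      // Notification hook, e.g. for RTT-based estimators like Westwood's.
      virtual void PktsAcked(/* state, packet count, RTT sample */) {}
    };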

These functions are defined in the congestion-ops module. TCP congestion control variants are each implemented in a separate module using the congestion-ops module as their base class. The implementation for TCP New Reno, however, exists in congestion-ops because New Reno is the default TCP congestion control algorithm used by ns-3.

The refactored design in ns-3 version 3.25 is more extensible than the previous design, although great care must be taken to ensure that all shared code is truly generic. As will be discussed further in Section 2.2, that is not currently the case. Another criticism of this design is that, since TCP New Reno is used as the default implementation, the implementation of other congestion control algorithms is not always completely clear. It is not obvious, for example, that the TCP Westwood implementation actually uses the SlowStart function defined for New Reno, since SlowStart is not one of the three functions specified in the base class. We argue that, although the current design is moving in the correct direction, additional improvements are needed.

Chapter 2

Results

2.1 Motivating Results: TCP Inigo

The original goal of this research was to analyze a new implementation of TCP congestion control, TCP Inigo [27]. The starting point for this work was to validate previous results, achieved using the Mininet simulator, with the more powerful NS3 simulator. It was hoped that, after previous results were validated, more interesting experiments would be run using the additional functionality provided by NS3. However, the validation step exposed significant issues with the NS3 simulator, changing the direction of the research to focus on understanding, correcting, and validating the simulator itself. This section summarizes progress made in simulating TCP Inigo and shows the results that caused the research direction to shift.

Figures 2.1, 2.2, 2.3, and 2.4 show the results obtained on Mininet, against which we hoped to validate our implementation of TCP Inigo on NS3.

Figure 2.1: Log of congestion window and slow start threshold (both in Segments) for TCP Inigo, run on Mininet. Plot courtesy of Andrew Shewmaker.

Figures 2.1, 2.2, and 2.3 show the size of the congestion window and slow start threshold in segments for TCP Inigo, Reno, and Westwood, respectively. It should be noted that, because Mininet uses the Linux TCP implementation, the TCP Reno implementation is equivalent to the NS3 TCP NewReno implementation. Figure 2.4 shows the size of the bottleneck queue for all three TCP versions. These results were obtained on the Mininet simulator by Andrew Shewmaker, the primary author of TCP Inigo. The experiment from which these graphs were produced involved one source and one sink node connected by a gateway.

Figure 2.2: Log of congestion window and slow start threshold (both in Segments) for TCP Reno, run on Mininet. Plot courtesy of Andrew Shewmaker.

The access link bandwidth was 500Mbps, the bottleneck bandwidth was 100Mbps, and the bottleneck link had a queue size of 100 packets. The total Round Trip Time (RTT) was 4ms, the Maximum Transmission Unit (MTU) was 1500 Bytes, and the experiment was run for 100s. It is clear from examining these graphs that the congestion window computations for TCP Reno and Westwood are not realistic in Mininet. The experiment parameters should cause congestion in the network more frequently than the one congestion window drop that is seen. From these plots it is clear that the bottleneck bandwidth is not being enforced.

Figure 2.3: Log of congestion window and slow start threshold (both in Segments) for TCP Westwood, run on Mininet. Plot courtesy of Andrew Shewmaker.

Since Mininet is such a simple network simulator, it is not altogether surprising that there would be approximations and inaccuracies. However, it is clear that comparing Inigo to Reno and Westwood on Mininet would not be a productive comparison. It was hoped that NS3 would be more robust, due to its complexity, age, and heavy usage in the networking community.

Figure 2.5 shows the congestion window and slow start threshold in segments for TCP Inigo, Reno, and Westwood for the same experiment run on NS3.

Figure 2.4: Bottleneck queue length (in packets) for TCP Inigo, Reno, and Westwood, run on Mininet. Plot courtesy of Andrew Shewmaker.

The version of NS3 used was the development repository between releases 3.24 and 3.25, with TCP refactoring patches applied. This code base is very similar to NS3 release 3.25. These plots can be compared to Figures 2.1, 2.2, and 2.3, respectively (N.B. the plots for NS3 show segment count while the plots for Mininet show the log of segment count), and it is clear that the two simulators produce extraordinarily different results. Similarly, Figure 2.6 shows the queue lengths for the first two seconds of the experiment for all three TCP versions on NS3. The first two seconds are shown for clarity; the same pattern repeats for the rest of the experiment. Comparing these plots to Figure 2.4 also demonstrates extreme differences between the output of Mininet and NS3.

Figure 2.5: Congestion window and slow start threshold (both in Segments) for TCP Inigo, NewReno, and Westwood, run on NS3.

While attempting to adjust the Mininet and NS3 experiments to try to isolate the sources of the differences, it was noted that the NS3 results were surprisingly reactive to certain parameter settings. Figure 2.7 shows three plots of the congestion window for TCP Inigo and TCP NewReno. These plots show the same experiment described previously, only with three different RTT values: 2ms, 4ms, and 8ms. At 8ms it is clear that something causes the experiment to no longer create congestion, although the traffic was set up to fill the network, independent of RTT. A very similar phenomenon was also observed when changing the MTU from 1500 Bytes to 64 kilobytes.

Figure 2.6: Bottleneck queue length (in packets) for TCP Inigo, NewReno, and Westwood, run on NS3.

These results brought us to the conclusion that it was impossible to validate our implementation of TCP Inigo in NS3, or to use NS3 to make any comparisons between TCP Inigo and existing TCP congestion control algorithms, with the behavior of the simulator as it was. We therefore began a process of attempting to validate and correct the NS3 TCP implementation. We used previously published results containing experiments run on hardware, in NS2, or in earlier versions of NS3 as baselines to understand the behavior of the current NS3 release. The results of this validation attempt, as well as our proposals for an improved TCP congestion control implementation for simulators, are contained in the remainder of this work.

Figure 2.7: Congestion window (in Segments) for TCP Inigo and NewReno, run on NS3, with RTT of 2ms, 4ms, and 8ms, clockwise from top left.

2.2 Validating ns-3 Version 3.25

Before presenting our proposed design, we further motivate the need for a new design by presenting experiments we conducted to validate the current state of the simulator. To this end, we tried to reproduce results reported in the literature. In particular, the experiments presented here are taken from previous work by Gangadhar et al. [10], Casoni et al. [7], and Grieco et al. [11]. The simulation parameters and the values we used for these experiments are summarized in Table 2.1, which also includes the reference to the work where the experiment was originally carried out (Original Source column) and the network simulator used to run the original experiments (Simulator Used column).

A "?" in that column indicates that the simulator version was not specified in the original paper, so we used the publication date and the simulator's version release schedule to infer the version used in the original experiments. The tcp-variants-comparison row in Table 2.1 refers to a version of the experiment developed by Gangadhar et al. that is provided as part of the ns-3.25 release (in the examples/tcp directory).

All experiments involve a source and a sink connected by a gateway with point-to-point links, and traffic sent from the source to the sink using the BulkSendHelper traffic generator, which continuously sends traffic to fill the network. The link with the lower bandwidth is referred to as the Bottleneck Link, and the other as the Access Link. Their bandwidth and propagation delay are shown in the fourth and fifth columns of Table 2.1. Where applicable, packet losses are simulated using a uniform distribution defined by the rate shown in the Loss Rate column of Table 2.1.
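For concreteness, the skeleton of such an experiment looks roughly as follows (a sketch only: the helpers and attributes follow ns-3's stock TCP examples, the numeric values are placeholders for Table 2.1's entries, and stack installation, IP addressing, the sink application, and tracing are omitted):

    #include "ns3/core-module.h"
    #include "ns3/network-module.h"
    #include "ns3/internet-module.h"
    #include "ns3/point-to-point-module.h"
    #include "ns3/applications-module.h"

    using namespace ns3;

    // Dumbbell skeleton: source -- access link -- gateway -- bottleneck -- sink.
    void BuildExperiment() {
      NodeContainer nodes;
      nodes.Create(3);  // source, gateway, sink

      PointToPointHelper access, bottleneck;
      access.SetDeviceAttribute("DataRate", StringValue("10Mbps"));
      access.SetChannelAttribute("Delay", StringValue("0.1ms"));
      bottleneck.SetDeviceAttribute("DataRate", StringValue("2Mbps"));
      bottleneck.SetChannelAttribute("Delay", StringValue("45ms"));

      NetDeviceContainer accessDevs = access.Install(nodes.Get(0), nodes.Get(1));
      NetDeviceContainer bnDevs = bottleneck.Install(nodes.Get(1), nodes.Get(2));

      // Uniform packet losses at the stated rate (e.g. 10^-3) at the sink side.
      Ptr<RateErrorModel> em = CreateObject<RateErrorModel>();
      em->SetAttribute("ErrorUnit", StringValue("ERROR_UNIT_PACKET"));
      em->SetAttribute("ErrorRate", DoubleValue(0.001));
      bnDevs.Get(1)->SetAttribute("ReceiveErrorModel", PointerValue(em));

      // BulkSendHelper continuously fills the network (MaxBytes = 0: no limit).
      BulkSendHelper bulk("ns3::TcpSocketFactory",
                          InetSocketAddress(Ipv4Address("10.1.2.2"), 9));
      bulk.SetAttribute("MaxBytes", UintegerValue(0));
      ApplicationContainer app = bulk.Install(nodes.Get(0));
      app.Start(Seconds(0.0));
      app.Stop(Seconds(600.0));
    }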

Figure 2.8: Congestion window and slow start threshold for New Reno in ns-3.24 (top row) and 3.25 (bottom row), with packet loss rates set to 10^-3 and with fast recovery enabled (left column) and disabled (right column). 40 second trace taken from a 600 second experiment.

Different experiments show different degrees of variation between the current ns-3.25 release and original results obtained from either previous versions of ns-3, ns-2, or hardware. Figure 2.8 shows an experiment originally from Casoni et al. [7], in which results from ns-3 version 3.22's TCP New Reno were compared to Linux. Their results showed cwnd and ssth from ns-3 both very close in scale and pattern to the values obtained from Linux; however, cwnd showed spikes that were attributed to fast retransmit. The top left plot in Figure 2.8 shows the same experiment repeated on ns-3.24, with results very similar to those in the original publication. We found, however, that the spikes are due to fast recovery: the increase of cwnd by one segment for every duplicate ACK after the three duplicate ACKs that initiate fast recovery (Section 3.2 of RFC 2582 [9]). The top right plot in Figure 2.8 shows the same experiment with that portion of fast recovery commented out. This plot more closely matches the Linux results presented in Casoni et al. [7]. For New Reno, this difference is not necessarily an error: the ns-3 code very closely follows the RFC; the Linux TCP implementation, however, handles fast recovery differently. The Linux implementation does not follow RFC 2582 [9] exactly and does not adjust cwnd for every ACK during fast recovery [7], [26].

The important point here is that, in order to disable fast recovery, its code had to be tracked down and commented out within the shared code base. The current design of TCP in ns-3 allows no option for changing or disabling the fast retransmit and fast recovery algorithms. In different scenarios you may want different implementations of the algorithm or, as in this case, simply to turn one off. Commenting out sections of code is a very error-prone method of achieving that result.

For TCP Westwood, however, this design does result in an error. The Westwood algorithm specifies that the bandwidth estimation should be applied not just after a timeout, but after n duplicate ACKs as well [16]. The New Reno fast recovery algorithm specifies that after three duplicate ACKs the congestion window is halved. Since this New Reno code exists in the shared code base, after three duplicate ACKs the congestion window is halved for TCP Westwood as well, and the bandwidth estimation does not occur as it should, according to the Westwood specification. Having one fast retransmit and fast recovery implementation in the shared code base is therefore not a viable design for an extensible TCP implementation, as it is not guaranteed that any two congestion control algorithms will require the same behavior for those components.
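For contrast, a sketch of what Westwood's specification calls for on n duplicate ACKs (our reading of [16]; the names are illustrative):

    // Westwood's intended response to n duplicate ACKs: set the threshold
    // from the bandwidth estimate instead of blindly halving. bwEstimate
    // is the ACK-derived rate (bytes/s), rttMin the minimum observed RTT
    // (s), segSize the MSS (bytes); the windows are in segments.
    void WestwoodOnDupAcks(double bwEstimate, double rttMin, double segSize,
                           double& cwnd, double& ssth) {
      ssth = (bwEstimate * rttMin) / segSize;  // one BDP's worth of segments
      if (cwnd > ssth) {
        cwnd = ssth;
      }
      // In ns-3.25 this logic is preempted: the shared New Reno fast
      // recovery halves cwnd first, which is exactly the bug described above.
    }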

Figure 2.9: Congestion window and slow start threshold for New Reno in ns-3.24 (top row) and 3.25 (bottom row), with packet loss rates set to 0 (left column) and 10^-3 (right column). Full trace from a 1000 second experiment.

The bottom two plots in Figure 2.8 show the same set of experiments, but run on ns-3.25, with the refactored TCP. It is important to reiterate that, although the intention of this refactoring was to make the TCP congestion control design extensible, it was still necessary to manually find and comment out the relevant lines of code corresponding to fast recovery in the shared code base. Although there is not a drastic difference between the results for the two versions of ns-3, the new version does respond differently to the removal of the fast recovery algorithm. Looking closely at the bottom right plot in Figure 2.8, it is clear that slow start is initiated much more frequently than in the other plots. In fact, it is quite concerning to see slow start initiated almost 20 times in a span of 40 seconds. This issue could be due to a number of factors, but the current design of the TCP congestion control implementation makes it difficult to isolate the source of the change. Because of the extent to which the shared code has been increased, it is hard to track the relationship between the modular component and the shared code base.

Figure 2.10: Congestion window and slow start threshold for New Reno in ns-3.24 (top row) and 3.25 (bottom row), with packet loss rates set to 0 (left column) and 10^-3 (right column). Full trace from a 100 second experiment.

In this case, it is easy to validate the implementation of the SlowStart function, but difficult to see exactly under what conditions SlowStart will be executed. This example further motivates the need for a comprehensive design evaluation.

Figure 2.9 shows the results of another experiment comparing ns-3 versions 3.24 and 3.25. This experiment was first presented in Grieco et al. [11] and seen also in Abdeljaouad et al. [1]. It originally included one forward flow with 10 back-flows turning on and off but, due to the results discovered while attempting duplication, the experiment is presented here without the back-flows.

Again, the top row shows results for version 3.24 and the bottom row for version 3.25. For this experiment, the left column is performed with a packet loss rate of 0 and the right column with a rate of 10^-3. The differences in the plots with an error rate of zero (left column) are quite significant. The top left plot very closely matches the results previously obtained by Grieco et al. [11] and Abdeljaouad et al. [1], which used the ns-2 network simulator [19]. It appears, looking at the bottom left plot, that in ns-3 version 3.25, without packet loss, congestion events only occur once the congestion window has grown to a much larger size than in ns-3 version 3.24. Since the experiments have the exact same setup, it is clear that some parameter is not enforced properly in version 3.25. This is not visible in the equivalent experiments with a non-zero packet loss rate (right column) because even a very small loss rate creates enough perceived congestion events to mask this issue. The difference in these plots speaks to an insufficient validation effort between release versions of ns-3. If such simple experiments are not repeatable, how can the implementation be used to support further research?

Figure 2.10 shows behavior similar to the plots in Figure 2.9. The plots in this figure were generated using the tcp-variants-comparison experiment that exists in the examples/tcp directory of every ns-3 release (based on the experiment presented in Gangadhar et al. [10]). Again, the top row shows the experiments run on release 3.24, the bottom on release 3.25; the left column shows packet loss rates of 0, and the right column of 10^-3. Again, the difference in the shape of cwnd for a 0 loss rate shows a lack of repeatability, even for an example provided with each ns-3 release.

2.3 Proposed Design

The examples presented in the previous section motivate the need to re-design the TCP congestion control implementation for network simulators in order to increase modularity, and consequently robustness and extensibility. Our design is based on the following goals:

Extensibility: The primary goal of network simulators is to allow researchers to test and validate new protocols or variants/extensions of existing ones. TCP is a prime example: as one of the most widely used Internet protocols, TCP has attracted considerable attention from the networking research and practitioner communities. In particular, the TCP congestion control algorithms have evolved significantly over the years, with new versions often appearing as minor modifications of existing algorithms. Consequently, it must be straightforward to add new TCP congestion control variants to a network simulator's existing TCP code base in order to facilitate testing and validation.

Modularity: TCP congestion control has evolved significantly over time and currently has many variants, each of which consists of a set of algorithms. Among those algorithms, some may be unique and some shared with other variants. As such, modular design and implementation are critical, especially in network simulation platforms. Modularity will allow different congestion control variants to share implementations of common functions that have been well tested. It will also facilitate experimenting with and testing new congestion control versions, since modules that implement different functions can be easily enabled/disabled. In addition, it will greatly reduce the overhead necessary for implementing new congestion control mechanisms, as they are often small modifications to one component of existing algorithms.

Figure 2.11: Proposed design for a modular and extensible TCP congestion control implementation. [Diagram: the refactored shared code base keeps common functionality (all ACK handling, error/flow control) but no longer includes fast retransmit and fast recovery; tcp-congestion-ops defines PktsAcked(), GetSsThresh(), and IncreaseWindow(), and adds FastRetransmit() and FastRecovery(); per-function component modules (tcp-getssthresh, tcp-pktsacked, tcp-increasewindow, tcp-fastretransmit, tcp-fastrecovery) feed per-algorithm modules (tcp-new-version, tcp-hybla, tcp-westwood(+)).]

While our design can be generalized to any network simulator, we used ns-3 to showcase and test our implementation. As such, we will continue to use ns-3's TCP congestion control implementation as the basis for our design changes. The main modification to the shared code found in tcp-socket-base is that it no longer includes the fast retransmit and fast recovery algorithms. The tcp-congestion-ops module remains the base class for all congestion control implementations, with the following changes: (1) it no longer contains the implementation of New Reno, which is now implemented in a separate, dedicated module, and (2) it now includes FastRetransmit and FastRecovery function signatures. Each function in the base class will have a corresponding module in which all its variant implementations will reside. Lastly, each specific congestion control algorithm will have a module that defines the selection of its components.
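In code, the proposed decomposition would look roughly like this (a sketch; the hook names extend ns-3.25's congestion-ops interface as described above, and the Westwood component selection shown in the comment is illustrative):

    #include <cstdint>

    // Sketch of the proposed base class. It gains explicit loss-response
    // hooks, so fast retransmit / fast recovery are no longer buried in
    // the shared tcp-socket-base code.
    class TcpCongestionOps {
    public:
      virtual ~TcpCongestionOps() {}
      virtual uint32_t GetSsThresh(/* state, bytes in flight */) = 0;
      virtual void IncreaseWindow(/* state, segments acked */) = 0;
      virtual void PktsAcked(/* state, count, rtt */) {}
      virtual void FastRetransmit(/* state */) = 0;  // newly variant-specific
      virtual void FastRecovery(/* state */) = 0;    // newly variant-specific
    };

    // A concrete variant then becomes a selection of components: e.g., a
    // Westwood module could combine New Reno's window growth (from
    // tcp-increasewindow) with a bandwidth-estimate GetSsThresh and a
    // Westwood-specific FastRecovery (from tcp-getssthresh and
    // tcp-fastrecovery), re-implementing nothing.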

Figure 2.11 summarizes the proposed design. We believe that this design achieves modularity and extensibility, ensuring implementations are clear and easy to understand. Specifically, having a module for each algorithm, not just for each congestion control variant, allows developers to easily find and follow implementations that already exist for one specific piece of functionality, without having to root through all congestion control modules and the shared code base. Table 2.2 summarizes the changes made between ns-3 releases 3.24 and 3.25, as well as our design changes. Each column in the table corresponds to a specific module and the rows describe the state of that module for the three designs. The table clearly shows the proposed design's increased modularity. The extensibility of the proposed design can be inferred from the ease with which a new algorithm module can be created, given an existing set of component modules. Next, Section 2.4 validates our design against ns-3.25, and Section 2.5 showcases the extensibility of the design by using it as a basis to implement a new TCP congestion control variant.

2.4 Design Validation

2.4.1 TCP New Reno

Ideally we would like to fully validate our proposed design. Unfortunately, as demonstrated previously, the starting point for our design is not entirely correct. In this section we show that we are able to obtain the same results for TCP New Reno using our design and implementation when compared to the results obtained with ns-3.25.

Figure 2.12: Comparison between simulation results obtained with our design and implementation of New Reno and ns-3.25's New Reno. The left-side graph can be compared with Figure 2.8's lower left graph, and the right-side graph can be compared with Figure 2.10's lower left graph.

The second part of this section shows the same results for TCP Westwood and contains a deep dive into an example illustrating our attempts at achieving consistent results between ns-3.24 and 3.25. The goal is to show how high a barrier poor validation between releases can be for researchers trying to validate their results. Finally, in Section 2.5, we return to our original motivation, i.e., implementing a new TCP congestion control algorithm in ns-3, to show how our proposed design simplifies that effort.

The plot on the left side of Figure 2.12 shows results obtained by running the experiment by Casoni et al. [7] using our implementation of TCP New Reno. We observe that it matches quite well Figure 2.8's lower left graph, which shows results from the same experiment run using ns-3.25. The plot on the right side of Figure 2.12 shows results from the tcp-variants-comparison experiment available as part of the ns-3.25 release. It matches quite well Figure 2.10's lower left graph, which was obtained by running the same experiment using ns-3.25's TCP NewReno.

Figure 2.13: (Duplicate of Figure 1.1): A 600 second trace of the congestion window (in bytes) of TCP Westwood. Experiment from Gangadhar et al. [10] run on ns-3 release version 3.24 (left) and version 3.25 (right).

Figure 2.14: A 600 second trace of the congestion window of TCP Westwood. Experiment from Gangadhar et al. [10] run on ns-3.25 after design implementation (left) and after artificially increasing the bandwidth estimate by a factor of 40 (right).

2.4.2 TCP Westwood

For TCP Westwood, we used the experiment from Gangadhar et al. [10]. Figure 2.13 (a duplication of Figure 1.1, presented again for clarity) shows the original outcome of this experiment, in which we observe considerable differences between ns-3 releases 3.24 and 3.25. As discussed in Section 2.2, one clear bug with the TCP Westwood implementation


More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

15-744: Computer Networking TCP

15-744: Computer Networking TCP 15-744: Computer Networking TCP Congestion Control Congestion Control Assigned Reading [Jacobson and Karels] Congestion Avoidance and Control [TFRC] Equation-Based Congestion Control for Unicast Applications

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

CS 5520/ECE 5590NA: Network Architecture I Spring Lecture 13: UDP and TCP

CS 5520/ECE 5590NA: Network Architecture I Spring Lecture 13: UDP and TCP CS 5520/ECE 5590NA: Network Architecture I Spring 2008 Lecture 13: UDP and TCP Most recent lectures discussed mechanisms to make better use of the IP address space, Internet control messages, and layering

More information

ENSC 835: COMMUNICATION NETWORKS

ENSC 835: COMMUNICATION NETWORKS ENSC 835: COMMUNICATION NETWORKS Evaluation of TCP congestion control mechanisms using OPNET simulator Spring 2008 FINAL PROJECT REPORT LAXMI SUBEDI http://www.sfu.ca/~lsa38/project.html lsa38@cs.sfu.ca

More information

Investigating the Use of Synchronized Clocks in TCP Congestion Control

Investigating the Use of Synchronized Clocks in TCP Congestion Control Investigating the Use of Synchronized Clocks in TCP Congestion Control Michele Weigle Dissertation Defense May 14, 2003 Advisor: Kevin Jeffay Research Question Can the use of exact timing information improve

More information

CSE 123A Computer Networks

CSE 123A Computer Networks CSE 123A Computer Networks Winter 2005 Lecture 14 Congestion Control Some images courtesy David Wetherall Animations by Nick McKeown and Guido Appenzeller The bad news and the good news The bad news: new

More information

Lecture 14: Congestion Control"

Lecture 14: Congestion Control Lecture 14: Congestion Control" CSE 222A: Computer Communication Networks George Porter Thanks: Amin Vahdat, Dina Katabi and Alex C. Snoeren Lecture 14 Overview" TCP congestion control review Dukkipati

More information

F-RTO: An Enhanced Recovery Algorithm for TCP Retransmission Timeouts

F-RTO: An Enhanced Recovery Algorithm for TCP Retransmission Timeouts F-RTO: An Enhanced Recovery Algorithm for TCP Retransmission Timeouts Pasi Sarolahti Nokia Research Center pasi.sarolahti@nokia.com Markku Kojo, Kimmo Raatikainen University of Helsinki Department of Computer

More information

IJSRD - International Journal for Scientific Research & Development Vol. 2, Issue 03, 2014 ISSN (online):

IJSRD - International Journal for Scientific Research & Development Vol. 2, Issue 03, 2014 ISSN (online): IJSRD - International Journal for Scientific Research & Development Vol. 2, Issue 03, 2014 ISSN (online): 2321-0613 Performance Evaluation of TCP in the Presence of in Heterogeneous Networks by using Network

More information

Congestion control in TCP

Congestion control in TCP Congestion control in TCP If the transport entities on many machines send too many packets into the network too quickly, the network will become congested, with performance degraded as packets are delayed

More information

CMSC 417. Computer Networks Prof. Ashok K Agrawala Ashok Agrawala. October 25, 2018

CMSC 417. Computer Networks Prof. Ashok K Agrawala Ashok Agrawala. October 25, 2018 CMSC 417 Computer Networks Prof. Ashok K Agrawala 2018 Ashok Agrawala Message, Segment, Packet, and Frame host host HTTP HTTP message HTTP TCP TCP segment TCP router router IP IP packet IP IP packet IP

More information

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks Hybrid Control and Switched Systems Lecture #17 Hybrid Systems Modeling of Communication Networks João P. Hespanha University of California at Santa Barbara Motivation Why model network traffic? to validate

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Transport Protocols for Data Center Communication. Evisa Tsolakou Supervisor: Prof. Jörg Ott Advisor: Lect. Pasi Sarolahti

Transport Protocols for Data Center Communication. Evisa Tsolakou Supervisor: Prof. Jörg Ott Advisor: Lect. Pasi Sarolahti Transport Protocols for Data Center Communication Evisa Tsolakou Supervisor: Prof. Jörg Ott Advisor: Lect. Pasi Sarolahti Contents Motivation and Objectives Methodology Data Centers and Data Center Networks

More information

Studying Fairness of TCP Variants and UDP Traffic

Studying Fairness of TCP Variants and UDP Traffic Studying Fairness of TCP Variants and UDP Traffic Election Reddy B.Krishna Chaitanya Problem Definition: To study the fairness of TCP variants and UDP, when sharing a common link. To do so we conduct various

More information

MEASURING PERFORMANCE OF VARIANTS OF TCP CONGESTION CONTROL PROTOCOLS

MEASURING PERFORMANCE OF VARIANTS OF TCP CONGESTION CONTROL PROTOCOLS MEASURING PERFORMANCE OF VARIANTS OF TCP CONGESTION CONTROL PROTOCOLS Harjinder Kaur CSE, GZSCCET, Dabwali Road, Bathinda, Punjab, India, sidhuharryab@gmail.com Gurpreet Singh Abstract CSE, GZSCCET, Dabwali

More information

Operating Systems and Networks. Network Lecture 10: Congestion Control. Adrian Perrig Network Security Group ETH Zürich

Operating Systems and Networks. Network Lecture 10: Congestion Control. Adrian Perrig Network Security Group ETH Zürich Operating Systems and Networks Network Lecture 10: Congestion Control Adrian Perrig Network Security Group ETH Zürich Where we are in the Course More fun in the Transport Layer! The mystery of congestion

More information

Where we are in the Course. Topic. Nature of Congestion. Nature of Congestion (3) Nature of Congestion (2) Operating Systems and Networks

Where we are in the Course. Topic. Nature of Congestion. Nature of Congestion (3) Nature of Congestion (2) Operating Systems and Networks Operating Systems and Networks Network Lecture 0: Congestion Control Adrian Perrig Network Security Group ETH Zürich Where we are in the Course More fun in the Transport Layer! The mystery of congestion

More information

Advanced Computer Networks

Advanced Computer Networks Advanced Computer Networks Congestion control in TCP Contents Principles TCP congestion control states Congestion Fast Recovery TCP friendly applications Prof. Andrzej Duda duda@imag.fr http://duda.imag.fr

More information

Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015

Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015 Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015 I d like to complete our exploration of TCP by taking a close look at the topic of congestion control in TCP. To prepare for

More information

TCP Congestion Control in Wired and Wireless networks

TCP Congestion Control in Wired and Wireless networks TCP Congestion Control in Wired and Wireless networks Mohamadreza Najiminaini (mna28@cs.sfu.ca) Term Project ENSC 835 Spring 2008 Supervised by Dr. Ljiljana Trajkovic School of Engineering and Science

More information

Chapter 13 TRANSPORT. Mobile Computing Winter 2005 / Overview. TCP Overview. TCP slow-start. Motivation Simple analysis Various TCP mechanisms

Chapter 13 TRANSPORT. Mobile Computing Winter 2005 / Overview. TCP Overview. TCP slow-start. Motivation Simple analysis Various TCP mechanisms Overview Chapter 13 TRANSPORT Motivation Simple analysis Various TCP mechanisms Distributed Computing Group Mobile Computing Winter 2005 / 2006 Distributed Computing Group MOBILE COMPUTING R. Wattenhofer

More information

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data?

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data? Congestion Control End Hosts CSE 51 Lecture 7, Spring. David Wetherall Today s question How fast should the sender transmit data? Not tooslow Not toofast Just right Should not be faster than the receiver

More information

Reasons not to Parallelize TCP Connections for Fast Long-Distance Networks

Reasons not to Parallelize TCP Connections for Fast Long-Distance Networks Reasons not to Parallelize TCP Connections for Fast Long-Distance Networks Zongsheng Zhang Go Hasegawa Masayuki Murata Osaka University Contents Introduction Analysis of parallel TCP mechanism Numerical

More information

TCP based Receiver Assistant Congestion Control

TCP based Receiver Assistant Congestion Control International Conference on Multidisciplinary Research & Practice P a g e 219 TCP based Receiver Assistant Congestion Control Hardik K. Molia Master of Computer Engineering, Department of Computer Engineering

More information

TCP over Wireless. Protocols and Networks Hadassah College Spring 2018 Wireless Dr. Martin Land 1

TCP over Wireless. Protocols and Networks Hadassah College Spring 2018 Wireless Dr. Martin Land 1 TCP over Wireless Protocols and Networks Hadassah College Spring 218 Wireless Dr. Martin Land 1 Classic TCP-Reno Ideal operation in-flight segments = cwnd (send cwnd without stopping) Cumulative ACK for

More information

Outline. CS5984 Mobile Computing

Outline. CS5984 Mobile Computing CS5984 Mobile Computing Dr. Ayman Abdel-Hamid Computer Science Department Virginia Tech Outline Review Transmission Control Protocol (TCP) Based on Behrouz Forouzan, Data Communications and Networking,

More information

TCP FTAT (Fast Transmit Adaptive Transmission): a New End-To-End Congestion Control Algorithm

TCP FTAT (Fast Transmit Adaptive Transmission): a New End-To-End Congestion Control Algorithm Cleveland State University EngagedScholarship@CSU ETD Archive 2014 TCP FTAT (Fast Transmit Adaptive Transmission): a New End-To-End Congestion Control Algorithm Mohammed Ahmed Melegy Mohammed Afifi Cleveland

More information

TCP Incast problem Existing proposals

TCP Incast problem Existing proposals TCP Incast problem & Existing proposals Outline The TCP Incast problem Existing proposals to TCP Incast deadline-agnostic Deadline-Aware Datacenter TCP deadline-aware Picasso Art is TLA 1. Deadline = 250ms

More information

Robust TCP Congestion Recovery

Robust TCP Congestion Recovery Robust TCP Congestion Recovery Haining Wang Kang G. Shin Computer Science Department EECS Department College of William and Mary The University of Michigan Williamsburg, VA 23187 Ann Arbor, MI 48109 hnw@cs.wm.edu

More information

CS4700/CS5700 Fundamentals of Computer Networks

CS4700/CS5700 Fundamentals of Computer Networks CS4700/CS5700 Fundamentals of Computer Networks Lecture 15: Congestion Control Slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica, Hui Zhang Alan Mislove amislove at ccs.neu.edu

More information

TCP Congestion Control

TCP Congestion Control TCP Congestion Control What is Congestion The number of packets transmitted on the network is greater than the capacity of the network Causes router buffers (finite size) to fill up packets start getting

More information

TCP Congestion Control

TCP Congestion Control What is Congestion TCP Congestion Control The number of packets transmitted on the network is greater than the capacity of the network Causes router buffers (finite size) to fill up packets start getting

More information

Outline Computer Networking. TCP slow start. TCP modeling. TCP details AIMD. Congestion Avoidance. Lecture 18 TCP Performance Peter Steenkiste

Outline Computer Networking. TCP slow start. TCP modeling. TCP details AIMD. Congestion Avoidance. Lecture 18 TCP Performance Peter Steenkiste Outline 15-441 Computer Networking Lecture 18 TCP Performance Peter Steenkiste Fall 2010 www.cs.cmu.edu/~prs/15-441-f10 TCP congestion avoidance TCP slow start TCP modeling TCP details 2 AIMD Distributed,

More information

Lixia Zhang M. I. T. Laboratory for Computer Science December 1985

Lixia Zhang M. I. T. Laboratory for Computer Science December 1985 Network Working Group Request for Comments: 969 David D. Clark Mark L. Lambert Lixia Zhang M. I. T. Laboratory for Computer Science December 1985 1. STATUS OF THIS MEMO This RFC suggests a proposed protocol

More information

UNIT IV -- TRANSPORT LAYER

UNIT IV -- TRANSPORT LAYER UNIT IV -- TRANSPORT LAYER TABLE OF CONTENTS 4.1. Transport layer. 02 4.2. Reliable delivery service. 03 4.3. Congestion control. 05 4.4. Connection establishment.. 07 4.5. Flow control 09 4.6. Transmission

More information

Flow and Congestion Control (Hosts)

Flow and Congestion Control (Hosts) Flow and Congestion Control (Hosts) 14-740: Fundamentals of Computer Networks Bill Nace Material from Computer Networking: A Top Down Approach, 6 th edition. J.F. Kurose and K.W. Ross traceroute Flow Control

More information

CS Networks and Distributed Systems. Lecture 10: Congestion Control

CS Networks and Distributed Systems. Lecture 10: Congestion Control CS 3700 Networks and Distributed Systems Lecture 10: Congestion Control Revised 2/9/2014 Transport Layer 2 Application Presentation Session Transport Network Data Link Physical Function:! Demultiplexing

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 1 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Contents. CIS 632 / EEC 687 Mobile Computing. TCP in Fixed Networks. Prof. Chansu Yu

Contents. CIS 632 / EEC 687 Mobile Computing. TCP in Fixed Networks. Prof. Chansu Yu CIS 632 / EEC 687 Mobile Computing TCP in Fixed Networks Prof. Chansu Yu Contents Physical layer issues Communication frequency Signal propagation Modulation and Demodulation Channel access issues Multiple

More information

Transport layer issues

Transport layer issues Transport layer issues Dmitrij Lagutin, dlagutin@cc.hut.fi T-79.5401 Special Course in Mobility Management: Ad hoc networks, 28.3.2007 Contents Issues in designing a transport layer protocol for ad hoc

More information

Chapter III: Transport Layer

Chapter III: Transport Layer Chapter III: Transport Layer UG3 Computer Communications & Networks (COMN) Mahesh Marina mahesh@ed.ac.uk Slides thanks to Myungjin Lee and copyright of Kurose and Ross Principles of congestion control

More information

TCP Congestion Control

TCP Congestion Control TCP Congestion Control Lecture material taken from Computer Networks A Systems Approach, Third Ed.,Peterson and Davie, Morgan Kaufmann, 2003. Computer Networks: TCP Congestion Control 1 TCP Congestion

More information

Networked Systems and Services, Fall 2018 Chapter 3

Networked Systems and Services, Fall 2018 Chapter 3 Networked Systems and Services, Fall 2018 Chapter 3 Jussi Kangasharju Markku Kojo Lea Kutvonen 4. Transport Layer Reliability with TCP Transmission Control Protocol (TCP) RFC 793 + more than hundred other

More information

Congestion Control in Datacenters. Ahmed Saeed

Congestion Control in Datacenters. Ahmed Saeed Congestion Control in Datacenters Ahmed Saeed What is a Datacenter? Tens of thousands of machines in the same building (or adjacent buildings) Hundreds of switches connecting all machines What is a Datacenter?

More information

CS3600 SYSTEMS AND NETWORKS

CS3600 SYSTEMS AND NETWORKS CS3600 SYSTEMS AND NETWORKS NORTHEASTERN UNIVERSITY Lecture 24: Congestion Control Prof. Alan Mislove (amislove@ccs.neu.edu) Slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica,

More information

Networked Systems and Services, Fall 2017 Reliability with TCP

Networked Systems and Services, Fall 2017 Reliability with TCP Networked Systems and Services, Fall 2017 Reliability with TCP Jussi Kangasharju Markku Kojo Lea Kutvonen 4. Transmission Control Protocol (TCP) RFC 793 + more than hundred other RFCs TCP Loss Recovery

More information

Advanced Congestion Control (Hosts)

Advanced Congestion Control (Hosts) Advanced Congestion Control (Hosts) 14-740: Fundamentals of Computer Networks Bill Nace Material from Computer Networking: A Top Down Approach, 5 th edition. J.F. Kurose and K.W. Ross Congestion Control

More information

PERFORMANCE COMPARISON OF THE DIFFERENT STREAMS IN A TCP BOTTLENECK LINK IN THE PRESENCE OF BACKGROUND TRAFFIC IN A DATA CENTER

PERFORMANCE COMPARISON OF THE DIFFERENT STREAMS IN A TCP BOTTLENECK LINK IN THE PRESENCE OF BACKGROUND TRAFFIC IN A DATA CENTER PERFORMANCE COMPARISON OF THE DIFFERENT STREAMS IN A TCP BOTTLENECK LINK IN THE PRESENCE OF BACKGROUND TRAFFIC IN A DATA CENTER Vilma Tomço, 1 Aleksandër Xhuvani 2 Abstract: The purpose of this work is

More information

ISSN: Index Terms Wireless networks, non - congestion events, packet reordering, spurious timeouts, reduce retransmissions.

ISSN: Index Terms Wireless networks, non - congestion events, packet reordering, spurious timeouts, reduce retransmissions. ISSN:2320-0790 A New TCP Algorithm to reduce the number of retransmissions in Wireless Networks A Beulah, R Nita Marie Ann Assistant Professsor, SSN College of Engineering, Chennai PG Scholar, SSN College

More information

Chapter 3 outline. 3.5 Connection-oriented transport: TCP. 3.6 Principles of congestion control 3.7 TCP congestion control

Chapter 3 outline. 3.5 Connection-oriented transport: TCP. 3.6 Principles of congestion control 3.7 TCP congestion control Chapter 3 outline 3.1 Transport-layer services 3.2 Multiplexing and demultiplexing 3.3 Connectionless transport: UDP 3.4 Principles of reliable data transfer 3.5 Connection-oriented transport: TCP segment

More information

Cross-layer TCP Performance Analysis in IEEE Vehicular Environments

Cross-layer TCP Performance Analysis in IEEE Vehicular Environments 24 Telfor Journal, Vol. 6, No. 1, 214. Cross-layer TCP Performance Analysis in IEEE 82.11 Vehicular Environments Toni Janevski, Senior Member, IEEE, and Ivan Petrov 1 Abstract In this paper we provide

More information

SIMPLE MODEL FOR TRANSMISSION CONTROL PROTOCOL (TCP) Irma Aslanishvili, Tariel Khvedelidze

SIMPLE MODEL FOR TRANSMISSION CONTROL PROTOCOL (TCP) Irma Aslanishvili, Tariel Khvedelidze 80 SIMPLE MODEL FOR TRANSMISSION CONTROL PROTOCOL (TCP) Irma Aslanishvili, Tariel Khvedelidze Abstract: Ad hoc Networks are complex distributed systems that consist of wireless mobile or static nodes that

More information

Computer Networking

Computer Networking 15-441 Computer Networking Lecture 17 TCP Performance & Future Eric Anderson Fall 2013 www.cs.cmu.edu/~prs/15-441-f13 Outline TCP modeling TCP details 2 TCP Performance Can TCP saturate a link? Congestion

More information

cs/ee 143 Communication Networks

cs/ee 143 Communication Networks cs/ee 143 Communication Networks Chapter 4 Transport Text: Walrand & Parakh, 2010 Steven Low CMS, EE, Caltech Recap: Internet overview Some basic mechanisms n Packet switching n Addressing n Routing o

More information

TCP OVER AD HOC NETWORK

TCP OVER AD HOC NETWORK TCP OVER AD HOC NETWORK Special course on data communications and networks Zahed Iqbal (ziqbal@cc.hut.fi) Agenda Introduction Versions of TCP TCP in wireless network TCP in Ad Hoc network Conclusion References

More information

Computer Networks Spring 2017 Homework 2 Due by 3/2/2017, 10:30am

Computer Networks Spring 2017 Homework 2 Due by 3/2/2017, 10:30am 15-744 Computer Networks Spring 2017 Homework 2 Due by 3/2/2017, 10:30am (please submit through e-mail to zhuoc@cs.cmu.edu and srini@cs.cmu.edu) Name: A Congestion Control 1. At time t, a TCP connection

More information

Chapter II. Protocols for High Speed Networks. 2.1 Need for alternative Protocols

Chapter II. Protocols for High Speed Networks. 2.1 Need for alternative Protocols Chapter II Protocols for High Speed Networks 2.1 Need for alternative Protocols As the conventional TCP suffers from poor performance on high bandwidth delay product links [47] meant for supporting transmission

More information

Communication Networks

Communication Networks Communication Networks Spring 2018 Laurent Vanbever nsg.ee.ethz.ch ETH Zürich (D-ITET) April 30 2018 Materials inspired from Scott Shenker & Jennifer Rexford Last week on Communication Networks We started

More information

Improving the Robustness of TCP to Non-Congestion Events

Improving the Robustness of TCP to Non-Congestion Events Improving the Robustness of TCP to Non-Congestion Events Presented by : Sally Floyd floyd@acm.org For the Authors: Sumitha Bhandarkar A. L. Narasimha Reddy {sumitha,reddy}@ee.tamu.edu Problem Statement

More information

CS519: Computer Networks. Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control

CS519: Computer Networks. Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control : Computer Networks Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control TCP performance We ve seen how TCP the protocol works Sequencing, receive window, connection setup and teardown And

More information

A Survey on Quality of Service and Congestion Control

A Survey on Quality of Service and Congestion Control A Survey on Quality of Service and Congestion Control Ashima Amity University Noida, U.P, India batra_ashima@yahoo.co.in Sanjeev Thakur Amity University Noida, U.P, India sthakur.ascs@amity.edu Abhishek

More information

Transport layer. UDP: User Datagram Protocol [RFC 768] Review principles: Instantiation in the Internet UDP TCP

Transport layer. UDP: User Datagram Protocol [RFC 768] Review principles: Instantiation in the Internet UDP TCP Transport layer Review principles: Reliable data transfer Flow control Congestion control Instantiation in the Internet UDP TCP 1 UDP: User Datagram Protocol [RFC 768] No frills, bare bones Internet transport

More information

Prasanthi Sreekumari, 1 Sang-Hwa Chung, 2 Meejeong Lee, 1 and Won-Suk Kim Introduction

Prasanthi Sreekumari, 1 Sang-Hwa Chung, 2 Meejeong Lee, 1 and Won-Suk Kim Introduction International Journal of Distributed Sensor Networks Volume 213, Article ID 59252, 16 pages http://dx.doi.org/1.1155/213/59252 Research Article : Detection of Fast Retransmission Losses Using TCP Timestamp

More information

Overview. TCP & router queuing Computer Networking. TCP details. Workloads. TCP Performance. TCP Performance. Lecture 10 TCP & Routers

Overview. TCP & router queuing Computer Networking. TCP details. Workloads. TCP Performance. TCP Performance. Lecture 10 TCP & Routers Overview 15-441 Computer Networking TCP & router queuing Lecture 10 TCP & Routers TCP details Workloads Lecture 10: 09-30-2002 2 TCP Performance TCP Performance Can TCP saturate a link? Congestion control

More information

Documents. Configuration. Important Dependent Parameters (Approximate) Version 2.3 (Wed, Dec 1, 2010, 1225 hours)

Documents. Configuration. Important Dependent Parameters (Approximate) Version 2.3 (Wed, Dec 1, 2010, 1225 hours) 1 of 7 12/2/2010 11:31 AM Version 2.3 (Wed, Dec 1, 2010, 1225 hours) Notation And Abbreviations preliminaries TCP Experiment 2 TCP Experiment 1 Remarks How To Design A TCP Experiment KB (KiloBytes = 1,000

More information

Mobile Communications Chapter 9: Mobile Transport Layer

Mobile Communications Chapter 9: Mobile Transport Layer Prof. Dr.-Ing Jochen H. Schiller Inst. of Computer Science Freie Universität Berlin Germany Mobile Communications Chapter 9: Mobile Transport Layer Motivation, TCP-mechanisms Classical approaches (Indirect

More information

Transport layer. Review principles: Instantiation in the Internet UDP TCP. Reliable data transfer Flow control Congestion control

Transport layer. Review principles: Instantiation in the Internet UDP TCP. Reliable data transfer Flow control Congestion control Transport layer Review principles: Reliable data transfer Flow control Congestion control Instantiation in the Internet UDP TCP 1 UDP: User Datagram Protocol [RFC 768] No frills, bare bones Internet transport

More information

CSE 461. TCP and network congestion

CSE 461. TCP and network congestion CSE 461 TCP and network congestion This Lecture Focus How should senders pace themselves to avoid stressing the network? Topics Application Presentation Session Transport Network congestion collapse Data

More information

Principles of congestion control

Principles of congestion control Principles of congestion control Congestion: Informally: too many sources sending too much data too fast for network to handle Different from flow control! Manifestations: Lost packets (buffer overflow

More information

CS Transport. Outline. Window Flow Control. Window Flow Control

CS Transport. Outline. Window Flow Control. Window Flow Control CS 54 Outline indow Flow Control (Very brief) Review of TCP TCP throughput modeling TCP variants/enhancements Transport Dr. Chan Mun Choon School of Computing, National University of Singapore Oct 6, 005

More information