One More Bit Is Enough


Yong Xia*, Lakshminarayanan Subramanian+, Ion Stoica+, Shivkumar Kalyanaraman*
*ECSE Department, Rensselaer Polytechnic Institute, {xiay@alum, shivkuma@ecse}.rpi.edu
+EECS Department, University of California, Berkeley, {lakme, istoica}@cs.berkeley.edu

ABSTRACT

Achieving efficient and fair bandwidth allocation while minimizing packet loss in high bandwidth-delay product networks has long been a daunting challenge. Existing end-to-end congestion control (e.g., TCP) and traditional congestion notification schemes (e.g., TCP+AQM/ECN) have significant limitations in achieving this goal. While the recently proposed XCP protocol addresses this challenge, XCP requires multiple bits to encode the congestion-related information exchanged between routers and end-hosts. Unfortunately, there is no space in the IP header for these bits, and solving this problem involves a non-trivial and time-consuming standardization process.

In this paper, we design and implement a simple, low-complexity protocol, called Variable-structure congestion Control Protocol (VCP), that leverages only the existing two ECN bits for network congestion feedback, and yet achieves performance comparable to XCP, i.e., high utilization, low persistent queue length, negligible packet loss rate, and reasonable fairness. On the downside, VCP converges significantly more slowly to a fair allocation than XCP. We evaluate the performance of VCP using extensive ns2 simulations over a wide range of network scenarios. To gain insight into the behavior of VCP, we analyze a simple fluid model, and prove a global stability result for the case of a single bottleneck link shared by flows with identical round-trip times.

CATEGORIES AND SUBJECT DESCRIPTORS

C.2.6 [Communication Networks]: Internetworking

GENERAL TERMS

Algorithms, Design, Experimentation, Performance.

KEYWORDS

Congestion Control, TCP, AQM, ECN, XCP.

1. INTRODUCTION

The Additive-Increase-Multiplicative-Decrease (AIMD) [1] congestion control algorithm employed by TCP [24] is known to be ill-suited for high Bandwidth-Delay Product (BDP) networks. With rapid advances in the deployment of very high bandwidth links in the Internet, the need for a viable replacement of TCP in such environments has become increasingly important.

[Permission notice: Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SIGCOMM'05, August 21-26, 2005, Philadelphia, Pennsylvania, USA. Copyright 2005 ACM...$5.00.]

Several research efforts have proposed different approaches for this problem, each with their own strengths and limitations. These can be broadly classified into two categories: end-to-end and network-feedback based approaches. Pure end-to-end congestion control schemes such as HighSpeed TCP [16], FAST [3] and BIC [65, 57], although being attractive solutions for the short term (due to a lower deployment barrier), may not be suitable as long-term solutions. Indeed, in high BDP networks, using loss and/or delay as the only congestion signal(s) poses fundamental limitations on achieving high utilization and fairness while maintaining low bottleneck queue length and minimizing the congestion-induced packet drop rate.
HighSpeed TCP illustrates the limitations of loss-based approaches on high bandwidth optical links with very low bit-error rates [16]. Similarly, Bullot and Cottrell show that delay-based approaches are highly sensitive to minor delay variations [7], a common case in today's Internet. To address some of the limitations of end-to-end congestion control schemes, many researchers have proposed the use of explicit network feedback. However, while traditional congestion notification feedback schemes such as the TCP+AQM/ECN proposals [17, 2, 41, 55] are successful in reducing the loss rate and the queue size in the network, they still fall short of achieving high utilization in high BDP networks [23, 47, 34]. XCP [34] addresses this problem by having routers estimate the fair rate and send this rate back to the senders. Congestion control schemes that use explicit rate feedback have also been proposed in the context of the ATM Available Bit Rate (ABR) service [39, 9, 32, 26, 33]. However, these schemes are hard to deploy in today's Internet as they require a non-trivial number of bits to encode the rate, bits which are not available in the IP header.

In this paper, we show that it is possible to approximate XCP's performance in high BDP networks by leveraging only the two ECN bits (already present in the IP header) to encode the congestion feedback. The crux of our algorithm, called Variable-structure congestion Control Protocol (VCP), is to dynamically adapt the congestion control policy as a function of the level of congestion in the network. With VCP, each router computes a load factor [26], and uses this factor to classify the level of congestion into three regions: low-load, high-load and overload [28].

The router encodes the level of congestion in the two ECN bits. As with ECN, the receiver relays this congestion information to the sender via acknowledgement (ACK) packets. Based on the load region reported by the network, the sender uses one of the following policies: Multiplicative Increase (MI) in the low-load region, Additive Increase (AI) in the high-load region, and Multiplicative Decrease (MD) in the overload region. By using MI in the low-load region, flows can exponentially ramp up their bandwidth to improve network utilization. Once high utilization is attained, AIMD provides long-term fairness among the competing flows.

Using extensive packet-level ns2 [5] simulations that cover a wide range of network scenarios, we show that VCP can approximate the performance of XCP by achieving high utilization, low persistent queue length, negligible packet drop rate and reasonable fairness. One limitation of VCP (as is the case for other end-host based approaches including TCP and TCP+AQM) is that it converges significantly more slowly to a fair allocation than XCP.

To better understand VCP's behavior, we use a simple fluid model that approximates the behavior of VCP. For the case of a single bottleneck link shared by flows with identical round-trip delays, we prove that the model asymptotically achieves global stability independent of the link capacity, the feedback delay and the number of flows. For more general multiple-bottleneck topologies, we show that the equilibrium rate allocation of this model is max-min fair [4]. While this model may not accurately reflect VCP's dynamics, it does reinforce the stability and fairness properties that we observe in our simulations and provides a good theoretical grounding for VCP.

From a practical point of view, VCP has two advantages. First, VCP does not require any modifications to the IP header since it can reuse the two ECN bits in a way that is compatible with the ECN proposal [55]. Second, it is a simple protocol with low algorithmic complexity. The complexity of VCP's end-host algorithm is similar to that of TCP. The router algorithm maintains no per-flow state, and it has very low computation complexity. We believe that these benefits largely offset VCP's limitation of having a much slower fairness convergence speed than XCP.

The rest of the paper is organized as follows. In Section 2, we describe the guidelines that motivate the design of VCP, and in Section 3, we provide a detailed description of VCP. In Section 4, we evaluate the performance of VCP using extensive simulations. In Section 5, we use a fluid model that approximates VCP's behavior and characterize the stability, fairness and convergence properties of this model (detailed proofs are presented in a technical report [64]). Section 6 addresses concerns on the stability of VCP under heterogeneous delays and the influence of switching between MI, AI and MD on efficiency and fairness. We review related work in Section 7 and summarize our findings in Section 8.

2. FOUNDATIONS

In this section, we first review why XCP scales to high BDP networks while TCP+AQM does not. Then, we present two guidelines that form the basis of the VCP design.

2.1 Why XCP Outperforms TCP+AQM

There are two main reasons why TCP does not scale to high BDP networks. First, packet loss is a binary congestion signal that conveys no information about the degree of congestion.
Second, for stability reasons, relying only on packet loss for congestion indication requires TCP to use a conservative window-increment policy and an aggressive window-decrement policy [24, 34]. In high BDP networks, every loss event forces a TCP flow to perform an MD, followed by the slow convergence of the AI algorithm to reach high utilization. Since the time for each individual AIMD epoch is proportional to the per-flow BDP, TCP flows remain in low-utilization regions for prolonged periods of time, thereby resulting in poor link utilization. Using AQM/ECN in conjunction with TCP does not solve this problem since the (one-bit) ECN feedback, similar to a packet loss, is not indicative of the degree of congestion either.

XCP addresses this problem by precisely measuring the fair share of a flow at a router and providing explicit rate feedback to end-hosts. One noteworthy aspect of XCP is the decoupling of efficiency control and fairness control at each router. XCP uses MIMD to control the flow aggregate and converge exponentially fast to any available bandwidth, and uses AIMD to fairly allocate the bandwidth among competing flows. XCP, however, requires multiple bits in the packet header to carry bandwidth allocation information (Δcwnd) from network routers to end-hosts, and congestion window (cwnd) and Round-Trip Time (rtt) information from the end-hosts to the network routers.

2.2 Design Guidelines for VCP

The main goal of our work is to develop a simple congestion control mechanism that can scale to high BDP networks. By simple we mean an AQM-style approach where routers merely provide feedback on the level of network congestion, and end-hosts perform congestion control actions using this feedback. Furthermore, to maintain compatibility with the existing IP header format, we restrict ourselves to using only two bits to encode the congestion information. To address these challenges, our solution builds around two design guidelines.

#1, Decouple efficiency control and fairness control. Like XCP, VCP decouples efficiency and fairness control. However, unlike XCP, where routers run the efficiency and fairness control algorithms and then explicitly communicate the rate to end-hosts, VCP routers compute only a congestion level, and end-hosts run one of two algorithms as a function of that congestion level. More precisely, VCP classifies the network utilization into different utilization regions [28] and determines the controller that is suitable for a given region. Efficiency and fairness have different levels of relative importance in different utilization regions. When network utilization is low, the goal of VCP is to improve efficiency more than fairness. On the other hand, when utilization is high, VCP accords higher priority to fairness than efficiency. By decoupling these two issues, end-hosts have only a single objective in each region and thus need to apply only one congestion response. For example, one such choice of congestion response, which we use in VCP, is to perform MI in low-utilization regions for improving efficiency, and to apply AIMD in high-utilization regions for achieving fairness. The goal then is to switch between these two congestion responses depending on the level of network utilization.

#2, Use the link load factor as the congestion signal. XCP uses spare bandwidth (the difference between capacity and demand) as a measure of the degree of congestion.

In VCP, we use the load factor as the congestion signal, i.e., the relative ratio of demand and capacity [26]. While the load factor conveys less information than spare bandwidth, the fact that the load factor is a scale-free parameter allows us to encode it using a small number of bits without much loss of information. In this paper, we show that a two-bit encoding of the load factor is sufficient to approximate XCP's performance. Note that in comparison to binary congestion signals such as loss and one-bit ECN, the load factor conveys more information about the degree of network congestion.

2.3 A Simple Illustration

In this subsection, we give a high-level description of VCP using a simple example. A detailed description of VCP is presented in Section 3. Periodically, each router measures the load factor for its output links and classifies the load factor into three utilization regions: low-load, high-load or overload. Each router encodes the utilization region in the two ECN bits in the IP header of each data packet. In turn, the receiver sends back this information to the sender via the ACK packets. Depending on this congestion information, the sender applies different congestion responses. If the router signals low-load, the sender increases its sending rate using MI; if the router signals high-load, the sender increases its sending rate using AI; otherwise, if the router signals overload, the sender reduces its sending rate using MD. The core of the VCP protocol is summarized by the following pseudo code.

1) Each router periodically estimates a load factor, and encodes this load factor into the data packets' IP headers. This information is then sent back by the receiver to the sender via ACK packets.

2) Based on the load factor it receives, each sender performs one of the following control algorithms:
   2.1) For low-load, perform MI;
   2.2) For high-load, perform AI;
   2.3) For overload, perform MD.

Figure 1: The throughput dynamics of two flows with the same RTT (8ms). They share one bottleneck with the capacity bouncing between 1Mbps and 2Mbps. This simple example unveils VCP's potential to quickly track changes in available bandwidth (with load-factor guided MIMD) and thereafter achieve a fair bandwidth allocation (with AIMD).

Figure 1 shows the throughput dynamics of two flows sharing one bottleneck link. Clearly, VCP is successful in tracking the bandwidth changes by using MIMD, and in achieving a fair allocation when the second flow arrives, by using AIMD. The Internet, however, is much more complex than this simplified example across many dimensions: the link capacities and router buffer sizes are highly heterogeneous, the RTTs of flows may differ significantly, and the number of flows is unknown and changes over time. We next describe the details of the VCP protocol, which is able to handle more realistic environments.

3. THE VCP PROTOCOL

In this section, we provide a detailed description of VCP. We begin by presenting three key issues that need to be addressed in the design of VCP. Then, we describe how we address each of these issues in turn.

3.1 Key Design Issues

To make VCP a practical approach for Internet-like environments with significant heterogeneity in link capacities, end-to-end RTTs, router buffer sizes and variable traffic characteristics, we need to address the following three key issues.
Load factor transition point: VCP separates the network load condition into three regions: low-load, high-load and overload. The load factor transition point in VCP represents the boundary between the low-load and high-load regions, which is also the demarcation between applying MI and AI algorithms. The choice of the transition point represents a trade-off between achieving high link utilization and responsiveness to congestion. Achieving high network utilization requires a high value for the transition point. But this choice negatively impacts responsiveness to congestion, which in turn affects the convergence time to achieve fairness. Additionally, given that Internet traffic is inherently bursty [44][54], we require a reliable estimation algorithm of the load factor. We discuss the issue of load factor estimation in Section 3.2. Setting of congestion control parameters: Using MI for congestion control is often fraught with the danger of instability due to its large variations over short time scales. To maintain stability and avoid large queues at routers, we need to make sure that the aggregate rate of the VCP flows using MI does not overshoot the link capacity. Similarly, to achieve fairness, we need to make sure that a flow enters the AI phase before the link gets congested. In order to satisfy these criteria, we need an appropriate choice of MI, AI and MD parameters that can achieve high utilization while maintaining stability, fairness and small persistent queues. To better understand these issues, we first describe our parameter settings for a simplified network model, where all flows have the same RTT and observe the same state of the network load condition, i.e., all flows obtain the same load factor feedback (Section 3.3). We then generalize our parameter choice for flows with heterogeneous RTTs. Heterogeneous RTTs: When flows have heterogeneous RTTs, different flows can run different algorithms (i.e., MI, AI, or MD) at a given time. This may lead to unpredictable behavior. The RTT heterogeneity can have a significant impact even when all flows run the same algorithm, if this algorithm is MI. In this case, a flow with a lower RTT can claim much more bandwidth than a flow with a higher RTT. To address this problem, end-hosts need to adjust their MI parameters according to their observed RTTs, as discussed in Section 3.4.

Figure 2: The quantized load factor ρ̂_l at a link l is a non-decreasing function of the raw load factor ρ_l and can be represented by a two-bit code ρ̂_l^c.

We now discuss these three design issues in greater detail.

3.2 Load Factor Transition Point

Consider a simple scenario involving a fixed set of long-lived flows. The goal of VCP is to reach a steady state where the system is near full utilization, and the flows use AIMD for congestion control. To achieve this steady state, the choice of the load factor transition point should satisfy three constraints:

- The transition point should be sufficiently high to enable the system to obtain high overall utilization;
- After the flows perform an MD from an overloaded state, the MD step should force the system to always enter the high-load state, not the low-load state;
- If the utilization is marginally lower than the transition point, a single MI step should only lift the system into the high-load state, but not the overload state.

Let β < 1 denote the MD factor, i.e., when using the MD algorithm, the sender reduces the congestion window by the factor β (as in Equation (4) in Section 3.3). The first constraint requires a high transition point. This choice, coupled with the second condition, leads to a high value of β. However, a very high value of β is undesirable as it slows VCP's response to congestion. For example, if the transition point is 95%, then β > 0.95, and it takes VCP about 14 RTTs to halve the congestion window. At the other end, if we chose β = 0.5 (as in TCP [24]), the transition point could be at most 50%, which reduces the overall network utilization. To balance these conflicting requirements, we chose β = 0.875, the same value used in the DECbit scheme [56]. Given β, we set the load factor transition point to 80%. This gives us a safety margin of 7.5%, which allows the system to operate in the AIMD mode in steady state.

In summary, we choose the following three ranges to encode the load factor ρ_l (see Figure 2):

- Low-load region: ρ̂_l = 80% when ρ_l ∈ [0%, 80%);
- High-load region: ρ̂_l = 100% when ρ_l ∈ [80%, 100%);
- Overload region: ρ̂_l > 100% when ρ_l ∈ [100%, ∞).

Thus, the quantized load factor ρ̂_l can be represented using a two-bit code ρ̂_l^c, i.e., ρ̂_l^c = (01)_2, (10)_2 and (11)_2 for ρ̂_l = 80%, ρ̂_l = 100% and ρ̂_l > 100%, respectively. The code (00)_2 is reserved for ECN-unaware source hosts to signal not-ECN-capable-transport to ECN-capable routers, which is essential for incremental deployment [55]. The encoded load factor is embedded in the two-bit ECN field in the IP header.

Estimation of the load factor: Due to the bursty nature of Internet traffic, we need to estimate the load factor over an appropriate time interval, t_ρ. When choosing t_ρ we need to balance two conflicting requirements. On one hand, t_ρ should be larger than the RTTs experienced by most flows to factor out the burstiness induced by the flows' responses to congestion. On the other hand, t_ρ should be small enough to avoid queue buildup. Internet measurements [53, 29] report that roughly 75%-90% of flows have RTTs less than 200ms. Hence, we set t_ρ = 200ms. During every time interval t_ρ, each router estimates a load factor ρ_l for each of its output links l as [26, 33, 17, 2, 41]:

    ρ_l = (λ_l + κ_q · q_l) / (γ_l · C_l · t_ρ),    (1)

where λ_l is the amount of input traffic during the period t_ρ, q_l is the persistent queue length during this period, κ_q controls how fast the persistent queue drains [17, 2] (we set κ_q = 0.5), γ_l is the target utilization [41] (set to a value close to 1), and C_l is the link capacity.
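To make the router-side procedure concrete, the following sketch computes the load factor of Equation (1) over each t_ρ interval and quantizes it into the two-bit code of Figure 2. It is our own illustration under the parameter values quoted above, not the authors' ns-2 implementation, and the class and function names are hypothetical. It also checks the two consistency constraints behind the 80% transition point.

# Illustrative sketch of VCP's router-side behavior (expository only).

LOW, HIGH, OVERLOAD = 0b01, 0b10, 0b11   # two-bit ECN codes; 0b00 = not-ECT

class LoadFactorEstimator:
    def __init__(self, capacity_bps, t_rho=0.200, gamma=0.98, kappa_q=0.5):
        self.capacity_bps = capacity_bps
        self.t_rho = t_rho            # load factor measurement interval (200 ms)
        self.gamma = gamma            # target utilization, close to 1
        self.kappa_q = kappa_q        # how fast the persistent queue is drained
        self.input_bytes = 0          # lambda_l, counted over the interval
        self.persistent_queue = 0.0   # q_l, a low-pass filtered queue sample

    def on_packet(self, size_bytes):
        self.input_bytes += size_bytes

    def sample_queue(self, instantaneous_queue_bytes, weight=0.1):
        # low-pass filter of the instantaneous queue, sampled every t_q << t_rho
        self.persistent_queue += weight * (instantaneous_queue_bytes - self.persistent_queue)

    def end_of_interval(self):
        # Equation (1): rho_l = (lambda_l + kappa_q * q_l) / (gamma_l * C_l * t_rho)
        capacity_bytes = self.capacity_bps / 8.0 * self.t_rho
        rho = (self.input_bytes + self.kappa_q * self.persistent_queue) / (self.gamma * capacity_bytes)
        self.input_bytes = 0
        return rho

def quantize(rho):
    # Figure 2: three load regions, encoded in the two ECN bits
    if rho < 0.80:
        return LOW
    elif rho < 1.00:
        return HIGH
    return OVERLOAD

# Consistency checks behind the 80% transition point (Section 3.2):
# an MD (beta = 0.875) from a just-overloaded state lands in the high-load region,
# and one MI step (xi(80%) = 0.0625) from just below 80% stays below 100%.
assert quantize(0.875 * 1.00) == HIGH
assert quantize(0.80 * 1.0625) == HIGH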
The input traffic λ_l is measured using a packet counter. To measure the persistent queue q_l, we use a low-pass filter that samples the instantaneous queue size, q(t), every t_q ≪ t_ρ (we chose t_q = 10ms).

3.3 Congestion Control Parameter Setting

In this section, we discuss the choice of parameters used by VCP to implement the MI/AI/MD algorithms. To simplify the discussion, we consider a single link shared by flows whose RTTs are equal to the link load factor estimation period, i.e., rtt = t_ρ. Hence, the flows have synchronous feedback and their control intervals are also in sync with the link load factor estimation. We will discuss the case of heterogeneous RTTs in Section 3.4. At any time t, a VCP sender performs one of the following three actions based on the value of the encoded load factor sent by the network:

    MI:  cwnd(t + rtt) = cwnd(t) × (1 + ξ)    (2)
    AI:  cwnd(t + rtt) = cwnd(t) + α          (3)
    MD:  cwnd(t + δt)  = cwnd(t) × β          (4)

where rtt = t_ρ, δt → 0+, ξ > 0, α > 0 and 0 < β < 1. Based on the relationship between the choice of the load factor transition point and the MD parameter β, we chose β = 0.875 (see Section 3.2). We use α = 1.0, as in TCP [24].

Setting the MI parameter: The stability of VCP is dictated by the MI parameter ξ. In network-based rate allocation approaches like XCP, the rate increase of a flow at any time is proportional to the spare capacity available in the network [34]. Translating this into the VCP context, we require the MI step of the congestion window to be proportional to 1 − ρ̂_l, where ρ̂_l represents the current load factor. During the MI phase, the current sending rate of each flow is proportional to the current load factor ρ̂_l. Consequently, we obtain

    ξ(ρ̂_l) = κ · (1 − ρ̂_l) / ρ̂_l,    (5)

where κ is a constant that determines the stability of VCP and controls the speed of convergence toward full utilization. Based on analyzing the stability properties of this algorithm (see Theorem 1 in Section 5), we set κ = 0.25. Since end-hosts only obtain feedback on the utilization region, as opposed to the exact value of the load factor, they need to make the conservative assumption that the network load is near the transition point. Thus, the end-hosts use the value ξ(80%) = 0.0625 in the MI phase.
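Putting Equations (2)-(5) together, the sender-side response can be summarized in a few lines. The sketch below is our own illustration under the synchronous-feedback assumption of this subsection (rtt = t_ρ), not the authors' implementation; the unscaled parameter values are taken from the text above, and the RTT scaling of Section 3.4 is omitted here.

# Illustrative sketch of the VCP sender response for the synchronous case.

LOW, HIGH, OVERLOAD = 0b01, 0b10, 0b11

KAPPA = 0.25
ALPHA = 1.0
BETA = 0.875
XI = KAPPA * (1 - 0.80) / 0.80      # xi(80%) = 0.0625, Equation (5)

def update_cwnd(cwnd, load_region):
    """One congestion-window update per RTT, per Equations (2)-(4)."""
    if load_region == LOW:           # low-load: multiplicative increase
        return cwnd * (1 + XI)
    elif load_region == HIGH:        # high-load: additive increase
        return cwnd + ALPHA
    elif load_region == OVERLOAD:    # overload: multiplicative decrease
        return cwnd * BETA
    return cwnd                      # 0b00: feedback not ECN-capable

# Example: starting from cwnd = 2 packets, MI roughly doubles the window about
# every 11-12 RTTs (1.0625**11 ~ 1.95) until the high-load region is reached.
cwnd = 2.0
for _ in range(12):
    cwnd = update_cwnd(cwnd, LOW)
print(round(cwnd, 2))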

3.4 Handling RTT Heterogeneity with Parameter Scaling

Until now, we have considered the case where competing flows have the same RTT, and this RTT is also equal to the load factor estimation interval, t_ρ. In this section, we relax these assumptions by considering flows with heterogeneous RTTs. To offset the impact of RTT heterogeneity, we need to scale the congestion control parameters used by the end-hosts according to their RTTs.

Scaling the MI/AI parameters: Consider a flow with a round-trip time rtt, and assume that all the routers use the same interval, t_ρ, to estimate the load factor on each link. Let ξ and α represent the unscaled MI and AI parameters as described in Section 3.3, where all flows have an identical RTT (= t_ρ). To handle the case of flows with different RTTs, we set the scaled MI/AI parameters ξ_s and α_s as follows:

    For MI:  ξ_s = (1 + ξ)^(rtt / t_ρ) − 1,    (6)
    For AI:  α_s = α · rtt / t_ρ.              (7)

(Equation (6) is the solution of 1 + ξ = (1 + ξ_s)^(t_ρ / rtt), where the right-hand side is the MI amount of a flow with the RTT value rtt during a time interval t_ρ. Similarly, Equation (7) is obtained by solving 1 + α = 1 + (t_ρ / rtt) · α_s.)

An end-host uses the scaled parameters ξ_s and α_s in (2) and (3) to adjust the congestion window after each RTT. The scaling of these parameters emulates the behavior of all flows having an identical RTT equal to t_ρ. The net result is that over any time period, the window increase under either MI or AI is independent of the flows' RTTs. Thus, unlike TCP, a VCP flow's throughput is not affected by RTT heterogeneity [43, 51, 15].

Handling MD: MD is an impulse-like operation that is not affected by the length of the RTT. Hence, the value of β in (4) need not be scaled with the RTT of the flow. However, to avoid over-reaction to the congestion signal, a flow should perform an MD at most once during an estimation interval t_ρ. Upon getting the first load factor feedback that signals congestion (i.e., ρ̂_l^c = (11)_2), the sender immediately reduces its congestion window cwnd using MD, and then freezes the cwnd for a time period of t_ρ. After this period, the end-host runs AI for one RTT in order to obtain the new load factor.

Scaling for fair rate allocation: RTT-based parameter scaling, as described above, only ensures that the congestion windows of two flows with different RTTs converge to the same value in steady state. However, this does not guarantee fairness, as the rate of a flow is still inversely proportional to its RTT, i.e., rate = cwnd/rtt. To achieve fair rate allocation, we need to add an additional scaling factor to the AI algorithm. To illustrate why this is the case, consider the simple AIMD control mechanism applied to two competing flows, where each flow i (= 1, 2) uses a separate AI parameter α_i but a common MD parameter β.
At the end of the M-th congestion epoch, where each epoch includes n > 1 rounds of AI and one round of MD, we have cwnd_i(M) = β · [cwnd_i(M−1) + n·α_i]. Eventually, each flow i achieves a congestion window that is proportional to its AI parameter α_i. Indeed, the ratio of the congestion windows of the two flows approaches α_1/α_2 for large values of M and n > 1:

    cwnd_1(M) / cwnd_2(M) = [cwnd_1(M−1)/n + α_1] / [cwnd_2(M−1)/n + α_2]
                          = [cwnd_1(M−2)/n² + α_1/n + α_1] / [cwnd_2(M−2)/n² + α_2/n + α_2]
                          → α_1 / α_2.

Hence, to allocate bandwidth fairly among two flows, we need to scale each flow's AI parameter α_i using its own RTT. For this purpose, we use t_ρ as a common-base RTT for all the flows. Thus, the new AI scaling parameter, α_rate, becomes

    For AI:  α_rate = α_s · (rtt / t_ρ) = α · (rtt / t_ρ)².    (8)

3.5 Summary of Parameters

Table 1 summarizes the set of VCP router/end-host parameters and their typical values. We note that throughout all the simulations reported in this paper, we use the same parameter values. This suggests that VCP is robust in a large variety of environments.

Table 1: VCP parameter settings.
    t_ρ    200 ms   the link load factor measurement interval
    t_q    10 ms    the link queue sampling interval
    γ_l    0.98     the link target utilization
    κ_q    0.5      how fast to drain the link steady queue
    κ      0.25     how fast to probe the available bandwidth (MI)
    α      1.0      the AI parameter
    β      0.875    the MD parameter

4. PERFORMANCE EVALUATION

In this section, we use extensive ns2 simulations to evaluate the performance of VCP for a wide range of network scenarios [18], including varying the link capacities in the range [1Kbps, 5Gbps], round-trip times in the range [1ms, 1.5s], numbers of long-lived, FTP-like flows in the range [1, 1], and arrival rates of short-lived, web-like flows in the range [1 s^-1, 1 s^-1]. We always use two-way traffic, with congestion also on the reverse path. The bottleneck buffer size is set to the bandwidth-delay product, or two packets per flow, whichever is larger. The data packet size is 1000 bytes, while the ACK packet is 40 bytes. All simulations are run for at least 12s to ensure that the system has reached its steady state. The average utilization statistics neglect the first 2% of simulation time. For all the time-series graphs, utilization and throughput are averaged over 5ms intervals, while queue length and congestion window are sampled every 1ms. We use the fixed set of VCP parameters listed in Table 1 for all the simulations in this paper.
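To make the parameter scaling of Section 3.4 concrete, the short sketch below computes the scaled MI/AI parameters of Equations (6)-(8) for a few RTTs, using the values in Table 1. It is our own expository sketch with hypothetical names, not the authors' code.

# Illustrative computation of VCP's RTT-scaled parameters, Equations (6)-(8).

T_RHO = 0.200                      # load factor measurement interval (200 ms)
ALPHA = 1.0                        # unscaled AI parameter
XI = 0.25 * (1 - 0.80) / 0.80      # unscaled MI parameter, xi(80%) = 0.0625

def scaled_parameters(rtt):
    ratio = rtt / T_RHO
    xi_s = (1 + XI) ** ratio - 1       # Equation (6)
    alpha_s = ALPHA * ratio            # Equation (7)
    alpha_rate = ALPHA * ratio ** 2    # Equation (8)
    return xi_s, alpha_s, alpha_rate

# Over any interval of length t_rho, a flow applies MI (or AI) t_rho/rtt times,
# so the scaled parameters make the aggregate window growth independent of rtt.
for rtt in (0.050, 0.200, 0.800):
    xi_s, alpha_s, alpha_rate = scaled_parameters(rtt)
    print(f"rtt={rtt*1000:.0f}ms  xi_s={xi_s:.4f}  alpha_s={alpha_s:.2f}  alpha_rate={alpha_rate:.2f}")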

Figure 3: VCP with the bottleneck capacity ranging from 1Kbps to 5Gbps. It achieves high utilization and almost no packet loss, with a decreasing bottleneck queue as the capacity increases. Note the logarithmic scale of the x-axis in this figure and the next one.

Figure 4: VCP with the round-trip propagation delay ranging from 1ms to 15ms. It is able to achieve reasonably high utilization, a low persistent queue and no packet loss.

Figure 5: VCP with the number of long-lived, FTP-like flows ranging from 1 to 1. It achieves high utilization, with a more bursty bottleneck queue for higher numbers of FTP flows.

These simulation results demonstrate that, for a wide range of scenarios, VCP is able to achieve exponential convergence to high utilization, a low persistent queue, a negligible packet drop rate and reasonable fairness, except for its significantly slower fairness convergence speed compared to XCP.

4.1 One Bottleneck

We first evaluate the performance of VCP for the simple case of a single bottleneck link shared by multiple VCP flows. We study the effect of varying the link capacity, the round-trip times, and the number of flows on the performance of VCP. The basic setting is a 15Mbps link with 8ms RTT, where the forward and reverse path each carry 5 FTP flows. We evaluate the impact of each network parameter in isolation while retaining the others as in the basic setting.

Impact of Bottleneck Capacity: As illustrated in Figure 3, we observe that VCP achieves high utilization (about 93%) across a wide range of bottleneck link capacities varying from 1Kbps to 5Gbps. The utilization gap in comparison to XCP is at most 7% across the entire bandwidth range. Additionally, as we scale the bandwidth of the link, the average (maximal) queue length decreases to about 0.1% (1%) of the buffer size. The absolute persistent queue length is very small for higher capacities, leading to negligible packet drop rates (zero packet drops in many cases). At extremely low capacities, e.g., 1Kbps (per-flow BDP of 0.2 packets), the bottleneck average queue significantly increases to 5% of the buffer size, resulting in roughly 0.6% packet loss. This happens because the AI parameter setting (α = 1.0) is too large for such low capacities.

Impact of Feedback Delay: We fix the bottleneck capacity at 15Mbps and vary the round-trip propagation delay from 1ms to 15ms. As shown in Figure 4, we notice that in most cases the bottleneck utilization is higher than 90%, and the average (maximal) queue is less than 5% (15%) of the buffer size. We also observe that the RTT parameter scaling is sensitive to very low values of RTT (e.g., 1ms), thereby causing the average (maximal) queue length to grow to about 15% (45%) of the buffer size.
For RTT values larger than 800ms, VCP obtains lower utilization (85%-94%) since the link load factor measurement interval t_ρ = 200ms is much less than the RTTs of the flows. As a result, the load condition measured in each t_ρ shows variations due to the bursty nature of window-based control. This can be compensated for by increasing t_ρ; the trade-off is that the link load measurement becomes less responsive, causing the queue length to grow. In all these cases, we did not observe any packet drops in VCP.

Impact of Number of Long-lived Flows: With an increase in the number of forward FTP flows, we notice that the traffic gets more bursty, as shown by the increasing trend of the bottleneck maximal queue. However, even when the network is very heavily multiplexed by 1 flows (i.e., the average per-flow BDP equals only 1.5 packets), the maximal queue is still less than 38% of the buffer size. The average queue is consistently less than 5% of the buffer size, as shown in Figure 5, across all these cases. For the heavily multiplexed cases, VCP even slightly outperforms XCP.

Impact of Short-lived Traffic: To study VCP's performance in the presence of variability and burstiness in flow arrivals, we add web traffic into the network.

Figure 6: Similar to XCP, VCP remains efficient with a low persistent queue and zero packet loss given short-lived, web-like flows arriving/departing at a rate from 1/s to 1/s.

Figure 7: VCP with multiple congested bottlenecks. Whether all the links have the same capacity (1Mbps) or the middle link #4 has a lower capacity (5Mbps) than the others, VCP consistently achieves high utilization, a low persistent queue and zero packet drops on all the bottlenecks.

Figure 8: To some extent, VCP distributes bandwidth fairly among competing flows with either equal or different RTTs. In all the cases, it maintains high utilization, keeps the queue small and drops no packets.

These flows arrive according to a Poisson process, with the average arrival rate varying from 1/s to 1/s. Their transfer size obeys a Pareto distribution with an average of 3 packets. This setting is consistent with the real-world web traffic model [11]. As shown by Figure 6, the bottleneck always maintains high utilization with small queue lengths and zero packet drops, similar to XCP.

In summary, we note that across a wide range of network configurations with a single bottleneck link, VCP can achieve performance comparable to XCP, including high utilization, low persistent queues, and negligible packet drops. All these results are achieved with the fixed set of parameters shown in Table 1.

4.2 Multiple Bottlenecks

Next, we study the performance of VCP with a more complex topology of multiple bottlenecks. For this purpose, we use a typical parking-lot topology with seven links. All the links have a 2ms one-way propagation delay. There are 5 FTP flows traversing all the links in the forward direction, and 5 FTP flows in the reverse direction as well. In addition, each individual link has 5 cross FTP flows traversing it in the forward direction. We run two simulations. First, all the links have 1Mbps capacity. Second, the middle link #4 has the smallest capacity of only 5Mbps, while all the others have the same capacity of 1Mbps. Figure 7 shows that for both cases, VCP performs as well as in the single-bottleneck scenarios. For the first case, VCP achieves 94% average utilization, less than 0.2%-buffer-size average queue length and zero packet drops at all the bottlenecks. When we lower the capacity of the middle link, its average utilization increases slightly to 96%, with the largest maximal queue representing only 6.4% of the buffer size. In comparison to XCP, one key difference is that VCP penalizes long flows more than short flows. For example, in the second case, VCP allocates 0.39Mbps to each long flow, and 4.96Mbps to each cross flow that passes the middle link, while all these flows get about 0.85Mbps under XCP.
We discuss the reason behind this in Section 5.

4.3 Fairness

TCP flows with different RTTs achieve bandwidth allocations that are proportional to 1/rtt^z, where 1 ≤ z ≤ 2 [43]. VCP alleviates this issue to some extent. Here we look at the RTT-fairness of VCP. We have 3 FTP flows sharing a single 9Mbps bottleneck, with 3 FTP flows on the reverse path. We perform three sets of simulations: (a) the same RTT; (b) a small RTT difference; (c) a huge RTT difference. We will see that VCP is able to allocate bottleneck bandwidth fairly among competing flows, as long as their RTTs are not significantly different. This capability degrades as the RTT heterogeneity increases. In the case where all the flows have a common RTT or have a small RTT difference, VCP achieves a near-even distribution of the capacity among the competing flows (refer to Figure 8). However, when the flows have significantly different RTTs, VCP does not distribute the bandwidth fairly between the flows that have huge RTT variation (with a throughput ratio of up to 5). This fairness discrepancy occurs due to the following reason.

Figure 9: VCP converges onto good fairness, high utilization and a small queue. However, its fairness convergence takes significantly longer than XCP's.

Figure 10: VCP is robust against and responsive to sudden, considerable traffic demand changes, and at the same time maintains a low persistent bottleneck queue.

A flow with a very high RTT is bound to have high values for its MI and AI parameters due to parameter scaling (see Section 3.4). Due to practical operability constraints, we place artificial bounds on the actual values of these parameters (specifically the MI parameter) to prevent sudden bursts from VCP flows, which could cause the persistent queue length at the bottleneck link to increase substantially. These bounds restrict the throughput of flows with very high RTTs.

4.4 Dynamics

All the previous simulations focus on the steady-state behavior of VCP. Now, we investigate its short-term dynamics.

Convergence Behavior: To study the convergence behavior of VCP, we revert to the single bottleneck link with a bandwidth of 45Mbps, where we introduce 5 flows into the system, one after another, with starting times separated by 1s. We also set the RTT values of the five flows to different values. The reverse path has 5 flows that are always active. Figure 9 illustrates that VCP reallocates bandwidth to new flows whenever they come in, without affecting its high utilization or causing a large instantaneous queue. However, VCP takes a much longer time than XCP to converge to the fair allocation. We theoretically quantify the fairness convergence speed of VCP in Theorem 4 in Section 5.

Sudden Demand Change: We illustrate how VCP reacts to sudden changes in traffic demand using a simple simulation. Consider an initial setting of 5 forward FTP flows with varying RTTs (uniformly chosen in the range [5ms, 15ms]) sharing a 2Mbps bottleneck link. There are 5 FTP flows on the reverse path. At t=8s, 15 new forward FTP flows become active; they then leave at t=14s. Figure 10 clearly shows that VCP can adapt to sudden fluctuations in the traffic demand. (The left panel of the figure draws the congestion window dynamics for four randomly chosen flows.) When the new flows enter the system, the flows adjust their rates to the new fair share while maintaining the link at high utilization. At t=14s, when three-fourths of the flows depart, creating a sudden drop in the utilization, the system quickly discovers this and ramps up to 95% utilization in about 5 seconds. Notice that during the adjustment period, the bottleneck queue remains much lower than its full size. (All the queue dynamics figures in this paper use the router buffer size to scale their queue-length axis.) This simulation shows that VCP is responsive to sudden, significant decreases/increases in the available bandwidth. This is no surprise because VCP switches to the MI mode, which by nature can track any bandwidth change in logarithmic time (see Theorem 3 in Section 5).

We have also performed a variety of other simulations to show VCP's ability to provide bandwidth differentiation. Due to limited space we are unable to present the results here; we refer the reader to our technical report for more details [64].
5. A FLUID MODEL

To obtain insight into the behavior of VCP, in this section we consider a simple fluid model and analyze its stability and fairness properties. We also analyze VCP's efficiency and fairness convergence speed. Our model approximates the behavior of VCP using a load-factor guided algorithm which combines the MI and AI steps of VCP as described in (2) and (3) in Section 3.3:

    ẇ_i(t) = (1/T) · [ w_i(t) · ξ(ρ(t)) + α ],    (9)

with the MI parameter

    ξ(ρ(t)) = κ · (1 − ρ(t)) / ρ(t),    (10)

where κ > 0 is the stability coefficient of the MI parameter. In the remainder of this section we will refer to this model as the MIAIMD model. It assumes infinite router buffers, and that end-hosts know the exact value of the load factor ρ(t), as computed by the routers.

We start our analysis by considering a single bottleneck link traversed by N flows that have the same RTT, T. As shown in Figure 11, the load factor ρ(t) received by the source at time t is computed based on the senders' rates at time t − T:

    ρ(t) = Σ_{i=1}^{N} w_i(t − T) / (γ · C · T),    (11)

where w_i(t) is flow i's congestion window at time t, C is the link capacity, and 0 < γ ≤ 1 is the target link utilization. We assume that w_i(t) is a positive, continuous and differentiable function, and that T is a constant.

Figure 11: A simplified VCP model. The source sending rate at time t − T is used by the router to calculate a load factor ρ, which is echoed back from the destination to the source at time t. Then the source adjusts its MI parameter ξ(ρ(t)) based on the load factor ρ(t).

Since ξ(ρ(t)) is proportional to the available bandwidth, the MIAIMD algorithm tracks the available bandwidth exponentially fast and thus achieves efficiency. It also converges to fairness, as we will show in Theorem 2. (Theorem 2 actually proves max-min fairness for a general multiple-bottleneck topology. For a single link, max-min fairness means that each flow gets an equal share of the link capacity.) Using (9) to sum over all N flows yields

    ẇ(t) = (1/T) · [ w(t) · ξ(ρ(t)) + Nα ],    (12)

where w(t) = Σ_{i=1}^{N} w_i(t) is the sum of all the congestion windows. This result, together with (10) and (11), leads to

    ẇ(t) = (1/T) · { κ · w(t) · [ γCT / w(t − T) − 1 ] + Nα },    (13)

where w(t) > 0. We assume the initial condition w(t) = N (i.e., w_i(t) = 1) for all t ∈ [−T, 0]. In [64], we prove the following global stability result.

Theorem 1. Under the model (9), (10) and (11), where a single bottleneck is shared by a set of synchronous flows with the same RTT, if κ ≤ 1/2, then the delayed differential equation (13) is globally asymptotically stable with a unique equilibrium w* = γCT + Nα/κ, and all the flows have the same steady-state rate r_i* = γC/N + α/(κT).

This result has two implications. First, the sufficient condition κ ≤ 1/2 holds for any link capacity, any feedback delay, and any number of flows. Furthermore, the global stability result does not depend on the network parameters. Second, this result is optimal in that, at the equilibrium, the system achieves all the design goals: high utilization, fairness, zero steady-state queue length, and zero packet loss rate; this is because we can always adjust γ such that the system stabilizes at a steady-state utilization slightly less than 1.

Importance of γ: While (11) defines γ as the target utilization, the actual utilization is w*/(CT) = γ + α/(κP), where P = CT/N is the per-flow BDP. To achieve a certain target utilization γ₀, γ should be treated as a control variable and set to γ = γ₀ − α/(κP). For more details on how to make this adjustment process automatic without even knowing α, κ and P, we refer the reader to [64].

Next, we consider a more general multiple-bottleneck network topology. Let ρ^i(t) denote the maximal link load factor on flow i's path L_i, which includes a subset of links, i.e., L_i = { l | flow i traverses link l }. The MI parameter of flow i is then

    ξ(ρ^i(t)) = κ · [ 1/ρ^i(t) − 1 ],    (14)

where ρ^i(t) = max_{l∈L_i} ρ_l(t), ρ_l(t) = Σ_{i∈I_l} w_i(t − T) / (γ · C_l · T), and I_l = { i | flow i traverses link l } is the subset of flows crossing link l. We prove the following fairness result in [64].

Theorem 2. In a multiple-bottleneck topology where all flows have the same round-trip time T, if there exists a unique equilibrium, then the algorithm defined by (9) and (14) allocates a set of max-min fair rates

    r_i* = α / ( κT · (1 − 1/max_{l∈L_i} ρ_l*) ),

where ρ_l* = Σ_{i∈I_l} w_i* / (γ · C_l · T).

To better understand this result, note that a flow's sending rate is determined by the most congested bottleneck link on its path. Thus, the flows traversing the most congested bottleneck links in the system will naturally experience the lowest throughputs.
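As a reading aid, the rate expression in Theorem 2 follows directly from setting the per-flow dynamics (9) to zero at an equilibrium; the short derivation below is our own sketch, not reproduced from [64]:

\[
\dot{w}_i(t) \;=\; \frac{1}{T}\Bigl[\, w_i\,\kappa\Bigl(\frac{1}{\rho^{i}}-1\Bigr) + \alpha \Bigr] \;=\; 0
\quad\Longrightarrow\quad
w_i^{*} \;=\; \frac{\alpha}{\kappa\,\bigl(1 - 1/\rho^{i*}\bigr)},
\qquad
r_i^{*} \;=\; \frac{w_i^{*}}{T} \;=\; \frac{\alpha}{\kappa T\,\bigl(1 - 1/\rho^{i*}\bigr)},
\]

with \(\rho^{i*} = \max_{l \in L_i} \rho_l^{*}\). A positive window requires \(\rho^{i*} > 1\), i.e., at equilibrium every flow's most congested link runs slightly above its target utilization.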
Having established the stability and fairness properties of the MIAIMD model, we now turn our attention to the convergence of the VCP protocol. The following two theorems, proved in [64], give the convergence properties.

Theorem 3. The VCP protocol takes O(log C) RTTs to claim (or release) a major part of any spare (or over-used) capacity C.

Theorem 4. The VCP protocol takes O(P log ΔP) RTTs to converge onto fairness for any link, where P is the per-flow bandwidth-delay product and ΔP > 1 is the largest congestion window difference between flows sharing that link.

Not surprisingly, due to the use of MI in the low-load region, VCP converges exponentially fast to high utilization. On the other hand, VCP's convergence time to fairness is similar to that of other AIMD-based protocols, such as TCP+AQM. In contrast, explicit feedback schemes like XCP require only O(log ΔP) RTTs to converge to fairness. This is because the end-host based AIMD algorithms improve fairness once per AIMD epoch, which includes O(P) rounds of AI and one round of MD, while the equivalent operation in XCP takes only one RTT.

The VCP protocol can be viewed as an approximation of the MIAIMD model along three axes. First, the MIAIMD model uses the exact load factor feedback, ρ(t), while VCP uses a quantized value of the load factor. Second, in the MI and AI phases, VCP uses either the multiplicative term or the additive term, but not both as the MIAIMD model does. Third, in the overload region, VCP applies a constant MD parameter β instead of ξ(ρ(t)). The comparison between the simulation results of VCP and the analytical results of the MIAIMD model suggests that the two differ most notably in terms of the fairness model. While in the case of multiple bottleneck links the MIAIMD model achieves max-min fairness [4], VCP tends to allocate more bandwidth to flows that traverse fewer bottleneck links (see Section 4.2). This is because VCP relies on the quantized representation of the load factor instead of the exact value.
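To visualize the single-bottleneck fluid model, the following self-contained sketch (ours, purely illustrative) integrates the delay-differential equation (13) with a simple Euler scheme and compares the outcome with the closed-form equilibrium w* = γCT + Nα/κ from Theorem 1. The step size and scenario values are arbitrary choices for illustration, not values from the paper.

# Euler integration of the MIAIMD fluid model, Equation (13); expository only.
N = 10                 # number of flows
C = 1.25e4             # link capacity in packets/s (e.g., 100 Mbps with 1000-byte packets)
T = 0.1                # common RTT in seconds
GAMMA = 0.98           # target utilization
KAPPA = 0.25           # MI stability coefficient (satisfies kappa <= 1/2)
ALPHA = 1.0            # AI parameter

DT = 0.001             # Euler step (seconds)
STEPS = int(60 / DT)   # simulate 60 seconds
DELAY = int(T / DT)    # feedback delay in steps

w_hist = [float(N)] * (DELAY + 1)      # initial condition: w(t) = N on [-T, 0]
for _ in range(STEPS):
    w = w_hist[-1]
    w_delayed = w_hist[-1 - DELAY]
    # Equation (13): dw/dt = (1/T) * ( kappa*w*(gamma*C*T/w(t-T) - 1) + N*alpha )
    dwdt = (KAPPA * w * (GAMMA * C * T / w_delayed - 1.0) + N * ALPHA) / T
    w_hist.append(w + DT * dwdt)

w_star = GAMMA * C * T + N * ALPHA / KAPPA   # Theorem 1 equilibrium
print(f"final w(t) = {w_hist[-1]:.1f} packets, analytic w* = {w_star:.1f} packets")
# Note: the resulting utilization w*/(CT) is gamma + alpha/(kappa*P), slightly
# above gamma here, which is exactly the adjustment discussed under "Importance of gamma".
print(f"steady-state utilization ~ {w_hist[-1] / (C * T):.3f}")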

6. DISCUSSIONS

Since VCP switches between the MI, AI, and MD algorithms based on the load factor feedback, there are natural concerns with respect to the impact of these switches on system stability, efficiency, and fairness, particularly in systems with highly heterogeneous RTTs. We discuss these concerns in this section. We discuss VCP's TCP-friendliness and incremental deployment in [64].

6.1 Stability under Heterogeneous Delays

Although the MIAIMD model presented in Section 5 is provably stable, it assumes synchronous feedback. To accommodate heterogeneous delays, VCP scales the MI/AI parameters such that flows with different RTTs act as if they had the same RTT. This scaling mechanism is also essential to achieving fair bandwidth allocation, as discussed in Section 3.4. In normal circumstances, VCP makes a transition to MD only from AI. Moreover, even if VCP switches directly from MD to MI, if the traffic demand at the router does not change significantly, VCP will eventually slide back into AI. Finally, to prevent the system from oscillating between MI and MD, we set the load factor transition point ρ̂_l to 80%, and set the MD parameter β to 0.875 > ρ̂_l. This gives us a safety margin of 7.5%. The extensive simulation results presented in Section 4 suggest that VCP is indeed stable over a large variety of network scenarios, including per-flow bandwidths of up to 1Mbps and RTTs of up to 1.5s.

6.2 Influences of Mode Sliding

From an efficiency perspective, VCP's goal is to bring and maintain the system in the high-utilization region. While MI enables VCP to quickly reach high link utilization, VCP also needs to make sure that the system remains in this state. The main mechanism employed by VCP to achieve this goal is the scaling of the MI/AI parameters for flows with different RTTs. In addition to improving fairness, this scaling is essential to avoid oscillations. Otherwise, a flow with a low RTT might apply MI several times during the estimation interval, t_ρ, of the link load factor. Other mechanisms employed by VCP to maintain high efficiency include choosing an appropriate value of the MD parameter to remain in the high-utilization region, using a safety margin between MI and AI, and bounding the burstiness (Section 4.3).

As discussed in Section 3.4, there are two major concerns with respect to fairness. First, a flow with a small RTT probes the network faster than a flow with a large RTT. Thus, the former may increase its bandwidth much faster than the latter. Second, it will take longer for a large-RTT flow to switch from MI to AI than for a small-RTT flow. This may give the large-RTT flow an unfair advantage. VCP addresses the first issue by using the RTT scaling mechanism (see (6)-(7)). To address the second issue, VCP bounds the MI gain, as discussed in Section 4.3. To illustrate the effectiveness of limiting the MI gain, Figure 12 shows the congestion window evolution for two flows with RTTs of 5ms and 5ms, respectively, traversing a single 1Mbps link. At time 12.06s, the smaller-RTT flow switches from MI to AI. In contrast, due to its larger RTT, the larger-RTT flow keeps performing MI until time 12.37s. However, because VCP limits the MI gain of the larger-RTT flow, the additional bandwidth acquired by this flow during the 0.31s interval is only marginal when compared to the bandwidth acquired by the smaller-RTT flow.
Due to its longer delay, the larger-rtt flow always slides its mode later than the one with smaller RTT (see the regions labeled as A and B). However, the effect of this asynchronous switching is accommodated by VCP and does not prevent it from maintaining stability and achieving efficiency and fairness. bandwidth acquired by this flow during the.31s interval is only marginal when compared to the bandwidth acquired by the 5ms-RTT flow. 7. RELATED WORK This paper builds upon a great body of related work, particularly XCP [34], TCP [24, 1, 49], AIMD [1, 27] and ECN [55, 56]. Congestion control is pioneered by TCP and AIMD. The research on AQM starts from RED [17], followed by Blue [14], REM [2], PI controller [22], AVQ [21, 41], and CHOKe [52], etc. Below we relate VCP to three categories of congestion control schemes and a set of analytical results. Explicit rate based schemes: XCP regulates source sending rate with decoupled efficiency control and fairness control and achieves excellent performance. ATM ABR service (e.g., see [39, 9, 32, 26, 33]) previously proposes explicit rate control. VCP learns from these schemes. In contrast, VCP is primarily an end-host based protocol. This key difference brings new design challenges not faced by XCP (and the ATM ABR schemes) and thus VCP is not just a twobit version of XCP. The idea of classifying network load into different regions is originally presented in [28]. The link load factor is suggested as a congestion signal in [26], based on which VCP quantizes and encodes it for a more compact representation for the degree of congestion. MaxNet [63] uses the maximal congestion information among all the bottlenecks to achieve max-min fairness. QuickStart [25] occasionally uses several bits per packet to quickly ramp up source sending rates. VCP is complementary to QuickStart as it constantly uses two bits per packet. Congestion notification based schemes: For high BDP networks, according to [34], the performance gap between XCP and TCP+RED/REM/AVQ/CSFQ [58] with one-bit ECN support seems large. VCP generalizes one-bit ECN and applies some ideas from these AQM schemes. For example, RED s queue-averaging idea, REM s match-rate-clearbuffer idea and AVQ s virtual-capacity idea obviously find themselves in VCP s load factor calculation in Equation (1). This paper demonstrates that the marginal performance gain from one-bit to two-bit ECN feedback could be significant. Two-bit ECN is also used to choose different decrease parameters for TCP in [13], which is very different from the way VCP uses. On the end-host side, GAIMD [66] and the binomial control [3] generalize the AIMD algorithm, while VCP goes even further to combine MIMD with AIMD. B


More information

! Network bandwidth shared by all users! Given routing, how to allocate bandwidth. " efficiency " fairness " stability. !

! Network bandwidth shared by all users! Given routing, how to allocate bandwidth.  efficiency  fairness  stability. ! Motivation Network Congestion Control EL 933, Class10 Yong Liu 11/22/2005! Network bandwidth shared by all users! Given routing, how to allocate bandwidth efficiency fairness stability! Challenges distributed/selfish/uncooperative

More information

In-network Resource Allocation (Scribed by Ambuj Ojha)

In-network Resource Allocation (Scribed by Ambuj Ojha) In-network Resource Allocation (Scribed by Ambuj Ojha) Key Insight Adding intelligence in routers can improve congestion control relative to purely end to end schemes as seen in TCP Tahoe and BBR. We discuss

More information

Investigating the Use of Synchronized Clocks in TCP Congestion Control

Investigating the Use of Synchronized Clocks in TCP Congestion Control Investigating the Use of Synchronized Clocks in TCP Congestion Control Michele Weigle (UNC-CH) November 16-17, 2001 Univ. of Maryland Symposium The Problem TCP Reno congestion control reacts only to packet

More information

Transmission Control Protocol (TCP)

Transmission Control Protocol (TCP) TETCOS Transmission Control Protocol (TCP) Comparison of TCP Congestion Control Algorithms using NetSim @2017 Tetcos. This document is protected by copyright, all rights reserved Table of Contents 1. Abstract....

More information

Congestion Control. Andreas Pitsillides University of Cyprus. Congestion control problem

Congestion Control. Andreas Pitsillides University of Cyprus. Congestion control problem Congestion Control Andreas Pitsillides 1 Congestion control problem growing demand of computer usage requires: efficient ways of managing network traffic to avoid or limit congestion in cases where increases

More information

Lecture 14: Congestion Control"

Lecture 14: Congestion Control Lecture 14: Congestion Control" CSE 222A: Computer Communication Networks Alex C. Snoeren Thanks: Amin Vahdat, Dina Katabi Lecture 14 Overview" TCP congestion control review XCP Overview 2 Congestion Control

More information

Congestion Control. Tom Anderson

Congestion Control. Tom Anderson Congestion Control Tom Anderson Bandwidth Allocation How do we efficiently share network resources among billions of hosts? Congestion control Sending too fast causes packet loss inside network -> retransmissions

More information

Congestion Control for High-Bandwidth-Delay-Product Networks: XCP vs. HighSpeed TCP and QuickStart

Congestion Control for High-Bandwidth-Delay-Product Networks: XCP vs. HighSpeed TCP and QuickStart Congestion Control for High-Bandwidth-Delay-Product Networks: XCP vs. HighSpeed TCP and QuickStart Sally Floyd September 11, 2002 ICIR Wednesday Lunch 1 Outline: Description of the problem. Description

More information

Internet Congestion Control for Future High Bandwidth-Delay Product Environments

Internet Congestion Control for Future High Bandwidth-Delay Product Environments Internet Congestion Control for Future High Bandwidth-Delay Product Environments Dina Katabi Mark Handley Charlie Rohrs MIT-LCS ICSI Tellabs dk@mit.edu mjh@icsi.berkeley.edu crhors@mit.edu Abstract Theory

More information

Congestion. Can t sustain input rate > output rate Issues: - Avoid congestion - Control congestion - Prioritize who gets limited resources

Congestion. Can t sustain input rate > output rate Issues: - Avoid congestion - Control congestion - Prioritize who gets limited resources Congestion Source 1 Source 2 10-Mbps Ethernet 100-Mbps FDDI Router 1.5-Mbps T1 link Destination Can t sustain input rate > output rate Issues: - Avoid congestion - Control congestion - Prioritize who gets

More information

Equation-Based Congestion Control for Unicast Applications. Outline. Introduction. But don t we need TCP? TFRC Goals

Equation-Based Congestion Control for Unicast Applications. Outline. Introduction. But don t we need TCP? TFRC Goals Equation-Based Congestion Control for Unicast Applications Sally Floyd, Mark Handley AT&T Center for Internet Research (ACIRI) Jitendra Padhye Umass Amherst Jorg Widmer International Computer Science Institute

More information

Appendix B. Standards-Track TCP Evaluation

Appendix B. Standards-Track TCP Evaluation 215 Appendix B Standards-Track TCP Evaluation In this appendix, I present the results of a study of standards-track TCP error recovery and queue management mechanisms. I consider standards-track TCP error

More information

RED behavior with different packet sizes

RED behavior with different packet sizes RED behavior with different packet sizes Stefaan De Cnodder, Omar Elloumi *, Kenny Pauwels Traffic and Routing Technologies project Alcatel Corporate Research Center, Francis Wellesplein, 1-18 Antwerp,

More information

An Adaptive Virtual Queue (AVQ) Algorithm for Active Queue Management

An Adaptive Virtual Queue (AVQ) Algorithm for Active Queue Management University of Pennsylvania ScholarlyCommons Departmental Papers (ESE) Department of Electrical & Systems Engineering April 2004 An Adaptive Virtual Queue (AVQ) Algorithm for Active Queue Management Srisankar

More information

15-744: Computer Networking TCP

15-744: Computer Networking TCP 15-744: Computer Networking TCP Congestion Control Congestion Control Assigned Reading [Jacobson and Karels] Congestion Avoidance and Control [TFRC] Equation-Based Congestion Control for Unicast Applications

More information

Lecture 14: Congestion Control"

Lecture 14: Congestion Control Lecture 14: Congestion Control" CSE 222A: Computer Communication Networks George Porter Thanks: Amin Vahdat, Dina Katabi and Alex C. Snoeren Lecture 14 Overview" TCP congestion control review Dukkipati

More information

TCP Congestion Control : Computer Networking. Introduction to TCP. Key Things You Should Know Already. Congestion Control RED

TCP Congestion Control : Computer Networking. Introduction to TCP. Key Things You Should Know Already. Congestion Control RED TCP Congestion Control 15-744: Computer Networking L-4 TCP Congestion Control RED Assigned Reading [FJ93] Random Early Detection Gateways for Congestion Avoidance [TFRC] Equation-Based Congestion Control

More information

CSE 123A Computer Networks

CSE 123A Computer Networks CSE 123A Computer Networks Winter 2005 Lecture 14 Congestion Control Some images courtesy David Wetherall Animations by Nick McKeown and Guido Appenzeller The bad news and the good news The bad news: new

More information

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks Hybrid Control and Switched Systems Lecture #17 Hybrid Systems Modeling of Communication Networks João P. Hespanha University of California at Santa Barbara Motivation Why model network traffic? to validate

More information

Oscillations and Buffer Overflows in Video Streaming under Non- Negligible Queuing Delay

Oscillations and Buffer Overflows in Video Streaming under Non- Negligible Queuing Delay Oscillations and Buffer Overflows in Video Streaming under Non- Negligible Queuing Delay Presented by Seong-Ryong Kang Yueping Zhang and Dmitri Loguinov Department of Computer Science Texas A&M University

More information

Flow-start: Faster and Less Overshoot with Paced Chirping

Flow-start: Faster and Less Overshoot with Paced Chirping Flow-start: Faster and Less Overshoot with Paced Chirping Joakim Misund, Simula and Uni Oslo Bob Briscoe, Independent IRTF ICCRG, Jul 2018 The Slow-Start

More information

6.033 Spring 2015 Lecture #11: Transport Layer Congestion Control Hari Balakrishnan Scribed by Qian Long

6.033 Spring 2015 Lecture #11: Transport Layer Congestion Control Hari Balakrishnan Scribed by Qian Long 6.033 Spring 2015 Lecture #11: Transport Layer Congestion Control Hari Balakrishnan Scribed by Qian Long Please read Chapter 19 of the 6.02 book for background, especially on acknowledgments (ACKs), timers,

More information

An Adaptive Virtual Queue (AVQ) Algorithm for Active Queue Management

An Adaptive Virtual Queue (AVQ) Algorithm for Active Queue Management An Adaptive Virtual Queue (AVQ) Algorithm for Active Queue Management Srisankar Kunniyur kunniyur@ee.upenn.edu R. Srikant rsrikant@uiuc.edu December 16, 22 Abstract Virtual Queue-based marking schemes

More information

Congestion Control in Communication Networks

Congestion Control in Communication Networks Congestion Control in Communication Networks Introduction Congestion occurs when number of packets transmitted approaches network capacity Objective of congestion control: keep number of packets below

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 1 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Increase-Decrease Congestion Control for Real-time Streaming: Scalability

Increase-Decrease Congestion Control for Real-time Streaming: Scalability Increase-Decrease Congestion Control for Real-time Streaming: Scalability Dmitri Loguinov City University of New York Hayder Radha Michigan State University 1 Motivation Current Internet video streaming

More information

Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks

Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks Sergey Gorinsky Harrick Vin Technical Report TR2000-32 Department of Computer Sciences, University of Texas at Austin Taylor Hall

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Reliable Transport II: TCP and Congestion Control

Reliable Transport II: TCP and Congestion Control Reliable Transport II: TCP and Congestion Control Brad Karp UCL Computer Science CS 3035/GZ01 31 st October 2013 Outline Slow Start AIMD Congestion control Throughput, loss, and RTT equation Connection

More information

Random Early Detection (RED) gateways. Sally Floyd CS 268: Computer Networks

Random Early Detection (RED) gateways. Sally Floyd CS 268: Computer Networks Random Early Detection (RED) gateways Sally Floyd CS 268: Computer Networks floyd@eelblgov March 20, 1995 1 The Environment Feedback-based transport protocols (eg, TCP) Problems with current Drop-Tail

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

A report on a few steps in the evolution of congestion control. Sally Floyd June 10, 2002 IPAM Program in Large Scale Communication Networks

A report on a few steps in the evolution of congestion control. Sally Floyd June 10, 2002 IPAM Program in Large Scale Communication Networks A report on a few steps in the evolution of congestion control Sally Floyd June 10, 2002 IPAM Program in Large Scale Communication Networks 1 Topics: High-speed TCP. Faster Start-up? AQM: Adaptive RED

More information

ADVANCED COMPUTER NETWORKS

ADVANCED COMPUTER NETWORKS ADVANCED COMPUTER NETWORKS Congestion Control and Avoidance 1 Lecture-6 Instructor : Mazhar Hussain CONGESTION CONTROL When one part of the subnet (e.g. one or more routers in an area) becomes overloaded,

More information

Congestion Avoidance

Congestion Avoidance COMP 631: NETWORKED & DISTRIBUTED SYSTEMS Congestion Avoidance Jasleen Kaur Fall 2016 1 Avoiding Congestion: Strategies TCP s strategy: congestion control Ø Control congestion once it occurs Repeatedly

More information

CS321: Computer Networks Congestion Control in TCP

CS321: Computer Networks Congestion Control in TCP CS321: Computer Networks Congestion Control in TCP Dr. Manas Khatua Assistant Professor Dept. of CSE IIT Jodhpur E-mail: manaskhatua@iitj.ac.in Causes and Cost of Congestion Scenario-1: Two Senders, a

More information

Communication Networks

Communication Networks Communication Networks Spring 2018 Laurent Vanbever nsg.ee.ethz.ch ETH Zürich (D-ITET) April 30 2018 Materials inspired from Scott Shenker & Jennifer Rexford Last week on Communication Networks We started

More information

CS 356: Computer Network Architectures Lecture 19: Congestion Avoidance Chap. 6.4 and related papers. Xiaowei Yang

CS 356: Computer Network Architectures Lecture 19: Congestion Avoidance Chap. 6.4 and related papers. Xiaowei Yang CS 356: Computer Network Architectures Lecture 19: Congestion Avoidance Chap. 6.4 and related papers Xiaowei Yang xwy@cs.duke.edu Overview More on TCP congestion control Theory Macroscopic behavior TCP

More information

CS4700/CS5700 Fundamentals of Computer Networks

CS4700/CS5700 Fundamentals of Computer Networks CS4700/CS5700 Fundamentals of Computer Networks Lecture 15: Congestion Control Slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica, Hui Zhang Alan Mislove amislove at ccs.neu.edu

More information

Analyzing the Receiver Window Modification Scheme of TCP Queues

Analyzing the Receiver Window Modification Scheme of TCP Queues Analyzing the Receiver Window Modification Scheme of TCP Queues Visvasuresh Victor Govindaswamy University of Texas at Arlington Texas, USA victor@uta.edu Gergely Záruba University of Texas at Arlington

More information

Congestion Avoidance

Congestion Avoidance Congestion Avoidance Richard T. B. Ma School of Computing National University of Singapore CS 5229: Advanced Compute Networks References K. K. Ramakrishnan, Raj Jain, A Binary Feedback Scheme for Congestion

More information

Lecture 21: Congestion Control" CSE 123: Computer Networks Alex C. Snoeren

Lecture 21: Congestion Control CSE 123: Computer Networks Alex C. Snoeren Lecture 21: Congestion Control" CSE 123: Computer Networks Alex C. Snoeren Lecture 21 Overview" How fast should a sending host transmit data? Not to fast, not to slow, just right Should not be faster than

More information

TCP Congestion Control

TCP Congestion Control 6.033, Spring 2014 TCP Congestion Control Dina Katabi & Sam Madden nms.csail.mit.edu/~dina Sharing the Internet How do you manage resources in a huge system like the Internet, where users with different

More information

Utility-Based Rate Control in the Internet for Elastic Traffic

Utility-Based Rate Control in the Internet for Elastic Traffic 272 IEEE TRANSACTIONS ON NETWORKING, VOL. 10, NO. 2, APRIL 2002 Utility-Based Rate Control in the Internet for Elastic Traffic Richard J. La and Venkat Anantharam, Fellow, IEEE Abstract In a communication

More information

ECE 333: Introduction to Communication Networks Fall 2001

ECE 333: Introduction to Communication Networks Fall 2001 ECE 333: Introduction to Communication Networks Fall 2001 Lecture 28: Transport Layer III Congestion control (TCP) 1 In the last lecture we introduced the topics of flow control and congestion control.

More information

RECHOKe: A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks

RECHOKe: A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks > REPLACE THIS LINE WITH YOUR PAPER IDENTIFICATION NUMBER (DOUBLE-CLICK HERE TO EDIT) < : A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks Visvasuresh Victor Govindaswamy,

More information

Congestion Control for High Bandwidth-Delay Product Networks

Congestion Control for High Bandwidth-Delay Product Networks Congestion Control for High Bandwidth-Delay Product Networks Dina Katabi Mark Handley Charlie Rohrs MIT-LCS ICSI Tellabs dk@mit.edu mjh@icsi.berkeley.edu crhors@mit.edu ABSTRACT Theory and experiments

More information

CSCE 463/612 Networks and Distributed Processing Spring 2017

CSCE 463/612 Networks and Distributed Processing Spring 2017 CSCE 463/612 Networks and Distributed Processing Spring 2017 Transport Layer VI Dmitri Loguinov Texas A&M University March 28, 2017 Original slides copyright 1996-2004 J.F Kurose and K.W. Ross 1 Chapter

More information

Principles of congestion control

Principles of congestion control Principles of congestion control Congestion: Informally: too many sources sending too much data too fast for network to handle Different from flow control! Manifestations: Lost packets (buffer overflow

More information

Implementing stable TCP variants

Implementing stable TCP variants Implementing stable TCP variants IPAM Workshop on Large Scale Communications Networks April 2002 Tom Kelly ctk21@cam.ac.uk Laboratory for Communication Engineering University of Cambridge Implementing

More information

Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks. Congestion Control in Today s Internet

Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks. Congestion Control in Today s Internet Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks Ion Stoica CMU Scott Shenker Xerox PARC Hui Zhang CMU Congestion Control in Today s Internet Rely

More information

Recap. TCP connection setup/teardown Sliding window, flow control Retransmission timeouts Fairness, max-min fairness AIMD achieves max-min fairness

Recap. TCP connection setup/teardown Sliding window, flow control Retransmission timeouts Fairness, max-min fairness AIMD achieves max-min fairness Recap TCP connection setup/teardown Sliding window, flow control Retransmission timeouts Fairness, max-min fairness AIMD achieves max-min fairness 81 Feedback Signals Several possible signals, with different

More information

Chapter III: Transport Layer

Chapter III: Transport Layer Chapter III: Transport Layer UG3 Computer Communications & Networks (COMN) Mahesh Marina mahesh@ed.ac.uk Slides thanks to Myungjin Lee and copyright of Kurose and Ross Principles of congestion control

More information

Congestion Control. Daniel Zappala. CS 460 Computer Networking Brigham Young University

Congestion Control. Daniel Zappala. CS 460 Computer Networking Brigham Young University Congestion Control Daniel Zappala CS 460 Computer Networking Brigham Young University 2/25 Congestion Control how do you send as fast as possible, without overwhelming the network? challenges the fastest

More information

CS 556 Advanced Computer Networks Spring Solutions to Midterm Test March 10, YOUR NAME: Abraham MATTA

CS 556 Advanced Computer Networks Spring Solutions to Midterm Test March 10, YOUR NAME: Abraham MATTA CS 556 Advanced Computer Networks Spring 2011 Solutions to Midterm Test March 10, 2011 YOUR NAME: Abraham MATTA This test is closed books. You are only allowed to have one sheet of notes (8.5 11 ). Please

More information

Improving Internet Performance through Traffic Managers

Improving Internet Performance through Traffic Managers Improving Internet Performance through Traffic Managers Ibrahim Matta Computer Science Department Boston University Computer Science A Glimpse of Current Internet b b b b Alice c TCP b (Transmission Control

More information

Router Design: Table Lookups and Packet Scheduling EECS 122: Lecture 13

Router Design: Table Lookups and Packet Scheduling EECS 122: Lecture 13 Router Design: Table Lookups and Packet Scheduling EECS 122: Lecture 13 Department of Electrical Engineering and Computer Sciences University of California Berkeley Review: Switch Architectures Input Queued

More information

Appendix A. Methodology

Appendix A. Methodology 193 Appendix A Methodology In this appendix, I present additional details of the evaluation of Sync-TCP described in Chapter 4. In Section A.1, I discuss decisions made in the design of the network configuration.

More information

CS3600 SYSTEMS AND NETWORKS

CS3600 SYSTEMS AND NETWORKS CS3600 SYSTEMS AND NETWORKS NORTHEASTERN UNIVERSITY Lecture 24: Congestion Control Prof. Alan Mislove (amislove@ccs.neu.edu) Slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica,

More information

Promoting the Use of End-to-End Congestion Control in the Internet

Promoting the Use of End-to-End Congestion Control in the Internet Promoting the Use of End-to-End Congestion Control in the Internet Sally Floyd and Kevin Fall IEEE/ACM Transactions on Networking May 1999 ACN: TCP Friendly 1 Outline The problem of Unresponsive Flows

More information

CSCI-1680 Transport Layer III Congestion Control Strikes Back Rodrigo Fonseca

CSCI-1680 Transport Layer III Congestion Control Strikes Back Rodrigo Fonseca CSCI-1680 Transport Layer III Congestion Control Strikes Back Rodrigo Fonseca Based partly on lecture notes by David Mazières, Phil Levis, John Jannotti, Ion Stoica Last Time Flow Control Congestion Control

More information

Distributed ECN-Based Congestion Control

Distributed ECN-Based Congestion Control 1 Distributed ECN-Based Congestion Control Xiaolong Li Homayoun Yousefi zadeh Department of EECS University of California, Irvine [xiaolonl,hyousefi]@uci.edu Abstract Following the design philosophy of

More information

Improving TCP Performance over Wireless Networks using Loss Predictors

Improving TCP Performance over Wireless Networks using Loss Predictors Improving TCP Performance over Wireless Networks using Loss Predictors Fabio Martignon Dipartimento Elettronica e Informazione Politecnico di Milano P.zza L. Da Vinci 32, 20133 Milano Email: martignon@elet.polimi.it

More information

CS519: Computer Networks. Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control

CS519: Computer Networks. Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control : Computer Networks Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control TCP performance We ve seen how TCP the protocol works Sequencing, receive window, connection setup and teardown And

More information

Open Box Protocol (OBP)

Open Box Protocol (OBP) Open Box Protocol (OBP) Paulo Loureiro 1, Saverio Mascolo 2, Edmundo Monteiro 3 1 Polytechnic Institute of Leiria, Leiria, Portugal, loureiro.pjg@gmail.pt 2 Politecnico di Bari, Bari, Italy, saverio.mascolo@gmail.com

More information

Incremental deployment of new ECN-compatible congestion control

Incremental deployment of new ECN-compatible congestion control Incremental deployment of new ECN-compatible congestion control ABSTRACT Ihsan Ayyub Qazi, Taieb Znati Department of Computer Science University of Pittsburgh Pittsburgh, PA 1526, USA {ihsan,znati}@cs.pitt.edu

More information

TCP SIAD: Congestion Control supporting Low Latency and High Speed

TCP SIAD: Congestion Control supporting Low Latency and High Speed TCP SIAD: Congestion Control supporting Low Latency and High Speed Mirja Kühlewind IETF91 Honolulu ICCRG Nov 11, 2014 Outline Current Research Challenges Scalability in

More information

Performance Consequences of Partial RED Deployment

Performance Consequences of Partial RED Deployment Performance Consequences of Partial RED Deployment Brian Bowers and Nathan C. Burnett CS740 - Advanced Networks University of Wisconsin - Madison ABSTRACT The Internet is slowly adopting routers utilizing

More information

Outline Computer Networking. TCP slow start. TCP modeling. TCP details AIMD. Congestion Avoidance. Lecture 18 TCP Performance Peter Steenkiste

Outline Computer Networking. TCP slow start. TCP modeling. TCP details AIMD. Congestion Avoidance. Lecture 18 TCP Performance Peter Steenkiste Outline 15-441 Computer Networking Lecture 18 TCP Performance Peter Steenkiste Fall 2010 www.cs.cmu.edu/~prs/15-441-f10 TCP congestion avoidance TCP slow start TCP modeling TCP details 2 AIMD Distributed,

More information

On Network Dimensioning Approach for the Internet

On Network Dimensioning Approach for the Internet On Dimensioning Approach for the Internet Masayuki Murata ed Environment Division Cybermedia Center, (also, Graduate School of Engineering Science, ) e-mail: murata@ics.es.osaka-u.ac.jp http://www-ana.ics.es.osaka-u.ac.jp/

More information

Bandwidth Allocation & TCP

Bandwidth Allocation & TCP Bandwidth Allocation & TCP The Transport Layer Focus Application Presentation How do we share bandwidth? Session Topics Transport Network Congestion control & fairness Data Link TCP Additive Increase/Multiplicative

More information

DCP-EW: Distributed Congestion-control Protocol for Encrypted Wireless Networks

DCP-EW: Distributed Congestion-control Protocol for Encrypted Wireless Networks DCP-EW: Distributed Congestion-control Protocol for Encrypted Wireless Networks Xiaolong Li Homayoun Yousefi zadeh Department of EECS University of California, Irvine [xiaolonl, hyousefi]@uci.edu Abstract

More information

TCP and BBR. Geoff Huston APNIC

TCP and BBR. Geoff Huston APNIC TCP and BBR Geoff Huston APNIC Computer Networking is all about moving data The way in which data movement is controlled is a key characteristic of the network architecture The Internet protocol passed

More information

CSE 461. TCP and network congestion

CSE 461. TCP and network congestion CSE 461 TCP and network congestion This Lecture Focus How should senders pace themselves to avoid stressing the network? Topics Application Presentation Session Transport Network congestion collapse Data

More information

The War Between Mice and Elephants

The War Between Mice and Elephants The War Between Mice and Elephants (by Liang Guo and Ibrahim Matta) Treating Short Connections fairly against Long Connections when they compete for Bandwidth. Advanced Computer Networks CS577 Fall 2013

More information

Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes

Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes Zhili Zhao Dept. of Elec. Engg., 214 Zachry College Station, TX 77843-3128 A. L. Narasimha Reddy

More information

Episode 5. Scheduling and Traffic Management

Episode 5. Scheduling and Traffic Management Episode 5. Scheduling and Traffic Management Part 3 Baochun Li Department of Electrical and Computer Engineering University of Toronto Outline What is scheduling? Why do we need it? Requirements of a scheduling

More information

ECSE-6600: Internet Protocols Spring 2007, Exam 1 SOLUTIONS

ECSE-6600: Internet Protocols Spring 2007, Exam 1 SOLUTIONS ECSE-6600: Internet Protocols Spring 2007, Exam 1 SOLUTIONS Time: 75 min (strictly enforced) Points: 50 YOUR NAME (1 pt): Be brief, but DO NOT omit necessary detail {Note: Simply copying text directly

More information

CUBIC. Qian HE (Steve) CS 577 Prof. Bob Kinicki

CUBIC. Qian HE (Steve) CS 577 Prof. Bob Kinicki CUBIC Qian HE (Steve) CS 577 Prof. Bob Kinicki Agenda Brief Introduction of CUBIC Prehistory of CUBIC Standard TCP BIC CUBIC Conclusion 1 Brief Introduction CUBIC is a less aggressive and more systematic

More information

CPSC 826 Internetworking. Congestion Control Approaches Outline. Router-Based Congestion Control Approaches. Router-Based Approaches Papers

CPSC 826 Internetworking. Congestion Control Approaches Outline. Router-Based Congestion Control Approaches. Router-Based Approaches Papers 1 CPSC 826 Internetworking Router-Based Congestion Control Approaches Michele Weigle Department of Computer Science Clemson University mweigle@cs.clemson.edu October 25, 2004 http://www.cs.clemson.edu/~mweigle/courses/cpsc826

More information

Congestion Control & Transport protocols

Congestion Control & Transport protocols Congestion Control & Transport protocols from New Internet and Networking Technologies for Grids and High-Performance Computing, tutorial given at HiPC 04, Bangalore, India December 22nd, 2004 C. Pham

More information

Reasons not to Parallelize TCP Connections for Fast Long-Distance Networks

Reasons not to Parallelize TCP Connections for Fast Long-Distance Networks Reasons not to Parallelize TCP Connections for Fast Long-Distance Networks Zongsheng Zhang Go Hasegawa Masayuki Murata Osaka University Contents Introduction Analysis of parallel TCP mechanism Numerical

More information