Loss Proportional Decrease Based Congestion Control in the Future Internet


Tae-eun Kim, Songwu Lu and Vaduvur Bharghavan
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign, Urbana, IL
E-mails: tkim, slu, ...

Abstract—In recent years, a number of router technologies have emerged wherein the routers can provide fair packet marking/dropping with early congestion detection/notification. For a network of fair packet marking/dropping routers, we propose a new congestion control algorithm based on the Linear Increase/Loss Proportional Decrease (LIPD) paradigm, and compare it with the predominant Linear Increase/Multiplicative Decrease (LIMD) congestion control algorithm deployed in today's Internet. The key difference between LIPD and LIMD is that in LIPD, the reduction of the transmission rate of a connection upon loss feedback is proportional to the fraction of lost packets, while in LIMD, the reduction is independent of the loss fraction. We show that LIPD is more efficient, less sensitive to random loss, fair, and robust to short-term deviations from fair marking/dropping. We further refine the LIPD congestion control algorithm by incorporating the notion of congestion history. The resulting algorithm, LIPD/H, is able to predict the cause of packet loss (distinguishing random packet loss and channel capacity probe loss from congestion loss) and react appropriately. Through simulation and analysis, we show that LIPD/H is a viable congestion control algorithm in a network wherein all the routers perform fair packet marking/dropping with early congestion feedback.

I. INTRODUCTION

Congestion control in today's Internet is typically achieved through a combination of packet dropping at a network router and a rate/window adaptation mechanism at the end hosts. When network congestion is detected, the router executes a packet dropping policy to drop a fraction of incoming packets, and the end hosts throttle the transmission rates of connections upon loss feedback in order to quickly alleviate congestion. The typical packet dropping policy implemented in Internet routers to date has been tail drop¹. At the end hosts, linear increase/multiplicative decrease (LIMD) has been the predominant paradigm for congestion control, because it has been shown in [1] that LIMD is the only linear congestion control paradigm that is guaranteed to converge to fairness from an arbitrary start state for tail drop routers. Briefly, LIMD periodically adapts the sending rate of a connection by gently increasing the rate by an additive constant $\alpha$ upon observing no packet losses (in order to probe for additional bandwidth), and aggressively decreasing the sending rate by a multiplicative constant $\beta$ upon observing packet losses (in order to alleviate congestion). LIMD is known to be robust and tends to provide proportional fairness in a generic network topology [10][13]. In recent years, researchers have perceived the need to introduce router-level mechanisms for ensuring fair packet dropping/marking in the router during congestion, in order to punish non-adaptive senders that fail to react to congestion by throttling down their sending rates. The efforts along this direction include the random early detection (RED) proposal [3] and its variants [5][6], and the more recent random drop switches [4] and CSFQ [9].

¹ A tail drop router typically has a FIFO queue for each output link, and drops incoming packets once the queue becomes full.
Most of these mechanisms provide fair packet marking/dropping in varying degrees, which means that on average, the number of packets of a connection marked/dropped at a router during congestion detection/notification is proportional to its sending rate. In this paper, we assume that fair packet marking/dropping will be implemented in the future Internet. With this assumption in place, we revisit an alternative to LIMD that has previously been discredited for congestion control (in the tail drop router context), which we call the linear increase/loss proportional decrease (LIPD) algorithm. Briefly, LIPD still increases the transmission rate of a connection by an additive constant upon detecting no packet loss, but decreases the sending rate by an amount proportional to the packet loss it has observed upon detecting packet loss. Therefore, the key difference between LIPD and LIMD is that in LIPD, the reduction of the transmission rate of a connection upon congestion is proportional to the fraction of lost packets, while in LIMD, the reduction is independent of the loss fraction. It turns out that LIPD overcomes a number of problems associated with LIMD. Specifically, compared to LIMD, LIPD has a smaller band for the variation of the transmission rate, is less sensitive to random loss due to non-congestion reasons such as channel error, misrouting, or link failures, achieves higher efficiency, and is less susceptible to loss synchronization. In addition, if each router implements fair packet marking/dropping, we show that LIPD preserves the fairness property of LIMD and is robust against short-term deviations from ideal fair marking/dropping.

However, plain LIPD is also less sensitive to the onset of sudden congestion (e.g., when a number of new flows join), because it does not react to packet drops as aggressively as LIMD; it thus takes longer to converge to the fairness point and causes more packet drops. In order to overcome these limitations, we further refine LIPD by incorporating the notion of congestion history into the congestion control algorithm. By maintaining the history of the transmission rate and packet loss, the resulting algorithm, LIPD/H, is able to react as aggressively as LIMD if the congestion loss sustains over successive windows, and to throttle the transmission rate gently upon random loss or probe loss. In essence, LIPD/H is able to predict the cause of packet loss (random packet loss, channel capacity probe loss, or congestion loss) and react appropriately. Note that although LIPD/H is proposed to work with both fair dropping and fair marking routers, it performs more effectively in a network of fair marking routers, in the sense that it leads to a minimal number of dropped packets when early congestion detection is in place. In essence, LIPD/H serves as a congestion avoidance algorithm in this case.

The rest of the paper is organized as follows. Section II describes the network model. Section III discusses the design goals for congestion control. Section IV motivates our design choices of LIPD and maintaining congestion history. Section V presents both the LIPD and LIPD/H algorithms and their properties. Section VI evaluates the design of LIPD and LIPD/H through simulations, and Section VII concludes this paper.

II. NETWORK MODEL

We are interested in a packet switched network where all network routers provide fair packet dropping/marking with early congestion notification/detection. The sender of each flow executes a congestion control algorithm to adjust its transmission rate upon packet loss/marker feedback. The evolution of the transmission rate of each flow occurs over successive windows called epochs. The receiver computes packet losses/markers at the end of each epoch, and sends a congestion acknowledgment to the sender containing this information. The sender reacts to the loss/marker feedback signal by adjusting its sending rate according to its congestion control algorithm.

A. Fair Packet Dropping

One key requirement of our framework is that all network routers must provide fair packet dropping. Ideal fair dropping is defined as follows:

Definition II.1: (Ideal Fair Packet Dropping) A router provides ideal fair packet dropping if the number of packet drops that a flow receives from this router during congestion is proportional to its sending rate. Namely, packet dropping for two flows $i$ and $j$ during congestion at time $t$ satisfies:

$$\frac{l_i(t)}{l_j(t)} = \frac{r_i(t)}{r_j(t)} \qquad (1)$$

where $l_i(t)$ is the number of dropped packets for flow $i$, and $r_i(t)$ is its sending rate.

Remark II.1: Fair packet dropping can be achieved via fair queueing, by maintaining per-flow state with the same buffer bounds. In most router implementations, the above ideal fair dropping is not perfectly achieved.
Practical fair packet dropping is specified as follows:

Definition II.2: (Practical Fair Packet Dropping) The packet loss $l_i(t)$ that a flow $i$ receives at epoch $t$ during congestion is given by:

$$l_i(t) = \bar{l}_i(t) + w_i(t) \qquad (2)$$

where $\bar{l}_i(t)$ is the ideal fair dropping given by (1), and $w_i(t)$ is an independent zero-mean, finite-variance random variable, i.e., $E(w_i(t)) = 0$ and $\mathrm{Var}(w_i(t)) < \infty$.

Remark II.2: The above definition requires only that the expected value of packet drops be fair. In practice, one way to achieve practical fair dropping is for a router, upon detecting congestion, to drop each incoming packet (irrespective of which flow it belongs to) with an equal probability $p$ (a small sketch of this construction appears at the end of this section). This covers several variants of RED [3][5][6], random drop switches [4], and CSFQ [9]. Interestingly, most TCP performance analyses to date [12][14] are performed using such a loss model. Our proposed algorithm works with both types of fair dropping routers, as shown in Section V.

B. Packet Loss Behaviors

Ideally, we would like all packet losses to be congestion induced, since most congestion control algorithms to date [1][8] react upon packet loss and do not differentiate packet losses. However, in practice, packet losses may occur for three reasons: (a) congestion losses due to the reduction in the available bandwidth of the connection, (b) probe losses due to the additive increase in rate of the congestion control algorithm beyond its sustainable value, and (c) random losses due to non-congestion, non-probe related reasons such as channel error, misrouting, or link failures. A well-designed congestion control algorithm should be able to act appropriately upon different losses, i.e., throttle the transmission rate gently upon small random loss or probe loss, and more aggressively upon congestion loss.

C. Early Congestion Detection and Fair Packet Marking

If early congestion detection is in place in a network and each router provides fair packet marking (in contrast to dropping), our proposed algorithms LIPD and LIPD/H are equally applicable. They actually perform even more effectively, in the sense that the number of packet drops would be minimal. In essence, LIPD and LIPD/H work as congestion avoidance algorithms in such scenarios.
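To make Remark II.2 concrete, the following is a minimal sketch (not from the paper; the class and parameter names are ours) of a practical fair-dropping router: during congestion it drops every arriving packet with the same probability $p$, so a flow's expected drop count is proportional to its sending rate, which is exactly the sense in which Definition II.2 is fair on average.

```python
import random

class FairDropRouter:
    """Sketch of practical fair dropping (Definition II.2): during
    congestion, drop each incoming packet with the same probability p,
    irrespective of which flow it belongs to."""

    def __init__(self, drop_prob):
        self.p = drop_prob        # drop probability applied under congestion
        self.congested = False

    def forward(self, num_packets):
        """Return (delivered, dropped) for a burst of one flow's packets."""
        if not self.congested:
            return num_packets, 0
        dropped = sum(1 for _ in range(num_packets) if random.random() < self.p)
        return num_packets - dropped, dropped

# Expected drops scale with the sending rate: E[l_i] = p * r_i, so a flow
# sending twice as fast sees roughly twice as many drops.
router = FairDropRouter(drop_prob=0.05)
router.congested = True
for rate in (100, 200):
    delivered, dropped = router.forward(rate)
    print(rate, dropped)
```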

III. DESIGN GOALS FOR CONGESTION CONTROL

In this section, we identify several design goals for congestion control.

A. Fairness

Fairness dictates that the distribution of the total rate allocation among flows be fair by some fairness criterion. Popular fairness criteria include Jain's fairness index over a single bottleneck link [1], the classical max-min fairness [13], and the more recent proportional fairness [10]. In particular, proportional fairness is defined as:

Definition III.1: (Proportional Fairness) [10] A feasible rate allocation $x^*$ in a network is proportionally fair if, for any other feasible rate allocation $x$, the aggregate of proportional rate changes with respect to $x^*$ is non-positive:

$$\sum_{r \in R} \frac{x_r - x_r^*}{x_r^*} \le 0 \qquad (3)$$

B. Efficiency and Rate of Convergence

Besides fair rate distribution among flows, another requirement is the efficiency of a resource. The efficiency of a link is measured by the closeness of the total rate allocation to its link capacity. Notice that although two algorithms can both be fair, their efficiency can be quite different, as shown later for LIMD and LIPD. In addition to efficiency, we also require the converging process to be fast, as measured by the rate of convergence. The rate of convergence is the speed with which a congestion control algorithm approaches the optimal fairness point from any initial state.

C. Reaction to Packet Losses

Section II-B illustrates the different packet losses in the network and the need to treat them accordingly. However, it is almost impossible to deduce the exact cause of a packet loss with the restricted information that the sender has. Therefore, it makes more sense to classify packet losses on the basis of the congestion control response that must be generated as a result of the loss. Three possible responses, with varying degrees of rate throttling, are:
1. No reduction in transmission rate: appropriate if the sender experiences random loss when its transmission rate is less than the available connection capacity.
2. Gentle reduction in transmission rate: appropriate if the sender experiences probe loss when its transmission rate is at or around the available capacity.
3. Aggressive reduction in transmission rate: appropriate if the sender experiences congestion loss when its transmission rate is larger than the available capacity.

IV. TWO KEY DESIGN CHOICES

In this section, we motivate the two design choices that are central to this paper: linear increase/loss proportional decrease (LIPD) adaptation, and maintenance of the history of transmission rate and packet losses. We start with a brief introduction to the classical LIMD algorithm.

A. The Classical LIMD Algorithm

Congestion control algorithms deployed in the real network today typically use the linear increase/multiplicative decrease (LIMD) paradigm. In LIMD, the sender monitors the packet loss observed at the receiver. Periodically, the sender adjusts its sending rate $r$ based on the loss feedback $f$ from the receiver². If $f = 0$, then $r \leftarrow r + \alpha$. If $f > 0$, then $r \leftarrow r(1 - \beta)$ (a short sketch of this update appears below). LIMD has several attractive properties, in particular its fairness. It has been shown in [1] that LIMD is the only congestion control paradigm that is guaranteed to converge to fairness from an arbitrary start state for tail drop routers, as illustrated by Figure 1(a). The figure shows the trajectory of rate allocation in LIMD congestion control with $\beta = 0.5$, which eventually oscillates about the optimal point along the fairness line.

² The loss feedback $f$ is the fraction of packets lost at the receiver, and is available from end-to-end feedback (see Section II).
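Read concretely, the LIMD rule above amounts to the following per-epoch update (a hedged sketch; the default $\beta = 0.5$ matches the conventional setting used in this paper's simulations):

```python
def limd_update(rate, loss_fraction, alpha=1.0, beta=0.5):
    """Linear Increase / Multiplicative Decrease, applied once per epoch.

    Note that the decrease ignores *how much* was lost: any nonzero loss
    feedback triggers the same fixed multiplicative back-off.
    """
    if loss_fraction == 0:
        return rate + alpha        # no loss: probe for more bandwidth
    return rate * (1.0 - beta)     # loss: back off by the fixed factor beta
```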
While the convergence property of LIMD is certainly attractive, LIMD has many well-cataloged drawbacks, including low efficiency, high sensitivity to random loss, large variations of the instantaneous transmission rate, large fluctuations in buffer utilization, and loss synchronization among flows [2].

Fig. 1. Convergence properties of (a) LIMD and (b) LIPD (with fair packet dropping): trajectories of the rate allocations of two flows relative to the fairness and efficiency lines.

B. The Proposed LIPD Algorithm for Congestion Control

In order to overcome the drawbacks of LIMD, we propose the LIPD algorithm. In LIPD, the sender periodically adjusts its sending rate $r$ based on the loss feedback from the receiver. If $f = 0$, then $r \leftarrow r + \alpha$. To decrease the rate, the fraction of packets lost during the last measurement period is used: if $f > 0$, then $r \leftarrow r - f \cdot r$ (sketched below). The key difference between LIPD and LIMD is that in LIPD, the reduction of the transmission rate of a connection upon congestion is proportional to the fraction of lost packets, while in LIMD, the reduction is independent of the loss fraction. It turns out that LIPD overcomes a number of problems associated with LIMD. Specifically, LIPD is less sensitive to random packet loss, leads to significantly higher link utilization when a small number of flows shares a link, does not cause large variations in transmission rate or buffer utilization, and is less susceptible to loss synchronization. In addition, if each router implements fair packet dropping/marking, LIPD preserves the fairness property of LIMD, which we illustrate through Figure 1(b).
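The corresponding LIPD update, as described above and formalized in Section V-A.1 with the lower bound $\delta x$, reads as follows (a sketch; the default constants are assumptions consistent with the simulation settings):

```python
def lipd_update(rate, loss_fraction, alpha=1.0, delta=0.5):
    """Linear Increase / Loss Proportional Decrease, once per epoch.

    The decrease sheds exactly the observed loss fraction f, but is
    lower-bounded by delta * rate so that one noisy measurement cannot
    collapse the sending rate (Section V-A.1).
    """
    if loss_fraction == 0:
        return rate + alpha                    # no loss: probe upward
    return max(rate * (1.0 - loss_fraction),   # shed what was lost
               rate * delta)                   # but never below delta * rate
```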

In Figure 1(b), $x(0)$ represents the initial rate allocations of flow 1 and flow 2. Both flows increase their sending rates linearly until the system reaches $x(1) = (x_1, x_2)$. At this point, the expected packet loss that each flow observes is in proportion to its sending rate if all routers implement fair packet dropping. Thus, flow 1 sees an expected loss of $\frac{x_1}{x_1 + x_2} \cdot n_d$ packets, and flow 2 sees an expected loss of $\frac{x_2}{x_1 + x_2} \cdot n_d$ packets, where $n_d$ is the total packet loss³. This brings the system to $x(2)$, which is closer to the fairness line. By repeating this cycle, we see that the system converges to the fairness line as in the LIMD case (see the two-flow simulation sketch below). In Section V, we give a formal specification of this fairness property and show that the convergence of LIPD is robust against short-term deviations from ideal fair dropping. However, because LIPD does not react to packet drops as aggressively as LIMD, it is also less sensitive to the onset of sudden congestion, causes more packet drops, and takes longer to converge to the fairness point (see Figures 2(a) and (b)). These drawbacks of LIPD motivate the maintenance of congestion history, as described below.

³ For a connection that passes through multiple fair dropping routers, the expected loss differs from the simple formula described above. But LIPD is proportionally fair in this case as well, as shown in Section VI.

C. Maintaining Congestion History

In this section, we show that there is a fundamental conflict in achieving both efficiency and fairness, for both LIMD and LIPD, so long as they do not distinguish the cause of packet loss and react appropriately. Then we describe a history mechanism to resolve this conflict. In LIMD, if the multiplicative constant $\beta$ is large (e.g., 0.5), then LIMD reacts fast to congestion loss, but is not efficient in the presence of probe loss and random loss. If $\beta$ is small (e.g., 0.05), then LIMD will be efficient and react to random/probe loss well, but it will not alleviate congestion quickly and may observe many rounds of packet losses before converging eventually. In LIPD, by throttling linearly in proportion to the observed loss, LIPD is efficient and reacts to random loss well, but it does not converge to fairness as quickly as LIMD with large $\beta$, and does not throttle down the rate quickly enough to alleviate congestion without losing many packets. In essence, both LIMD and LIPD suffer from the consequences of not distinguishing the cause of packet loss and reacting appropriately. As a result of their design choices, LIMD becomes less efficient and LIPD observes more packet loss upon congestion.

However, the above dilemma also sheds some light on the solution. Consider the behavior of LIPD during congestion: it does not throttle the sending rate sufficiently, and is thus likely to experience packet losses in successive epochs. On the other hand, if there is no congestion, then packet losses, due to either probe loss or random loss, are less likely to occur in successive epochs. While these observations are by no means precise, they serve as the motivation for maintaining a history of packet losses and sending rates in the recent past: if losses are observed in successive epochs, the sender should start to throttle more aggressively in response to packet loss. Otherwise, the sender can react gently to packet loss.

Fig. 2. Expected behavior of the sending rate relative to the available bandwidth: (a) LIMD; (b) LIPD; (c) LIPD/H.
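As a companion to the Figure 1(b) argument, here is a minimal two-flow simulation sketch under ideal fair dropping (Definition II.1): on overload both flows lose the same *fraction* of their packets, so the faster flow sheds more in absolute terms; the additive increase then pushes the allocation toward the fairness line. The capacity, start rates, and epoch count are arbitrary illustrative values.

```python
def simulate_two_flows(x1=2.0, x2=10.0, capacity=100.0, alpha=1.0, epochs=500):
    """Two LIPD flows sharing one ideally fair-dropping bottleneck link."""
    for _ in range(epochs):
        total = x1 + x2
        if total <= capacity:          # no loss: both flows probe upward
            x1 += alpha
            x2 += alpha
        else:                          # fair dropping: equal loss fraction
            f = (total - capacity) / total
            x1 *= (1.0 - f)
            x2 *= (1.0 - f)
    return x1, x2

# The rates end up nearly equal (close to capacity/2 each): the additive
# phase shrinks the gap while the proportional phase preserves the ratio.
print(simulate_two_flows())
```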
By incorporating congestion history into LIPD, we arrive at a new congestion control algorithm, which we call LIPD/H. LIPD/H preserves all the appealing properties of LIPD; moreover, it is able to predict the cause of packet loss and react appropriately. In particular, it reacts as aggressively as LIMD to congestion (see Figure 2(c)).

V. ALGORITHMS FOR CONGESTION CONTROL

In this section, we present our congestion control algorithms in a rate-based framework. Since the goal of this paper is to evaluate the fundamental behavior of LIPD and congestion history, we have not cast our algorithms in a specific implementation framework such as TCP. Moreover, we do not consider the startup behavior in detail, though we use slow start for newly arriving flows in the simulations of Section VI. In Section V-A, we first describe the basic LIPD algorithm and illustrate its properties in terms of fairness, rate of convergence, robustness to short-term deviations from fair dropping, efficiency, and steady state throughput in the presence of random loss. We then describe the design of the congestion history algorithm in Section V-B.1, based on which a refined LIPD/H algorithm is presented in Section V-B.2.

In all algorithms, we adopt the following notation. Given a flow $i$, let $x_i(t)$ be its transmission rate during epoch $t$, $f_i(t)$ the fraction of loss in epoch $t$, $\alpha$ the linear increase constant, $\gamma$ the proportional decrease constant, and $\delta$ the multiplicative constant below which the sending rate will not decrease across epochs. In our simulations, we set $\alpha$ to 1 rate unit⁴, $\gamma = 1$, and $\delta = 0.5$ as in LIMD. For analysis, we consider a network with a set of links $L$. Each link $l \in L$ has a capacity $C_l > 0$, and a number of flows compete for access to each link. A flow with route $r$ may also be identified as $r$ by a slight abuse of notation. The set of routes is $R$. We write $l \in r$ when route $r$ passes through link $l$. In the analysis, we assume fluid flows and ignore the problems of granularity due to packet size, as in [10][13]. Moreover, we do not explicitly address the effect of buffering at each router.

⁴ A rate unit is $1/T_{epoch}$, where $T_{epoch}$ is the duration of an epoch.

A. The Basic LIPD Algorithm and its Properties

A.1 The Basic LIPD Algorithm

As described in Section IV, LIPD increases the transmission rate $x_i$ by a constant $\alpha$ if no packet was lost in the last epoch, and throttles the transmission rate in proportion to the fraction of packet loss, lower bounded by a multiplicative factor $\delta$ of the transmission rate, if packets were lost in the last epoch. More precisely,

$$x_i(t+1) = \begin{cases} x_i(t) + \alpha & \text{if } f_i(t) = 0 \\ \max\{(1 - f_i(t))\,x_i(t),\; \delta\, x_i(t)\} & \text{if } f_i(t) > 0 \end{cases}$$

In general, $f_i(t)$ is a nonlinear function of $x$. For a flow $i$ with route $r$, $f_i$ can be taken to be the sum of the packet dropping fractions $p_l(y_l)$ over the links $l \in r$ along the route, where $y_l$ is the total rate allocation at link $l$. Therefore, we have $f_i(t) = \sum_{l \in r} p_l\big(\sum_{j:\, l \in r_j} x_j(t)\big)$.

A.2 LIPD Properties

In this section, we describe the properties of LIPD in terms of fairness, rate of convergence, robustness to imperfect fair dropping and delayed feedback information, efficiency, and throughput in the presence of random loss.

1. Convergence and Fairness

With fair packet dropping routers, flows that transmit at a higher rate experience more packet drops and hence throttle their rates by a larger absolute value; therefore LIPD can converge asymptotically to fairness. In the following, we show that this is true even in the cases of imperfect fair dropping and delayed feedback information.

The LIPD algorithm of Section V-A.1 describes a nonlinear switching system, which switches between the increase phase and the decrease phase over time, but oscillates around a stable point, which is of interest to us. In general, it is hard to analyze directly. In order to characterize LIPD's convergence and stability properties, we approximate the behavior of LIPD through the following system⁵ (similar approaches have been adopted in the literature for LIMD analysis by Kelly and others [10]):

$$x_i(t+1) = x_i(t) + \alpha(t) - f_i(t)\big(\gamma(t)\alpha(t) + x_i(t)\big) - w_i(t) \qquad (4)$$

where we model the packet dropping according to Definition II.2 of Section II, and the drop fraction as $f_i(t) = \sum_{l \in r_i} p_l\big(\sum_{j:\, l \in r_j} x_j(t)\big)$. Note that compared to the LIPD algorithm of Section V-A.1, the above system (4) models the increase phase (when $f_i(t) = 0$ and $w_i(t) = 0$) accurately, but does not model the decrease phase precisely. Specifically, (4) introduces a modeling error $(1 - f_i(t)\gamma(t))\,\alpha(t)$ in a decrease phase; we reduce the effect of this modeling error through an appropriate choice of the gain factor $\gamma(t)$.

⁵ For simplicity of presentation, we do not consider the lower-bound multiplicative term $\delta x_i(t)$ here. A more involved analysis shows that the convergence and fairness properties still hold for the system that incorporates this term.

Case (A): Ideal Fair Packet Dropping. We consider the case of ideal fair dropping by Definition II.1 of Section II. We assume that RTTs are negligible.

Proposition V.1: (Fairness with an ideal fair dropping router) Consider a single bottleneck link with ideal fair dropping by Definition II.1. For $n$ flows that use the LIPD algorithm of Section V-A.1 and share the bottleneck link, given any initial rate allocation $x(0) = (x_1(0), x_2(0), \ldots, x_n(0))$, the sending rates converge to the fair allocation:

$$x_i(t) - x_j(t) \to 0 \quad \text{as } t \to \infty, \text{ for all } i, j \qquad (5)$$

$$J(x(t)) = \frac{\big(\sum_i x_i(t)\big)^2}{n \sum_i x_i(t)^2} \to 1 \quad \text{as } t \to \infty \qquad (6)$$

Proof: By combining arguments similar to Figure 1(b) and [1], we can show that Jain's fairness index $J(x) \to 1$.
Given a congestion control law $r_i(t+1) = g_i(t)\, r_i(t)$, Jain's fairness index at epoch $t+1$ is given by

$$J(x(t+1)) = \frac{\big(\sum_{i=1}^{n} g_i(t)\, r_i(t)\big)^2}{n \sum_{i=1}^{n} \big(g_i(t)\, r_i(t)\big)^2}$$

and one can write $J(x(t+1)) = J(x(t)) + \Delta(t)$, where $\Delta(t)$ collects the change in the index induced by the rate updates. Hence, a sufficient condition to ensure $J(x(t+1)) \ge J(x(t))$ is $\Delta(t) \ge 0$, with $J(x(t+1)) = J(x(t))$ when $\Delta(t) = 0$. For LIPD, the update rule is given more specifically by

$$r_i(t+1) = \begin{cases} r_i(t) + \alpha & \text{if no loss at } t \\ r_i(t)\,(1 - f_i(t)) & \text{if loss } f_i(t) > 0 \text{ at } t \end{cases}$$

For a fair dropping gateway, it is easy to derive that the number of dropped packets satisfies $l_i(t) = r_i(t)\, f_i(t) = r_i(t)\, p(t)$, where $p(t) = n_d / C$, with $n_d$ the total number of dropped packets and $C$ the link capacity. Hence $f_i(t) = p(t)$, which is a flow-independent factor. Therefore, it is easy to see that $\Delta(t) > 0$ during an increase phase (equal additive increases push the allocation toward equality) and $\Delta(t) = 0$ during a decrease phase (all rates are scaled by the common factor $1 - p(t)$, which leaves $J$ unchanged). Hence $J(x(t))$ is monotonically non-decreasing; since it is also upper bounded by 1, i.e., $J(x(t)) \le 1$, we have $J(x(t)) \to 1$ as $t \to \infty$. □

Case (B): Practical Packet Dropping of Definition II.2. We now analyze the convergence of the LIPD algorithm (4) under imperfect fair dropping by Definition II.2. We neglect the effect of different RTTs for the moment. To study this, the following associated ordinary differential equation (ODE) for the LIPD algorithm (4) proves to be very useful (a numeric integration sketch appears at the end of this subsection):

$$\frac{dx_i}{dt} = \alpha(t) - \Big(\sum_{l \in r_i} p_l\big(\textstyle\sum_{j:\, l \in r_j} x_j(t)\big)\Big)\big(\gamma(t)\alpha(t) + x_i(t)\big) \;\triangleq\; h_i(x) \qquad (7)$$

Proposition V.2: Consider a generic network topology with practical fair dropping routers by Definition II.2. For the LIPD algorithm (4), if the parameters are chosen such that $\alpha(t) > 0$, $\lim_{t\to\infty}\alpha(t) = \alpha_0 > 0$ and $\gamma(t) > 0$, $\lim_{t\to\infty}\gamma(t) = \gamma_0 > 0$, then for sufficiently small $\alpha_0 > 0$, given any arbitrary $\epsilon > 0$, there exists a constant $\eta(\alpha_0, \gamma_0, \epsilon)$ such that

$$\limsup_{t\to\infty} P\big(\|x(t) - x^*\| \ge \epsilon\big) \le \eta(\alpha_0, \gamma_0, \epsilon) \qquad (8)$$

where $\eta(\alpha_0, \gamma_0, \epsilon)$ tends to zero as $\alpha_0$ and $\gamma_0$ tend to zero.

Remark V.1: Proposition V.2 states that for small constant gain parameters $\alpha$ and $\gamma$, the LIPD algorithm (4) converges to a globally stable point $x^*$ in probability.

Proof: The proof is based on the ODE method for the analysis of randomized algorithms [11], and proceeds in two steps. The LIPD algorithm described above is of the form

$$x(n+1) = x(n) + \epsilon(n)\, H\big(x(n), \xi(n+1)\big)$$

where $H(\cdot)$ is a smooth function, $\xi(n+1)$ is a random variable representing the on-line observation of the system, and $\epsilon(n)$ is a gain factor. Its convergence behavior can be characterized by studying the associated ODE $\frac{dx}{dt} = \bar{H}(x)$ with $\bar{H}(x) = E\big[H(x, \xi)\big]$. The associated ODE for the LIPD algorithm (4) is exactly (7), where we use the fact that $E(w_i(t)) = 0$.

We first show that the ODE (7) is globally stable, with a unique stable equilibrium point $x^*$. To do this, consider the following Lyapunov function:

$$V(x) = \sum_{i=1}^{N} \alpha \log\big(\gamma\alpha + x_i\big) - \sum_{l \in L} \int_0^{\sum_{j:\, l \in r_j} x_j} p_l(y)\, dy$$

It is easy to show that $V(x)$ is a strictly concave function of $x$, achieves a maximum at some $x^* \ge 0$, and that the maximum point is unique. Moreover,

$$\frac{dV}{dt} = \sum_{i=1}^{N} \frac{\partial V}{\partial x_i}\, \frac{dx_i}{dt} = \sum_{i=1}^{N} \big(\gamma\alpha + x_i\big) \left(\frac{\alpha}{\gamma\alpha + x_i} - \sum_{l \in r_i} p_l\Big(\textstyle\sum_{j:\, l \in r_j} x_j\Big)\right)^{2} \;\ge\; 0$$

Hence $V(x(t))$ is non-decreasing along trajectories, and strictly increasing except at the equilibrium. By Lyapunov theory, the ODE is globally stable and has a unique stable point $x^*$. Note that $0 \le \sum_{l \in r_i} p_l(\cdot) \le 1$ and $\mathrm{Var}(w_i(t)) < \infty$; then, by Theorem 3 (Chapter 2) of [11], we conclude the proof. □

Corollary V.1: In the limiting case, the stable point of the above algorithm (4) approximates the proportional fairness of (3).

Proof: From the proof of Proposition V.2, we know that the stable point $x^*$ maximizes the Lyapunov function $V(x)$. In the limiting case, if $p_l(y)$ is chosen as a step function, i.e., $p_l(y) = 0$ if $y < C_l$ and $p_l(y) = 1$ if $y \ge C_l$, then $x^*$ maximizes

$$V(x) = \sum_{i=1}^{N} \alpha \log\big(\gamma\alpha + x_i\big) \quad \text{subject to} \quad \sum_{i:\, l \in r_i} x_i \le C_l, \;\; l \in L$$

Note from Proposition V.2 that convergence occurs when $\alpha$ and $\gamma$ are sufficiently small. Then, using arguments similar to [10], the result follows readily. □

Case (C): Delayed Feedback Information. In both Cases (A) and (B), we have neglected the effect of the loss feedback delay and RTTs. The following proposition shows that the convergence and fairness properties of LIPD still hold when these are taken into consideration.

Proposition V.3: (Delayed Information Feedback) In the setting of Proposition V.2, consider the following LIPD algorithm with delayed loss feedback:

$$x_i(t+1) = x_i(t) + \alpha(t) - \Big(\sum_{l \in r_i} p_l\big(\textstyle\sum_{j:\, l \in r_j} x_j(t - \tau_l)\big)\Big)\big(\gamma(t)\alpha(t) + x_i(t)\big) - w_i(t - \tau_l) \qquad (9)$$

where $\tau_l$ models the delayed information about flow $i$'s packet dropping over link $l$. Then (9) converges to a unique globally stable point $x^*$. In the limiting case, the stable point $x^*$ approximates the proportional fairness of (3).

Proof: Consider the associated ODE

$$\frac{dx_i}{dt} = \alpha(t) - \Big(\sum_{l \in r_i} p_l\big(\textstyle\sum_{j:\, l \in r_j} x_j(t - \tau_l)\big)\Big)\big(\gamma(t)\alpha(t) + x_i(t)\big)$$

Using arguments similar to Proposition V.2, we can prove the result. □
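To see the ODE analysis in action, the following sketch integrates (7) with a forward-Euler step for one link shared by two flows; the smooth drop function $p_l(\cdot)$, the gains, and the step size are illustrative assumptions rather than values from the paper. Both components settle at a common value, as Proposition V.2 predicts.

```python
def p_l(y, capacity=100.0, k=0.05):
    """A smooth, increasing link drop function p_l(y); illustrative choice."""
    return max(0.0, k * (y - capacity) / capacity)

def integrate_lipd_ode(x, alpha=1.0, gamma=1.0, dt=0.01, steps=200_000):
    """Forward-Euler integration of dx_i/dt = alpha - f * (gamma*alpha + x_i),
    where f = p_l(sum of rates) is the common drop fraction on the link."""
    for _ in range(steps):
        f = p_l(sum(x))
        x = [xi + dt * (alpha - f * (gamma * alpha + xi)) for xi in x]
    return x

# Starting far apart, the two rates converge to the same equilibrium.
print(integrate_lipd_ode([5.0, 60.0]))
```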
2. Rate of Convergence

The following proposition specifies the convergence rate of the LIPD algorithm (4) through the ODE (7) under imperfect fair dropping by Definition II.2.

Proposition V.4: (Rate of Convergence) In the setting of Proposition V.2, let $x^\epsilon(n)$ be calculated via the LIPD algorithm (4) and $\bar{x}(t_n)$ be computed via the ODE (7), where $t_n = n\epsilon$. Let $\alpha = \epsilon\bar{\alpha}$ and $\gamma = \epsilon\bar{\gamma}$ for some constants $\bar{\alpha}$ and $\bar{\gamma}$. As $\epsilon \to 0$ and $t_n \to \infty$, we have

$$\lim_{n\to\infty} \frac{x^\epsilon(n) - \bar{x}(t_n)}{\sqrt{\epsilon}} \;\sim\; N(0, P)$$

where $N(\mu, \sigma)$ denotes a normal distribution with mean $\mu$ and deviation $\sigma$, and $P \ge 0$ is the unique symmetric solution to

$$h_x(x^*)\, P + P\, h_x^{T}(x^*) + \mathrm{Var}(w) = 0$$

where $h(\cdot)$ is the function specified by (7), $h_x$ is its Jacobian, and $\mathrm{Var}(w)$ is the covariance matrix of the random error $w$.

Proof: Since $0 \le \sum_{l \in r_i} p_l\big(\sum_{j:\, l \in r_j} x_j(t)\big) \le 1$, it is easy to verify that the assumptions of Theorem 2 (Chapter 3) of [11] are valid; hence, the result follows readily. □

3. Efficiency

In general, since LIPD tries to keep the pipe full at all times, its efficiency is close to optimal, and the variations in rate are not severe. Therefore, LIPD improves link efficiency compared to LIMD; this is particularly true if the number of flows sharing a link is small.

Proposition V.5: Consider a single bottleneck link. In the ideal case, if all flows receive the loss signal at the same time, then LIPD is strictly better than LIMD in terms of bottleneck link efficiency.

Proof: In this case, since the fairness index $J(x(t)) \to 1$ as $t \to \infty$, all flows will be synchronized asymptotically. Hence, LIMD can only achieve a link utilization bounded away from capacity (each synchronized multiplicative back-off leaves a fraction of the link idle), while LIPD can achieve close to 100% link utilization. □

Proposition V.6: (Link efficiency for a large number of flows and practical dropping) If the number of flows sharing a link is large enough, then both LIPD and LIMD may achieve link efficiency close to 100% through statistical multiplexing.

4. Throughput and rate variation in the presence of random loss

In this section, we study the steady state throughput of both LIMD and LIPD when there is a loss probability $p$ for each packet of each flow. In this case, $p$ is the limiting factor for rate increase.

Proposition V.7: Let $p$ be the loss probability of each packet. Then the steady state throughput $T_M$ of LIMD is given by

$$T_M = \sqrt{\frac{\alpha(2-\beta)}{2\beta p}}$$

and the steady state throughput $T_P$ of the LIPD algorithm of Section V-A.1 is

$$T_P = \frac{\alpha(1-p)}{p}$$

Proof: For LIMD, the result follows from [12]. For LIPD, at steady state the rate oscillates over a narrow band $[(1-\epsilon_p)\, r,\; r]$, climbing by $\alpha$ in loss-free epochs and shedding the loss fraction $\epsilon_p \approx p$ upon loss; balancing the increase and the decrease over a cycle yields the steady state throughput of LIPD. □

Remark V.2: Note that the steady state rate oscillates between $(1-\beta)\, r_M$ and $r_M$ for LIMD, where $\beta = 0.5$. In LIPD, the rate oscillates between $(1-\epsilon_p)\, r$ and $r$, where $\epsilon_p \approx p$. Since in general $p \ll \beta$, LIPD performs strictly better than LIMD in terms of throughput and rate variation (a numeric illustration of this scaling gap follows at the end of this subsection).

Though the basic LIPD algorithm has many appealing properties as detailed above, it has a severe drawback: since it does not react aggressively to packet drops, more packets are dropped in reacting to congestion, convergence to fairness is slower, and new flows take longer to attain a fair rate allocation. These drawbacks are eliminated by the LIPD/H algorithm described next.
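Taking the steady-state expressions of Proposition V.7 at face value (they are reconstructions here, so treat the constants loosely), the point of Remark V.2 is a scaling gap: LIMD throughput grows like $\sqrt{\alpha/p}$ while LIPD throughput grows like $\alpha/p$ as the random-loss rate $p$ shrinks. A quick numeric sketch:

```python
import math

def limd_throughput(p, alpha=1.0, beta=0.5):
    # O(sqrt(alpha/p)) steady-state scaling for LIMD (cf. [12]);
    # the constant factor is an assumption of this sketch.
    return math.sqrt(alpha * (2.0 - beta) / (2.0 * beta * p))

def lipd_throughput(p, alpha=1.0):
    # O(alpha/p) steady-state scaling for LIPD under per-packet loss p.
    return alpha * (1.0 - p) / p

for p in (1e-2, 1e-3, 1e-4):
    print(f"p={p:g}: LIMD ~ {limd_throughput(p):8.1f}   LIPD ~ {lipd_throughput(p):9.1f}")
```

The gap widens as $p$ falls, which is the analytical counterpart of LIPD's insensitivity to random loss observed in the simulations of Section VI.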
B. LIPD with History (LIPD/H)

B.1 The history mechanism in LIPD/H

In order to react quickly to sudden changes in the available network bandwidth of a flow, we introduce the LIPD/H algorithm. The basic goal of LIPD/H is to distinguish random and channel probing losses from congestion losses, and to react differently to these losses. In LIPD/H, a conscious effort is made to keep track of the progress of the sending rate and loss ratio in each epoch, and hence to track the profile of the effective sending rate, i.e., the estimated sending rate at which packets are successfully transmitted to the receiver. We show later through simulations that maintaining a history of the sending rate and packet loss enables LIPD/H to successfully distinguish the cause of packet loss and react appropriately.

The history in LIPD/H is maintained via three variables: the moving average $\bar{R}$ of the effective sending rate, the average deviation $D$ of the effective sending rate below the average effective sending rate, and the decrease variable $d$. The decrease variable reflects the aggressiveness of the reaction to congestion-induced packet losses: it increases exponentially when the sender perceives packet losses in successive epochs, and is reset to 1 if the sender predicts no congestion. At any time, LIPD/H predicts that, so long as the effective sending rate in an epoch is within (greater than) $\bar{R} - D$, packet losses were either random loss or channel probe loss. If the effective sending rate falls below $\bar{R} - D$, then LIPD/H charges all losses within the $\bar{R} - D$ bound to random/probe loss, and the remainder of the losses to a decrease in the bandwidth of the path. The details of the history algorithm, with effective rate $R$, loss fraction $f$, and smoothing weight $\lambda$, are as follows (a sketch follows the list):

1. $\bar{R} \leftarrow (1 - \lambda)\bar{R} + \lambda R$
2. No loss ($f = 0$): $d \leftarrow 1$
3. Non-congestion induced loss ($f > 0$ and $R \ge \bar{R} - D$): $D \leftarrow (1 - \lambda) D + \lambda (\bar{R} - R)$; $d \leftarrow 1$
4. Congestion related loss ($f > 0$ and $R < \bar{R} - D$): $D \leftarrow (1 - \lambda) D + \lambda (\bar{R} - R)$; $d$ is increased exponentially
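A hedged sketch of the history state just described; the smoothing weight $\lambda$ and the doubling of the decrease variable did not survive in the source, so they are assumptions, while the three-way classification follows rules 1-4 above.

```python
class LipdHistory:
    """History state of LIPD/H (Section V-B.1): a moving average R_avg of
    the effective sending rate, the average deviation dev below it, and
    the decrease variable d."""

    def __init__(self, lam=0.1):          # smoothing weight: assumed value
        self.lam = lam
        self.R_avg = 0.0
        self.dev = 0.0
        self.d = 1.0

    def update(self, effective_rate, lost):
        lam = self.lam
        if not lost:                                     # rule 2: no loss
            self.d = 1.0
        elif effective_rate >= self.R_avg - self.dev:    # rule 3: random/probe loss
            self.dev = (1 - lam) * self.dev + lam * max(self.R_avg - effective_rate, 0.0)
            self.d = 1.0
        else:                                            # rule 4: congestion loss
            self.dev = (1 - lam) * self.dev + lam * (self.R_avg - effective_rate)
            self.d *= 2.0     # "increases exponentially": doubling assumed
        self.R_avg = (1 - lam) * self.R_avg + lam * effective_rate   # rule 1
```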

B.2 The Complete LIPD/H Algorithm

Using the above history mechanism, LIPD/H tries to preserve LIPD's graceful variation of the sending rate under random loss and probe loss, while at the same time throttling the sending rate quickly when the bandwidth of a connection decreases due to a reduction of network resources or the advent of new connections. The complete LIPD/H algorithm is as follows, where $x$ is the sending rate of the last epoch and $f$ its loss fraction:

Case (A): If no packet was lost in the last epoch:
$d \leftarrow 1$; $D \leftarrow (1 - \lambda) D + \lambda \max(\bar{R} - x, 0)$; $\bar{R} \leftarrow (1 - \lambda)\bar{R} + \lambda x$; $x \leftarrow x + \alpha$

Case (B): If there were packet losses in the last epoch, and the effective transmission rate $x(1-f)$ is greater than $\bar{R} - D$, then LIPD/H predicts that the packet losses were due to random loss or channel probing:
If $x(1-f) \ge \bar{R} - D$: $x \leftarrow x(1-f)$; $d \leftarrow 1$; $D \leftarrow (1 - \lambda) D + \lambda (\bar{R} - x)$; $\bar{R} \leftarrow (1 - \lambda)\bar{R} + \lambda x$

Case (C): If there were packet losses in the last epoch, and the effective transmission rate in the last epoch is less than $\bar{R} - D$, then LIPD/H predicts that all losses that cause the effective transmission rate to go below $\bar{R} - D$ are related to a decrease in the available bandwidth:
If $x(1-f) < \bar{R} - D$: $x \leftarrow \max\big((\bar{R} - D) - d\,\big((\bar{R} - D) - x(1-f)\big),\; \delta x\big)$; $d$ is increased exponentially; $D \leftarrow (1 - \lambda) D + \lambda (\bar{R} - x)$; $\bar{R} \leftarrow (1 - \lambda)\bar{R} + \lambda x$

B.3 Properties of LIPD/H

We found that LIPD/H is able to react accurately and quickly to the arrival of new flows. This is mainly due to the fact that, when the number of losses is large, the multiplication of the majority of the losses by the decrease variable takes over, and the throttling becomes exponential. A rough estimate shows that this process converges in $O(\log(x_{old}/x_{new}))$ epochs, where $x_{old}$ and $x_{new}$ are the old and new steady state transmission rates, respectively. Once each flow stabilizes within its mean-and-deviation region, plain LIPD takes effect. All the properties of LIPD described in Section V-A.2 still hold for LIPD/H, as further illustrated by the simulations in the next section.

VI. PERFORMANCE EVALUATION

In this section, we evaluate the performance of the algorithms proposed in Section V through simulations using ns-2 [7]. Throughout the simulations, we have used the following constants: (a) $\alpha = 1$, $\gamma = 1$, and $\delta = 0.5$ for LIPD and LIPD/H, and (b) $\beta = 0.5$ for LIMD and 0.5 for LIMD/H. To simulate fair dropping with randomness in the routers, we use RED, in which the minimum and maximum thresholds were set to 10 and 20 packets, respectively. Finally, we did not include slow start in most of our simulations, except for the scenarios in which new connections start midway through the simulations in Section VI-A.2, since our focus is on the steady-state behavior rather than the startup behavior of connections.

The following metrics are used to compare the performance of the algorithms:
1. Transmission Rate: We plot the transmission rate as a function of time for each connection. The objective is to obtain only small variations in transmission rate close to the optimal value for high efficiency, and a small band in the variation of the transmission rate across all flows for good fairness.
2. Aggregate Service: We plot the aggregate service, in terms of the number of packets received by the receiver, as a function of time for each connection. The objective is to maximize the aggregate service curve, obtain a smooth curve with no significant variations in the gradient, and achieve a small band in the variation of the service among connections that does not expand over time.
3. Fairness: Connections that share a bottleneck link should observe approximately the same instantaneous rates as well as long term rates. We plot Jain's fairness index $J(t) = \big(\sum_i r_i(t)\big)^2 / \big(n \sum_i r_i(t)^2\big)$ [1] over time, and the objective is to keep $J(t)$ close to 1 at all times.
4. Responsiveness: The plots of transmission rates and packet losses over time also show the responsiveness of connections to congestion or to the introduction of new flows. When the available bandwidth of a connection decreases, the objective is to see a quick reduction in the transmission rate with the loss of few packets.
5. Number of Packets Lost: We plot packet loss over time for each of the connections. The objective is to minimize packet loss, though the recovery of packet loss may be achieved efficiently (e.g., SACK in TCP [?]) and may not be perceived by the end user.

In the figures illustrating the performance of each congestion control algorithm, the $x$ axis denotes the time in seconds and the $y$ axis denotes the performance metric of interest (transmission rate, number of packets transmitted, fairness index, or number of packets lost).

A. A Single Network Bottleneck Link Scenario

Figure 3 shows the network topology for the first set of simulations, marked with the link capacities and latencies. The epoch size is set to 65 msec, and the packet size is set to 1 KB. Throughout this section, we identify a connection by its sender node (i.e., $S_1$ instead of $S_1$–$R_1$) when there is no ambiguity. Thus, in this case, there are 10 connections ($S_1, \ldots, S_{10}$) sharing one bottleneck link.

Fig. 3. A simple topology with one bottleneck link: senders $S_1, \ldots, S_{10}$ and receivers $R_1, \ldots, R_{10}$ attach through 4 Mbps, 2 ms access links on each side of a shared bottleneck link with 26 ms latency.
When the available bandwidth for a connection decreases, the objective is to see a quick reduction in the transmission rate with the loss of few packets. 5. Number of Packets Lost: We plot packet loss over for each of the connections. The objective is to minimize the packet loss, though the recovery of packet loss may be achieved efficiently (e.g. SACK in TCP [?]) and may not be perceived by the end user. In the figures illustrating the performance of each congestion control algorithm, the Ü axis denotes the in seconds and the Ý axis denotes the performance metrics of interest (transmission rate, number of packet transmitted, fairness index, number of packet loss). A. A Single Network Bottleneck Link Scenario Figure 3 shows the network topology for the first set of simulations. In the Figure, we also mark the link capacity and latency. The epoch size is set to 65 msec, and the packet size is set to 1 KB. Throughout this section, we will identify connection by their sender node (i.e., Ë ½ instead S2... S1 S1 4 Mbps 2 ms, each 4 Mbps 2 ms, each Mbps, 26 ms R1 R2... R1 Fig. 3. A Simple Topology With One Bottleneck Link

Fig. 4. Transmission rate: (a) LIMD; (b) LIPD.
Fig. 5. Aggregate service: (a) LIMD; (b) LIPD.
Fig. 6. Packet loss: (a) LIMD; (b) LIPD.
Fig. 7. Fairness indices of LIMD and LIPD.
Fig. 8. Effect of random loss: (a) LIMD; (b) LIPD.

A.1 Comparison of LIMD and the plain LIPD

We start all 10 flows at approximately the same time and observe how each connection converges to the steady state and how it behaves during the steady state. Each simulation was run for 200 seconds. We compare the transmission rate, throughput, packet loss, fairness, and effect of random loss for LIMD and the plain LIPD algorithm. For clarity of presentation, we present the results for three randomly selected flows.

1. Transmission Rate

The transmission rates of LIMD and LIPD are plotted in Figure 4. As shown in Figure 4(a), LIMD has large variations of the transmission rate, with the maximum and minimum rates averaging at about 14 packets per second (pps) and 4 pps, respectively. On the other hand, as Figure 4(b) shows, LIPD exhibits a tight band of transmission rate variation, with the maximum and minimum transmission rates averaging around 12 pps and 9 pps, respectively. Overall, the average transmission rates of LIMD and LIPD are around 9 pps and 11 pps, respectively. This result vindicates our claim that LIPD is superior to LIMD in measuring and maintaining its transmission rate close to the optimal value. In addition to the transmission rate variation, we also observe the synchronization of loss in Figure 4(a), which is one of the well-known drawbacks of LIMD. Although, in this particular case, the behavior of LIMD is aggravated by the fact that all the flows started at approximately the same time, the loss synchronization problem is much less perceptible in LIPD (Figure 4(b)).

2. Throughput

Figure 5 shows the number of packets successfully transmitted using LIMD and LIPD, where the slope of each curve represents the throughput of the corresponding flow. Ideally, with a fair congestion control algorithm, we expect the slope of every flow sharing the same link to be identical. From the figures, we observe that the divergence of the slopes with LIMD is greater than that with LIPD. We also observe that the total number of packets received by the receivers using LIPD is higher (around 20,000 packets) than with LIMD (around 15,000 packets).

3. Packet Loss

Figure 6 shows the number of packets lost with LIMD and LIPD. We observe that the total number of packet losses with LIPD is greater than with LIMD. This is mainly due to the small oscillations of the LIPD rate around the optimal point. In other words, since LIPD tries to maintain its transmission rate close to the available bandwidth, it frequently increases the rate beyond the available bandwidth and sees packet drops more often. Also, the near-optimal utilization of network bandwidth results in frequent packet drops in the RED routers. This result suggests that LIPD can be better utilized with a non-packet-dropping fair feedback mechanism such as explicit congestion notification (ECN) routers.

4. Fairness

Figure 7 shows the evolution of the fairness indices of LIMD and LIPD over time. The fairness index of LIPD remains close to 1.0, which is the optimal value, at all times, whereas that of LIMD varies between 0.7 and 1.0. This means that though both algorithms are approximately long-term fair, LIPD exhibits better short-term fairness than LIMD. This results from the smaller variation of the transmission rate of LIPD (a snippet computing the index from sample rates follows below).
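For reference, the index plotted in Figure 7 is Jain's fairness index from metric 3 of Section VI:

```python
def jain_index(rates):
    """Jain's fairness index (sum r_i)^2 / (n * sum r_i^2); 1.0 is perfectly fair."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

print(jain_index([9, 9, 9]))    # 1.0: identical rates
print(jain_index([14, 9, 4]))   # about 0.83: spread-out rates are less fair
```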

5. Effects of Random Loss

Figure 8 shows the behavior of LIMD and LIPD under random loss. The probability of packet loss was uniformly distributed with an average loss rate of 0.1%. In order to avoid side effects from the loss of acknowledgments, we assumed that there is no random loss in the acknowledgment channel. With random loss, the average transmission rate of LIMD (around 7 packets per second) is much smaller than in the steady state case. Although the band of the transmission rate variation of LIMD is no wider, the fairness index is considerably smaller in this case due to the aggressive decrease of the transmission rate upon packet loss (not shown due to space constraints). On the other hand, the performance of LIPD does not degrade much with the introduction of random loss. Although the transmission rate variation is slightly larger than in the absence of random loss, the average transmission rate and the fairness index remain almost the same (fairness index not shown due to space constraints).

Overall, when the network remains in steady state, we observe that LIPD performs better than LIMD in all the performance criteria other than the number of packet losses. We also observed that both algorithms have the self-healing property that no single flow can consume more than its fair share in the long term, even if there exists short-term unfairness among flows. Additional simulations with a larger number of flows exhibited the same behavior.

A.2 Efficacy of History

In this section, we evaluate the efficacy of history by comparing the basic algorithms (LIMD and LIPD) with the more sophisticated algorithms with history (LIMD/H and LIPD/H) in the same simple network of Figure 3. In particular, we are interested in how these algorithms react to sudden congestion induced by the start of new flows. We first start five flows at time 0, and then start five new flows at time 20, when the ongoing flows have reached steady state. For the simulations in this section, we implemented slow start (similar to TCP [8]) for new flows, since we are interested not only in the steady state behavior but also in the convergence characteristics of each algorithm.

Figures 9 and 10 show the transmission rate changes and throughput performance of LIMD and LIMD/H. For clarity, we present the behavior of four randomly selected flows, two of which start early and two of which start later. In the case of LIMD, because of the aggressive nature of its decrease, the existing flows react fast and reduce their transmission rates quickly. However, this responsiveness comes at the price of large oscillations during the steady state. On the other hand, we observe that LIMD/H achieves smaller transmission rate variations than LIMD, while achieving approximately the same degree of responsiveness to the sudden bandwidth reduction in the network. Thanks to the smaller oscillations, LIMD/H also achieves higher throughput than LIMD.

Fig. 9. Transmission rate: (a) LIMD and (b) LIMD/H.
Fig. 10. Throughput performance (number of packets received): (a) LIMD and (b) LIMD/H.

Figures 11 and 12 show the transmission rate and throughput performance of LIPD and LIPD/H, respectively. Again, we only show four randomly selected flows, two of which start early and two of which start later.
The results with LIPD indicate that the existing flows give up their resources more slowly than with LIMD. Hence, it takes more time for the new flows to catch up with the ongoing flows. The poor responsiveness of LIPD is due to its mild transmission rate decrease policy. However, with the introduction of the progressively more sophisticated history mechanism, we see better responsiveness with LIPD/H. As shown in the figure, LIPD/H reacts to the onset of congestion as fast as LIMD. In essence, we found that LIPD/H successfully encapsulates the efficiency and fairness of LIPD and the responsiveness of LIMD. Specifically, LIPD/H achieves throughput performance comparable to LIPD in steady state and also has a fairness index of approximately 1.0. In addition, the history information about the transmission rate and the loss profile enables LIPD/H to distinguish different causes of packet loss and react quickly upon perceiving congestion. In the next section, we simulate the behavior of LIPD/H in larger network topologies to show that the fairness property of LIPD/H scales.

B. Performance in a Multiple Link Network

In this section, we show that the fairness and efficiency of LIPD/H are retained in generic network topologies

Fig. 11. Transmission rate: (a) LIPD and (b) LIPD/H.
Fig. 12. Throughput performance: (a) LIPD and (b) LIPD/H.

through a large network configuration scenario. The network configuration, shown in Figure 13, is composed of 7 backbone links with different link bandwidths and latencies. There are 11 flows in the network sharing the backbone with one another. We assume shortest path routing for each flow. With this configuration, we show that LIPD/H retains the fairness property, presented in the previous sections, in a larger network configuration.

Fig. 13. Complex network topology with different round trip times: 11 sender/receiver pairs ($S_1$–$R_1$ through $S_{11}$–$R_{11}$) attached to 7 numbered backbone links with bandwidths ranging from 500 Kbps to 5 Mbps and latencies from 10 ms to 70 ms; all other links are 4 Mbps with 10 ms latency.

Figure 14 shows the transmission rate variation and the throughput performance of LIPD/H in this configuration. The bandwidth of one flow is limited by the global bottleneck link 5–R on its path. A group of flows, including $S_1$, shares link 4–5 with that flow; since it is already constrained by link 5–R, they share the remaining bandwidth of link 4–5 fairly. On the other hand, two flows, one of them $S_{11}$, share link 2–4 with a flow whose bandwidth is already constrained at link 4–5, and so they share the remaining bandwidth. Two other flows share link 1–2, which is the bottleneck link on their paths. Since $S_{10}$ shares link 2–3 only with these two flows, whose sending rates are already constrained by link 1–2, it gets the remaining bandwidth of link 2–3. Essentially, we observe that LIPD/H ensures fair bandwidth allocation even in a multi-hop network where each flow interacts with many other flows, traversing links with different round-trip times.

Fig. 14. Performance of LIPD/H: (a) transmission rate and (b) throughput.

VII. CONCLUSION

With the explosive growth of the Internet traffic load and its increasing diversity, network designers have realized that the laissez-faire service model of the current Internet is not sufficient. As a via-media solution, there have been several proposals for smart router technology, such as RED [3] and its variants FRED [5] and WRED [6], random drop switches [4], and CSFQ [9], to name a few. These router technologies aim to notify/detect congestion just as it starts to occur, and to drop/mark packets in proportion to the sending rates of the connections. The deployment of these smarter router technologies will have an impact not only on router behavior, but also on end host behavior. Specifically, congestion control algorithms can now exploit the fair dropping/marking property of these new router designs.

In this paper, we have revisited the LIPD paradigm for congestion control, long discredited because of its inapplicability with tail drop routers. We show that LIPD is in fact fair, efficient, insensitive to random loss, robust, and stable with these new fair drop routers. In fact, our simulations and analysis indicate that LIPD performs better than, or comparable to, LIMD in all these aspects. However, plain LIPD suffers from one severe drawback: it is not effective in quickly alleviating sudden congestion due to newly arriving flows, since it does not decrease its transmission rate as aggressively as LIMD. To address this issue, we propose the refined LIPD/H algorithm, which additionally incorporates the congestion and loss history.
Fundamentally, LIPD/H can predict the cause of packet losses, whether congestion induced or not, and react appropriately. Through simulation and analysis, we have concluded that the combination of LIPD and congestion history information makes LIPD/H work very well as an efficient, fair, and robust congestion control paradigm. Therefore, we believe that LIPD/H is a viable congestion control algorithm in a network wherein all the routers perform fair packet marking/dropping with early congestion feedback.

REFERENCES

[1] D.-M. Chiu and R. Jain, "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks," Journal of Computer Networks and ISDN Systems, 17(1), 1989.
[2] S. Floyd and V. Jacobson, "On Traffic Phase Effects in Packet-Switched Gateways," Internetworking: Research and Experience, 3(3), 1992.


More information

The War Between Mice and Elephants

The War Between Mice and Elephants The War Between Mice and Elephants Liang Guo and Ibrahim Matta Computer Science Department Boston University 9th IEEE International Conference on Network Protocols (ICNP),, Riverside, CA, November 2001.

More information

Reliable Transport II: TCP and Congestion Control

Reliable Transport II: TCP and Congestion Control Reliable Transport II: TCP and Congestion Control Brad Karp UCL Computer Science CS 3035/GZ01 31 st October 2013 Outline Slow Start AIMD Congestion control Throughput, loss, and RTT equation Connection

More information

Chapter III. congestion situation in Highspeed Networks

Chapter III. congestion situation in Highspeed Networks Chapter III Proposed model for improving the congestion situation in Highspeed Networks TCP has been the most used transport protocol for the Internet for over two decades. The scale of the Internet and

More information

An Adaptive Virtual Queue (AVQ) Algorithm for Active Queue Management

An Adaptive Virtual Queue (AVQ) Algorithm for Active Queue Management University of Pennsylvania ScholarlyCommons Departmental Papers (ESE) Department of Electrical & Systems Engineering April 2004 An Adaptive Virtual Queue (AVQ) Algorithm for Active Queue Management Srisankar

More information

Transmission Control Protocol. ITS 413 Internet Technologies and Applications

Transmission Control Protocol. ITS 413 Internet Technologies and Applications Transmission Control Protocol ITS 413 Internet Technologies and Applications Contents Overview of TCP (Review) TCP and Congestion Control The Causes of Congestion Approaches to Congestion Control TCP Congestion

More information

On Network Dimensioning Approach for the Internet

On Network Dimensioning Approach for the Internet On Dimensioning Approach for the Internet Masayuki Murata ed Environment Division Cybermedia Center, (also, Graduate School of Engineering Science, ) e-mail: murata@ics.es.osaka-u.ac.jp http://www-ana.ics.es.osaka-u.ac.jp/

More information

Local and Global Stability of Symmetric Heterogeneously- Delayed Control Systems

Local and Global Stability of Symmetric Heterogeneously- Delayed Control Systems Local and Global Stability of Symmetric Heterogeneously- Delayed Control Systems Yueping Zhang and Dmitri Loguinov Department of Computer Science Texas A&M University College Station, TX 77843 1 Outline

More information

From Static to Dynamic Routing: Efficient Transformations of Store-and-Forward Protocols

From Static to Dynamic Routing: Efficient Transformations of Store-and-Forward Protocols From Static to Dynamic Routing: Efficient Transformations of Store-and-Forward Protocols Christian Scheideler Ý Berthold Vöcking Þ Abstract We investigate how static store-and-forward routing algorithms

More information

Router s Queue Management

Router s Queue Management Router s Queue Management Manages sharing of (i) buffer space (ii) bandwidth Q1: Which packet to drop when queue is full? Q2: Which packet to send next? FIFO + Drop Tail Keep a single queue Answer to Q1:

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction In a packet-switched network, packets are buffered when they cannot be processed or transmitted at the rate they arrive. There are three main reasons that a router, with generic

More information

Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes

Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes Zhili Zhao Dept. of Elec. Engg., 214 Zachry College Station, TX 77843-3128 A. L. Narasimha Reddy

More information

RSA (Rivest Shamir Adleman) public key cryptosystem: Key generation: Pick two large prime Ô Õ ¾ numbers È.

RSA (Rivest Shamir Adleman) public key cryptosystem: Key generation: Pick two large prime Ô Õ ¾ numbers È. RSA (Rivest Shamir Adleman) public key cryptosystem: Key generation: Pick two large prime Ô Õ ¾ numbers È. Let Ò Ô Õ. Pick ¾ ½ ³ Òµ ½ so, that ³ Òµµ ½. Let ½ ÑÓ ³ Òµµ. Public key: Ò µ. Secret key Ò µ.

More information

On TCP friendliness of VOIP traffic

On TCP friendliness of VOIP traffic On TCP friendliness of VOIP traffic By Rashmi Parthasarathy WSU ID # 10975537 A report submitted in partial fulfillment of the requirements of CptS 555 Electrical Engineering and Computer Science Department

More information

Congestion Control. Andreas Pitsillides University of Cyprus. Congestion control problem

Congestion Control. Andreas Pitsillides University of Cyprus. Congestion control problem Congestion Control Andreas Pitsillides 1 Congestion control problem growing demand of computer usage requires: efficient ways of managing network traffic to avoid or limit congestion in cases where increases

More information

Congestion Avoidance

Congestion Avoidance Congestion Avoidance Richard T. B. Ma School of Computing National University of Singapore CS 5229: Advanced Compute Networks References K. K. Ramakrishnan, Raj Jain, A Binary Feedback Scheme for Congestion

More information

! Network bandwidth shared by all users! Given routing, how to allocate bandwidth. " efficiency " fairness " stability. !

! Network bandwidth shared by all users! Given routing, how to allocate bandwidth.  efficiency  fairness  stability. ! Motivation Network Congestion Control EL 933, Class10 Yong Liu 11/22/2005! Network bandwidth shared by all users! Given routing, how to allocate bandwidth efficiency fairness stability! Challenges distributed/selfish/uncooperative

More information

Advanced Computer Networks

Advanced Computer Networks Advanced Computer Networks QoS in IP networks Prof. Andrzej Duda duda@imag.fr Contents QoS principles Traffic shaping leaky bucket token bucket Scheduling FIFO Fair queueing RED IntServ DiffServ http://duda.imag.fr

More information

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 Question 344 Points 444 Points Score 1 10 10 2 10 10 3 20 20 4 20 10 5 20 20 6 20 10 7-20 Total: 100 100 Instructions: 1. Question

More information

Resource allocation in networks. Resource Allocation in Networks. Resource allocation

Resource allocation in networks. Resource Allocation in Networks. Resource allocation Resource allocation in networks Resource Allocation in Networks Very much like a resource allocation problem in operating systems How is it different? Resources and jobs are different Resources are buffers

More information

Fair Bandwidth Sharing Between Unicast and Multicast Flows in Best-Effort Networks

Fair Bandwidth Sharing Between Unicast and Multicast Flows in Best-Effort Networks Fair Bandwidth Sharing Between Unicast and Multicast Flows in Best-Effort Networks Fethi Filali and Walid Dabbous INRIA, 2004 Route des Lucioles, BP-93, 06902 Sophia-Antipolis Cedex, France Abstract In

More information

Promoting the Use of End-to-End Congestion Control in the Internet

Promoting the Use of End-to-End Congestion Control in the Internet Promoting the Use of End-to-End Congestion Control in the Internet Sally Floyd and Kevin Fall IEEE/ACM Transactions on Networking May 1999 ACN: TCP Friendly 1 Outline The problem of Unresponsive Flows

More information

CSE 123A Computer Networks

CSE 123A Computer Networks CSE 123A Computer Networks Winter 2005 Lecture 14 Congestion Control Some images courtesy David Wetherall Animations by Nick McKeown and Guido Appenzeller The bad news and the good news The bad news: new

More information

Congestion Control in Communication Networks

Congestion Control in Communication Networks Congestion Control in Communication Networks Introduction Congestion occurs when number of packets transmitted approaches network capacity Objective of congestion control: keep number of packets below

More information

ECE 610: Homework 4 Problems are taken from Kurose and Ross.

ECE 610: Homework 4 Problems are taken from Kurose and Ross. ECE 610: Homework 4 Problems are taken from Kurose and Ross. Problem 1: Host A and B are communicating over a TCP connection, and Host B has already received from A all bytes up through byte 248. Suppose

More information

ADVANCED COMPUTER NETWORKS

ADVANCED COMPUTER NETWORKS ADVANCED COMPUTER NETWORKS Congestion Control and Avoidance 1 Lecture-6 Instructor : Mazhar Hussain CONGESTION CONTROL When one part of the subnet (e.g. one or more routers in an area) becomes overloaded,

More information

ROBUST TCP: AN IMPROVEMENT ON TCP PROTOCOL

ROBUST TCP: AN IMPROVEMENT ON TCP PROTOCOL ROBUST TCP: AN IMPROVEMENT ON TCP PROTOCOL SEIFEDDINE KADRY 1, ISSA KAMAR 1, ALI KALAKECH 2, MOHAMAD SMAILI 1 1 Lebanese University - Faculty of Science, Lebanon 1 Lebanese University - Faculty of Business,

More information

Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks. Congestion Control in Today s Internet

Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks. Congestion Control in Today s Internet Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks Ion Stoica CMU Scott Shenker Xerox PARC Hui Zhang CMU Congestion Control in Today s Internet Rely

More information

A Simple Mechanism for Improving the Throughput of Reliable Multicast

A Simple Mechanism for Improving the Throughput of Reliable Multicast A Simple Mechanism for Improving the Throughput of Reliable Multicast Sungwon Ha Kang-Won Lee Vaduvur Bharghavan Coordinated Sciences Laboratory University of Illinois at Urbana-Champaign fs-ha, kwlee,

More information

CS4700/CS5700 Fundamentals of Computer Networks

CS4700/CS5700 Fundamentals of Computer Networks CS4700/CS5700 Fundamentals of Computer Networks Lecture 15: Congestion Control Slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica, Hui Zhang Alan Mislove amislove at ccs.neu.edu

More information

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data?

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data? Congestion Control End Hosts CSE 51 Lecture 7, Spring. David Wetherall Today s question How fast should the sender transmit data? Not tooslow Not toofast Just right Should not be faster than the receiver

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

ISSN: Index Terms Wireless networks, non - congestion events, packet reordering, spurious timeouts, reduce retransmissions.

ISSN: Index Terms Wireless networks, non - congestion events, packet reordering, spurious timeouts, reduce retransmissions. ISSN:2320-0790 A New TCP Algorithm to reduce the number of retransmissions in Wireless Networks A Beulah, R Nita Marie Ann Assistant Professsor, SSN College of Engineering, Chennai PG Scholar, SSN College

More information

Equation-Based Congestion Control for Unicast Applications. Outline. Introduction. But don t we need TCP? TFRC Goals

Equation-Based Congestion Control for Unicast Applications. Outline. Introduction. But don t we need TCP? TFRC Goals Equation-Based Congestion Control for Unicast Applications Sally Floyd, Mark Handley AT&T Center for Internet Research (ACIRI) Jitendra Padhye Umass Amherst Jorg Widmer International Computer Science Institute

More information

Performance Analysis of TCP Variants

Performance Analysis of TCP Variants 102 Performance Analysis of TCP Variants Abhishek Sawarkar Northeastern University, MA 02115 Himanshu Saraswat PES MCOE,Pune-411005 Abstract The widely used TCP protocol was developed to provide reliable

More information

Computer Networking. Queue Management and Quality of Service (QOS)

Computer Networking. Queue Management and Quality of Service (QOS) Computer Networking Queue Management and Quality of Service (QOS) Outline Previously:TCP flow control Congestion sources and collapse Congestion control basics - Routers 2 Internet Pipes? How should you

More information

Chapter 24 Congestion Control and Quality of Service 24.1

Chapter 24 Congestion Control and Quality of Service 24.1 Chapter 24 Congestion Control and Quality of Service 24.1 Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display. 24-1 DATA TRAFFIC The main focus of congestion control

More information

TCP. CSU CS557, Spring 2018 Instructor: Lorenzo De Carli (Slides by Christos Papadopoulos, remixed by Lorenzo De Carli)

TCP. CSU CS557, Spring 2018 Instructor: Lorenzo De Carli (Slides by Christos Papadopoulos, remixed by Lorenzo De Carli) TCP CSU CS557, Spring 2018 Instructor: Lorenzo De Carli (Slides by Christos Papadopoulos, remixed by Lorenzo De Carli) 1 Sources Fall and Stevens, TCP/IP Illustrated Vol. 1, 2nd edition Congestion Avoidance

More information

TCP and BBR. Geoff Huston APNIC

TCP and BBR. Geoff Huston APNIC TCP and BBR Geoff Huston APNIC Computer Networking is all about moving data The way in which data movement is controlled is a key characteristic of the network architecture The Internet protocol passed

More information

Performance Consequences of Partial RED Deployment

Performance Consequences of Partial RED Deployment Performance Consequences of Partial RED Deployment Brian Bowers and Nathan C. Burnett CS740 - Advanced Networks University of Wisconsin - Madison ABSTRACT The Internet is slowly adopting routers utilizing

More information

One More Bit Is Enough

One More Bit Is Enough One More Bit Is Enough Yong Xia, RPI Lakshmi Subramanian, UCB Ion Stoica, UCB Shiv Kalyanaraman, RPI SIGCOMM 05, Philadelphia, PA 08 / 23 / 2005 Motivation #1: TCP doesn t work well in high b/w or delay

More information

Oscillations and Buffer Overflows in Video Streaming under Non- Negligible Queuing Delay

Oscillations and Buffer Overflows in Video Streaming under Non- Negligible Queuing Delay Oscillations and Buffer Overflows in Video Streaming under Non- Negligible Queuing Delay Presented by Seong-Ryong Kang Yueping Zhang and Dmitri Loguinov Department of Computer Science Texas A&M University

More information

Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways

Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways Haowei Bai AES Technology Centers of Excellence Honeywell Aerospace 3660 Technology Drive, Minneapolis, MN 5548 E-mail: haowei.bai@honeywell.com

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

In-network Resource Allocation (Scribed by Ambuj Ojha)

In-network Resource Allocation (Scribed by Ambuj Ojha) In-network Resource Allocation (Scribed by Ambuj Ojha) Key Insight Adding intelligence in routers can improve congestion control relative to purely end to end schemes as seen in TCP Tahoe and BBR. We discuss

More information

15-744: Computer Networking TCP

15-744: Computer Networking TCP 15-744: Computer Networking TCP Congestion Control Congestion Control Assigned Reading [Jacobson and Karels] Congestion Avoidance and Control [TFRC] Equation-Based Congestion Control for Unicast Applications

More information

Improving Internet Performance through Traffic Managers

Improving Internet Performance through Traffic Managers Improving Internet Performance through Traffic Managers Ibrahim Matta Computer Science Department Boston University Computer Science A Glimpse of Current Internet b b b b Alice c TCP b (Transmission Control

More information

ADVANCED TOPICS FOR CONGESTION CONTROL

ADVANCED TOPICS FOR CONGESTION CONTROL ADVANCED TOPICS FOR CONGESTION CONTROL Congestion Control The Internet only functions because TCP s congestion control does an effective job of matching traffic demand to available capacity. TCP s Window

More information

RSA (Rivest Shamir Adleman) public key cryptosystem: Key generation: Pick two large prime Ô Õ ¾ numbers È.

RSA (Rivest Shamir Adleman) public key cryptosystem: Key generation: Pick two large prime Ô Õ ¾ numbers È. RSA (Rivest Shamir Adleman) public key cryptosystem: Key generation: Pick two large prime Ô Õ ¾ numbers È. Let Ò Ô Õ. Pick ¾ ½ ³ Òµ ½ so, that ³ Òµµ ½. Let ½ ÑÓ ³ Òµµ. Public key: Ò µ. Secret key Ò µ.

More information

Assignment 10: TCP and Congestion Control Due the week of November 14/15, 2012

Assignment 10: TCP and Congestion Control Due the week of November 14/15, 2012 Assignment 10: TCP and Congestion Control Due the week of November 14/15, 2012 I d like to complete our exploration of TCP by taking a close look at the topic of congestion control in TCP. To prepare for

More information

Implementing stable TCP variants

Implementing stable TCP variants Implementing stable TCP variants IPAM Workshop on Large Scale Communications Networks April 2002 Tom Kelly ctk21@cam.ac.uk Laboratory for Communication Engineering University of Cambridge Implementing

More information

BANDWIDTH MEASUREMENT IN WIRELESS NETWORKS

BANDWIDTH MEASUREMENT IN WIRELESS NETWORKS BANDWIDTH MEASUREMENT IN WIRELESS NETWORKS Andreas Johnsson, Bob Melander, and Mats Björkman The Department of Computer Science and Electronics Mälardalen University Sweden Abstract Keywords: For active,

More information

TCP START-UP BEHAVIOR UNDER THE PROPORTIONAL FAIR SCHEDULING POLICY

TCP START-UP BEHAVIOR UNDER THE PROPORTIONAL FAIR SCHEDULING POLICY TCP START-UP BEHAVIOR UNDER THE PROPORTIONAL FAIR SCHEDULING POLICY J. H. CHOI,J.G.CHOI, AND C. YOO Department of Computer Science and Engineering Korea University Seoul, Korea E-mail: {jhchoi, hxy}@os.korea.ac.kr

More information

XCP: explicit Control Protocol

XCP: explicit Control Protocol XCP: explicit Control Protocol Dina Katabi MIT Lab for Computer Science dk@mit.edu www.ana.lcs.mit.edu/dina Sharing the Internet Infrastructure Is fundamental Much research in Congestion Control, QoS,

More information

Online Facility Location

Online Facility Location Online Facility Location Adam Meyerson Abstract We consider the online variant of facility location, in which demand points arrive one at a time and we must maintain a set of facilities to service these

More information

A Flow Table-Based Design to Approximate Fairness

A Flow Table-Based Design to Approximate Fairness A Flow Table-Based Design to Approximate Fairness Rong Pan Lee Breslau Balaji Prabhakar Scott Shenker Stanford University AT&T Labs-Research Stanford University ICIR rong@stanford.edu breslau@research.att.com

More information

TCP Congestion Control : Computer Networking. Introduction to TCP. Key Things You Should Know Already. Congestion Control RED

TCP Congestion Control : Computer Networking. Introduction to TCP. Key Things You Should Know Already. Congestion Control RED TCP Congestion Control 15-744: Computer Networking L-4 TCP Congestion Control RED Assigned Reading [FJ93] Random Early Detection Gateways for Congestion Avoidance [TFRC] Equation-Based Congestion Control

More information

PERFORMANCE COMPARISON OF THE DIFFERENT STREAMS IN A TCP BOTTLENECK LINK IN THE PRESENCE OF BACKGROUND TRAFFIC IN A DATA CENTER

PERFORMANCE COMPARISON OF THE DIFFERENT STREAMS IN A TCP BOTTLENECK LINK IN THE PRESENCE OF BACKGROUND TRAFFIC IN A DATA CENTER PERFORMANCE COMPARISON OF THE DIFFERENT STREAMS IN A TCP BOTTLENECK LINK IN THE PRESENCE OF BACKGROUND TRAFFIC IN A DATA CENTER Vilma Tomço, 1 Aleksandër Xhuvani 2 Abstract: The purpose of this work is

More information

Effect of Number of Drop Precedences in Assured Forwarding

Effect of Number of Drop Precedences in Assured Forwarding Internet Engineering Task Force Internet Draft Expires: January 2000 Mukul Goyal Arian Durresi Raj Jain Chunlei Liu The Ohio State University July, 999 Effect of Number of Drop Precedences in Assured Forwarding

More information

844 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 3, JUNE /$ IEEE

844 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 3, JUNE /$ IEEE 844 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 3, JUNE 2010 Equilibrium of Heterogeneous Congestion Control: Optimality and Stability Ao Tang, Member, IEEE, Xiaoliang (David) Wei, Member, IEEE,

More information

Probabilistic analysis of algorithms: What s it good for?

Probabilistic analysis of algorithms: What s it good for? Probabilistic analysis of algorithms: What s it good for? Conrado Martínez Univ. Politècnica de Catalunya, Spain February 2008 The goal Given some algorithm taking inputs from some set Á, we would like

More information

On the Effectiveness of CoDel for Active Queue Management

On the Effectiveness of CoDel for Active Queue Management 1 13 Third International Conference on Advanced Computing & Communication Technologies On the Effectiveness of CoDel for Active Queue Management Dipesh M. Raghuvanshi, Annappa B., Mohit P. Tahiliani Department

More information

TCP and BBR. Geoff Huston APNIC

TCP and BBR. Geoff Huston APNIC TCP and BBR Geoff Huston APNIC The IP Architecture At its heart IP is a datagram network architecture Individual IP packets may be lost, re-ordered, re-timed and even fragmented The IP Architecture At

More information

Performance Evaluation of Controlling High Bandwidth Flows by RED-PD

Performance Evaluation of Controlling High Bandwidth Flows by RED-PD Performance Evaluation of Controlling High Bandwidth Flows by RED-PD Osama Ahmed Bashir Md Asri Ngadi Universiti Teknology Malaysia (UTM) Yahia Abdalla Mohamed Mohamed Awad ABSTRACT This paper proposed

More information

TCP and BBR. Geoff Huston APNIC. #apricot

TCP and BBR. Geoff Huston APNIC. #apricot TCP and BBR Geoff Huston APNIC The IP Architecture At its heart IP is a datagram network architecture Individual IP packets may be lost, re-ordered, re-timed and even fragmented The IP Architecture At

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 1 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

CS3600 SYSTEMS AND NETWORKS

CS3600 SYSTEMS AND NETWORKS CS3600 SYSTEMS AND NETWORKS NORTHEASTERN UNIVERSITY Lecture 24: Congestion Control Prof. Alan Mislove (amislove@ccs.neu.edu) Slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica,

More information

Congestion Control In the Network

Congestion Control In the Network Congestion Control In the Network Brighten Godfrey cs598pbg September 9 2010 Slides courtesy Ion Stoica with adaptation by Brighten Today Fair queueing XCP Announcements Problem: no isolation between flows

More information

TCP so far Computer Networking Outline. How Was TCP Able to Evolve

TCP so far Computer Networking Outline. How Was TCP Able to Evolve TCP so far 15-441 15-441 Computer Networking 15-641 Lecture 14: TCP Performance & Future Peter Steenkiste Fall 2016 www.cs.cmu.edu/~prs/15-441-f16 Reliable byte stream protocol Connection establishments

More information

Edge versus Host Pacing of TCP Traffic in Small Buffer Networks

Edge versus Host Pacing of TCP Traffic in Small Buffer Networks Edge versus Host Pacing of TCP Traffic in Small Buffer Networks Hassan Habibi Gharakheili 1, Arun Vishwanath 2, Vijay Sivaraman 1 1 University of New South Wales (UNSW), Australia 2 University of Melbourne,

More information

Appendix B. Standards-Track TCP Evaluation

Appendix B. Standards-Track TCP Evaluation 215 Appendix B Standards-Track TCP Evaluation In this appendix, I present the results of a study of standards-track TCP error recovery and queue management mechanisms. I consider standards-track TCP error

More information

Markov Model Based Congestion Control for TCP

Markov Model Based Congestion Control for TCP Markov Model Based Congestion Control for TCP Shan Suthaharan University of North Carolina at Greensboro, Greensboro, NC 27402, USA ssuthaharan@uncg.edu Abstract The Random Early Detection (RED) scheme

More information

Congestion Control for High Bandwidth-delay Product Networks. Dina Katabi, Mark Handley, Charlie Rohrs

Congestion Control for High Bandwidth-delay Product Networks. Dina Katabi, Mark Handley, Charlie Rohrs Congestion Control for High Bandwidth-delay Product Networks Dina Katabi, Mark Handley, Charlie Rohrs Outline Introduction What s wrong with TCP? Idea of Efficiency vs. Fairness XCP, what is it? Is it

More information

Fundamental Trade-offs in Aggregate Packet Scheduling

Fundamental Trade-offs in Aggregate Packet Scheduling Fundamental Trade-offs in Aggregate Packet Scheduling Zhi-Li Zhang Ý, Zhenhai Duan Ý and Yiwei Thomas Hou Þ Ý Dept. of Computer Science & Engineering Þ Fujitsu Labs of America University of Minnesota 595

More information

CS644 Advanced Networks

CS644 Advanced Networks What we know so far CS644 Advanced Networks Lecture 6 Beyond TCP Congestion Control Andreas Terzis TCP Congestion control based on AIMD window adjustment [Jac88] Saved Internet from congestion collapse

More information