A Simple Mechanism for Improving the Throughput of Reliable Multicast

Sungwon Ha, Kang-Won Lee, Vaduvur Bharghavan
Coordinated Science Laboratory, University of Illinois at Urbana-Champaign

Abstract

Single-rate reliable multicast protocols are known to have scalability problems in the presence of a large number of receivers, because the sender cannot distinguish independent packet losses at different receivers from multiple packet losses at the same receiver. In the presence of a large number of receivers, the sender may perceive a large aggregate packet loss even if no individual receiver is highly congested. Consequently, the multicast session progresses at a much slower rate than its desired sending rate. In order to solve this problem, we argue for decoupling the mechanisms used to achieve congestion control and reliability in reliable multicast protocols, and present a very simple loss-notification-based multicast congestion control mechanism that can be used to augment both ACK-based and NAK-based reliable multicast protocols and solve the independent packet loss problem. We illustrate the efficacy of our approach via simulations using the ns2 simulator.

I. INTRODUCTION

Recent years have witnessed a tremendous increase in the use of the Internet for a large variety of applications including commerce, web access, software distribution, multimedia, and of course, data communication. Many of these applications involve multiple communicating entities, and require reliable data transmission from a sender to multiple receivers. While it is of course possible to establish multiple unicast TCP connections and accomplish reliable data transmission between each sender-receiver pair, this approach has two distinct disadvantages: (a) it involves a high overhead in terms of duplicating packet transmissions for connections which may potentially be able to share common links, and (b) it requires application-level synchronization between a sender and its receiver set. For both these reasons, reliable multicasting is gaining popularity as a highly desirable service of the future Internet infrastructure. Recently, a number of reliable multicast protocols have been proposed [1, 2, 3, 4, 5, 6]. In this paper, we consider the popular class of reliable multicast protocols that support loose synchronization among receivers, i.e. all the receivers in the same multicast group perceive approximately the same progress of the session. Reliable multicast protocols in this class are single-rate and are constrained to progress at a rate that is bounded by the rate of the slowest receiver in the multicast group. Unfortunately, this class of protocols is highly susceptible to degradation in throughput as the number of receivers in a multicast group increases. This is because of the well known independent loss phenomenon, also referred to as the drop-to-zero problem or the loss path multiplicity problem in related literature [7, 8], wherein the total amount of loss feedback received by the sender increases with the number of receivers, even when no single receiver experiences significant congestion or packet loss. From the sender's perspective, it is difficult to distinguish independent losses at multiple receivers from bulk losses experienced by a single bottleneck receiver.
However, as described before, the progress of a single-rate reliable multicast protocol that provides synchronization among receivers is constrained by the slowest receiver, and in the presence of independent losses at multiple receivers, the sender mistakenly reacts to the cumulative loss (i.e. assuming worst-case behavior) rather than distinguishing independent losses. This can cause a severe degradation of throughput, as we have observed in our experiments and as has been reported in related work [6, 8]. In this paper, we seek to improve the throughput of reliable multicast protocols in particular, and single-rate multicast protocols in general, by proposing a very simple mechanism to augment the existing multicast congestion control algorithms in such protocols. Basically, our mechanism helps to overcome the independent loss problem by triggering the multicast congestion control algorithms only for significant losses (i.e. losses at the bottleneck receiver(s)). We show that our mechanism is applicable to both ACK-based [1, 6] and NAK-based [2, 4, 5] protocols.

Our approach is to decouple the congestion feedback and the reliability feedback that are generated by the receivers. In reliable multicast protocols, receivers either generate positive (cumulative) acknowledgements in the ACK-based paradigm to indicate the successful reception of packets, or negative acknowledgements in the NAK-based paradigm to indicate the loss of packets. Typically, cumulative ACKs are aggregated by specialized multicast routers in the network [6] and duplicate NAKs are suppressed by receivers or specialized filters in intermediate routers [5, 9] in order to prevent feedback implosion at the sender. When the sender receives ACK/NAK feedback, it identifies which packets have been lost and retransmits all lost packets (packets that may have been lost by any receiver). In other words, the sender must respond to every packet loss (for simplicity, we do not consider local recovery in this paper). On the other hand, in terms of adjusting the sending rate for multicast congestion control, the sender only needs to respond to loss notifications from the slowest receiver(s). This naturally calls for a decoupling of the reliability feedback (wherein the sender retransmits all packets lost at any receiver) from the congestion feedback (wherein the sender throttles its rate according to the receiver that lost the maximum number of packets). Unfortunately, due to ACK aggregation and NAK suppression, the sender cannot determine the latter merely by monitoring the source of the ACK/NAK feedback, even if it chose to do so. Thus, we need additional mechanisms to augment the existing ACK/NAK feedback in order to enable the sender to determine when to trigger congestion control. A fortuitous consequence of decoupling congestion feedback from reliability feedback is that we can also support congestion control for unreliable or semi-reliable single-rate multicast protocols using the same mechanism. In this paper, we argue for very simple loss notification feedback from the receivers that can be used by the sender to trigger multicast congestion control selectively (i.e. in response to losses at bottleneck receivers) and overcome the independent loss phenomenon. Loss notifications can be piggybacked on the ACK/NAK feedback, and have aggregation/suppression mechanisms that are similar to, but distinct from, those of the reliability feedback.
We show that even for fairly small multicast groups, the effect of augmenting existing ACK-based and NAK-based reliable multicast protocols with our loss notification mechanism is very significant in terms of throughput. As the size of the multicast group grows, the percentage improvement in throughput achieved using our mechanism progressively increases. The rest of the paper is organized as follows: Section II presents the motivation for loss notification. Section III describes the simple idea behind our mechanism and the operation of the loss notification mechanism in the context of both ACK-based and NAK-based reliable multicast. Section IV presents initial simulation results illustrating the performance improvements achieved using loss notification. Section V describes related work and Section VI summarizes the paper.

II. INDEPENDENT LOSS PROBLEM

Figure 1: Figure (a) illustrates the independent loss problem. Figure (b) shows how it can be alleviated by breaking up the multicast tree, at the expense of destroying the end-to-end semantics of the multicast protocol. Figure (c) shows how to alleviate the problem by using local servers in addition to, rather than as a replacement for, the end-to-end semantics. The motivation for considering these approaches is to solve the independent loss problem merely by intelligent tree management, without requiring any special congestion control mechanisms. Clearly, the approaches described above are only of limited use.

The problem of independent packet losses in multicast communication has also been referred to in the literature as the drop-to-zero problem or the loss path multiplicity problem [7, 8]. The problem can be simply stated as follows: even if the packet loss probability on an individual path from the sender to a receiver is small, when there are multiple such paths in the multicast tree, the perceived (aggregate) loss probability at the sender can become very large as the number of paths increases. Consider a simple example of a multicast tree with n independent paths where each path has a packet loss probability p, where 0 < p ≤ 1 (see Figure 1.a). In this case, the aggregate packet loss probability p' that the sender will perceive is p' = 1 - (1 - p)^n, and p' → 1 as n → ∞. Consider that the sender uses a linear increase/multiplicative decrease (LIMD) congestion control algorithm [10]. For such algorithms, it is well known that the average sending rate is inversely proportional to the square root of the mean loss probability, i.e. r ∝ 1/sqrt(p). Now, if the mean loss probability on a path is p and there are n independent paths, the ideal multicast sending rate should be ∝ 1/sqrt(p), while in fact the actual multicast sending rate is ∝ 1/sqrt(1 - (1 - p)^n). As p becomes small, 1 - (1 - p)^n → np, and the actual multicast sending rate is ∝ 1/sqrt(np). In other words, the multicast sending rate degrades by a factor of the square root of the number of independent loss paths. It has been reported that most of the losses in the MBone, for example, occur near the receivers [11]; in this case, for a multicast group of 100 receivers, the multicast sending rate may degrade by up to an order of magnitude from the ideal multicast sending rate. The independent path loss problem is thus a very serious issue for scalable single-rate multicast protocols.

An immediate first-cut solution to this problem that requires no new mechanisms is to break up the multicast tree into a forest of trees, such that each leaf of the top-level multicast tree rooted at the sender is itself the root of a subtree (i.e. it becomes a local server) that spans a subset of the receivers (see Figure 1.b). While this can reduce the number of independent paths on each tree, splitting the tree into multiple subtrees violates the end-to-end reliability, receiver synchronization, and single-rate semantics of the multicast protocol.
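To make the degradation factor concrete, the following small Python sketch (not taken from the paper; the 1% per-path loss probability is an assumed value) computes the sender-perceived aggregate loss 1 - (1 - p)^n and the corresponding LIMD rate penalty sqrt(p'/p):

    from math import sqrt

    def aggregate_loss(p, n):
        """Probability that at least one of n independent paths drops a given packet."""
        return 1.0 - (1.0 - p) ** n

    p = 0.01                                 # assumed per-path loss probability
    for n in (1, 10, 100):
        p_agg = aggregate_loss(p, n)
        degradation = sqrt(p_agg / p)        # ideal rate / actual rate, since rate is prop. to 1/sqrt(loss)
        print(f"n={n:3d}  perceived loss={p_agg:.3f}  degradation factor={degradation:.1f}")

For n = 100 and p = 0.01 this gives roughly an eight-fold degradation, consistent with the order-of-magnitude estimate above.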
If we wish to preserve the original semantics of the multicast protocol and still augment it with local retransmission, then it is possible for a local server to send data packets in response to a loss notification from a receiver but still forward the loss notification upstream (see Figure 1.c). Augmenting end-to-end retransmissions with local retransmissions can help to reduce, but not eliminate, the problem of independent losses because in the worst case, the sender will still react to the aggregate of the loss notifications. In summary, we want the sender to react to all loss notifications for the purposes of providing end-to-end reliability, but only to some of the loss notifications for the purposes of multicast congestion control. This clearly calls for at least some enhancement to the basic multicast congestion control mechanisms. There are several possible solutions to this problem, and we briefly review some of the contemporary approaches in Section V after describing our approach in Section III.

III. LOSS NOTIFICATION MECHANISM

In this section, we describe the implementation of the loss notification mechanism in both ACK-based and NAK-based frameworks. It is important to note that we are not trying to develop a new multicast congestion control algorithm per se in this paper; rather, we are trying to design a mechanism that enables the sender to determine for which losses it must invoke the existing multicast congestion control algorithm. While the loss notification mechanism does not really care what congestion control algorithm is being used, it has been argued that the popular linear increase/multiplicative decrease (LIMD) unicast congestion control paradigm is also suitable for the multicast environment [13, 14, 15]. Thus we model the congestion control at the sender as LIMD that operates as follows: the sender periodically (once every epoch) invokes the congestion control algorithm. If the loss notification mechanism tells the sender that there has been a significant loss in the last epoch, the sender decreases the sending rate by a multiplicative constant (0.5); otherwise it increases the sending rate by an additive constant (1).
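For concreteness, the sender-side model above can be sketched as follows; this is only an illustration of the assumed LIMD controller (rate expressed in packets per epoch, constants 0.5 and 1 as above), not the implementation used in Section IV:

    class LimdSender:
        """Minimal LIMD sender model: invoked once per epoch."""
        def __init__(self, initial_rate=10.0, decrease=0.5, increase=1.0, min_rate=1.0):
            self.rate = initial_rate          # sending rate, in packets per epoch (assumed unit)
            self.decrease = decrease          # multiplicative decrease constant (0.5)
            self.increase = increase          # additive increase constant (1)
            self.min_rate = min_rate

        def on_epoch_end(self, significant_loss):
            if significant_loss:
                self.rate = max(self.min_rate, self.rate * self.decrease)
            else:
                self.rate += self.increase
            return self.rate

The interesting question is therefore not the controller itself but how the significant-loss indication is derived, which is exactly what the loss notification mechanism provides.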
A. Solving the Independent Loss Problem Using Loss Notification

The approach we explore in this paper is very simple. We know that the sending rate of a multicast connection can be no faster than the sustainable rate of the slowest receiver. In a fluid model, the instantaneous max-min fair share for a single-rate multicast session at any time is exactly equal to the instantaneous max-min fair share of the connection from the sender to the slowest receiver at that time. In the packet model, due to buffering at intermediate routers, it is acceptable and even desirable for the rate of the multicast session to be higher than the instantaneous minimum of the fair shares over all receivers, so long as the rate is no faster than the sustainable rate of the slowest receiver over a short time duration (the precise analytical characterization of the relationship between the ideal multicast session rate and the max-min fair shares of the receivers is dictated by several factors such as buffering in the network, round-trip times, etc., and is part of ongoing work). This points to the fact that the sender must react to the maximum of the packet losses observed among all its receivers in the recent past. A first-cut solution is for each receiver to send the total number of packet losses observed since the start of the connection as its loss notification to the sender. The sender can then react only when the maximum feedback that it sees increases. Of course, we know that this simplistic approach will not work, for two reasons: (a) as the number of receivers increases, the sender will be inundated with loss feedback, and (b) the sender must only take into account losses in the recent past and not from the start of the connection. The first point calls for aggregation/suppression of loss notification feedback and is considered in this subsection. The second point is considered in the next subsection.

1) Loss Notification in the ACK-based Framework

In the basic ACK-with-aggregation approach, multicast routers in the tree [6] or designated receivers [1] are responsible for aggregating cumulative ACKs as they progress upstream from the receivers to the sender. Aggregation involves computing the minimum of the cumulative ACKs from all the children, and forwarding this value to the upstream multicast router. Similar to ACKs, loss notifications can be aggregated by taking the maximum of the loss notifications from all the children and propagating it upstream. The loss notification from a receiver contains the total number of packets lost from the start of the connection to the current time, and is generated periodically (at the start of each epoch, where the epoch marks the frequency of sending loss notifications; epochs across receivers need not be synchronized). Loss notifications can be piggybacked on ACKs, and are aggregated as they propagate upstream. The loss notification received by the sender is thus the maximum of the loss notifications over all receivers in an epoch. At the sender, if the loss notification mechanism detects an increase in the loss feedback over the previous feedback, it triggers a congestion response according to the congestion control algorithm (e.g. multiplicative decrease); otherwise it triggers a no-congestion response (e.g. additive increase).
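A minimal sketch of this per-node combining rule follows, assuming each child reports a (cumulative ACK, loss notification) pair; the representation is illustrative and not IRMA's actual packet format:

    def aggregate(children_reports):
        """Combine the (cumulative_ack, loss_notification) pairs reported by the children."""
        acks = [ack for ack, _ in children_reports]
        lns = [ln for _, ln in children_reports]
        # Reliability feedback: the slowest child bounds what can be acknowledged upstream.
        # Congestion feedback: only the worst (maximum) loss matters upstream.
        return min(acks), max(lns)

    # Example: children have received up to packets 40, 42, 38 and report 2, 5, 3
    # lost packets respectively; the node forwards (38, 5) to its parent.
    print(aggregate([(40, 2), (42, 5), (38, 3)]))

The min/max asymmetry is exactly the decoupling of reliability feedback from congestion feedback argued for in Section I.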

2) Loss Notification in the NAK-based Framework

In the basic NAK-with-suppression approach, when a receiver detects the loss of a packet, it waits for a random time period (upper-bounded by a timeout value) to listen for NAKs corresponding to the lost packet initiated by some other receiver, and then multicasts a NAK for the packet to the multicast group (including the sender) if it does not hear a NAK in the meantime. Any receiver that hears the NAK and has lost the same packet suppresses its own NAK. When the sender receives a NAK, it knows that at least one receiver did not receive the packet, and retransmits the packet. Similar to NAK suppression, we can achieve loss notification suppression. Once every epoch, the receiver with the largest loss notification from the previous epoch multicasts its updated loss notification to the multicast group (including the sender). Any receiver whose loss notification is not greater than the advertised value suppresses its own loss notification. A receiver whose loss notification is greater than the advertised value backs off for a random time (upper-bounded by a timeout) and then multicasts its own loss notification if all the advertised notifications it has heard in the meantime are smaller than its own value. The loss notification mechanism at the sender maintains the largest loss notification heard in each epoch, and at the end of an epoch (epochs need not be synchronized across hosts) triggers a congestion response according to the congestion control algorithm (e.g. multiplicative decrease) if the maximum loss in the current epoch is larger than in the previous epoch; otherwise it triggers a no-congestion response (e.g. additive increase).

Thus we see that achieving aggregation or suppression of loss notifications, similar to ACK aggregation and NAK suppression, is a straightforward task, and it prevents the sender from being inundated with loss notifications as the number of receivers increases. At the same time, it enables the sender to react to the maximum loss among all receivers, rather than the aggregate loss summed over all receivers. Furthermore, it can work with different congestion control algorithms (both rate-based and window-based), and enables a significant improvement in the throughput of the multicast session as the number of receivers in the multicast group increases.
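The suppression timing can be sketched with a deliberately simplified single-epoch model that assumes zero propagation delay, so every multicast notification is heard instantly by all receivers; it is not SRM's actual timer algorithm:

    import random

    def suppression_round(loss_notifications, max_backoff=1.0, rng=random.Random(0)):
        """One epoch of loss-notification suppression with random backoff."""
        # Each receiver schedules its announcement after a random backoff.
        schedule = sorted((rng.uniform(0, max_backoff), ln) for ln in loss_notifications)
        sent, best_heard = [], -1
        for _, ln in schedule:
            if ln > best_heard:       # nothing as large heard yet, so multicast it
                sent.append(ln)
                best_heard = ln       # everyone, including the sender, hears it at once
        return sent, best_heard       # the sender reacts to best_heard only

    sent, seen_by_sender = suppression_round([3, 7, 7, 2, 5])
    print(sent, seen_by_sender)       # only a few notifications are sent; the sender sees 7

Even in this idealized form, only a few notifications are multicast and the largest value still reaches the sender, which is all the congestion control side needs.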
B. Distinguishing Serial Losses from Parallel Losses

The key remaining issue is to adjust the loss notification mechanism described above to account only for loss notifications over the recent past, as opposed to loss notifications from the start of the connection. Consider a simple case with one sender and two receivers in a multicast group, as shown in Figure 2. Each receiver experiences 4 packet losses. For this simple scenario, consider the two loss feedback patterns in Figure 2.a and Figure 2.b (in this example, the feedback indicates individual losses rather than cumulative loss notifications).

Figure 2: This figure illustrates parallel and serial loss feedback from two receivers to a sender in a multicast group. In (a), the upper axis, losses from both receivers occur in parallel in epochs 1, 3, 4 and 7; consequently, the sender receives 4 loss indications. In (b), the lower axis, losses from the receivers occur in series: receiver 1 observes losses in epochs 1-4 and receiver 2 observes losses in epochs 5-8, so 8 loss indications are sent to the sender.

In each case, when and how much should the loss feedback to the sender be? Intuitively, the losses in Figure 2.a occur in parallel, hence we expect the loss feedback to the sender to be 4, as shown in the figure. On the other hand, the losses in Figure 2.b occur in series, hence we expect the loss feedback to be 8, as shown in the figure. From the illustration above, it seems intuitively clear that the sender should get a loss feedback that is the maximum among all receivers for parallel losses, and the aggregate over all receivers for serial losses. The problem is that it is very difficult to characterize a given loss pattern as being parallel or serial. Moreover, the boundary between parallel losses and serial losses is somewhat arbitrary and depends on several factors such as the buffering in the network, the round-trip times to the receivers, etc., but no single host has global knowledge of these factors. Epoch boundaries can be used as a first-cut classifier, i.e. all losses in the same epoch are parallel and losses across epochs are serial. The problems with this approach are two-fold: (a) epochs need not be synchronized across hosts, and (b) since epochs are primarily used for controlling the frequency of sending feedback (and reacting to feedback), it may be inappropriate to use the same periodicity to determine parallel/serial losses. Nevertheless, we believe that epochs can be used in classification, as described below.

In order to take into account both serial and parallel losses, we can use a running average of the number of packet losses in an epoch. Let l_i^j be the number of losses observed by receiver i in epoch j, and let l_i be the running average of the loss notification. Then, at the end of each epoch j, each receiver updates the running average as follows:

    l_i ← α · l_i + β · l_i^j

The sender maintains the maximum loss notification L_j corresponding to epoch j. At the end of epoch j, the sender performs the following computation:

    L_j ← max_i { l_i }
    if (L_j > L_{j-1}) detect congestion; else detect no congestion

Of the two steps described above, the first one is achieved through the aggregation or suppression of loss notifications, while the second one is explicitly computed by the sender.
In this step, the sender also scales back the loss notification information collected over the last epoch by the same factor α, so that if there are no losses in the current epoch, the feedback in the current epoch matches the scaled-back value from the last epoch; thus only significant packet losses are detected. The values of the constants α and β play a crucial role in determining which losses are classified as parallel and which losses are classified as serial. In particular, α = 1 and β = 1 corresponds to counting losses from the start of the connection, while α = 0 and β = 1 corresponds to resetting the loss value every epoch. Setting appropriate values for α and β based on the session dynamics is a very interesting problem and is ongoing work.
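Putting the receiver-side and sender-side computations together gives the following compact sketch; it reflects one reading of the update rule above, namely that the two constants are α and β and that the sender compares against the α-scaled previous maximum, and the class and variable names are illustrative:

    class ReceiverLossState:
        def __init__(self, alpha=1.0, beta=1.0):
            self.alpha, self.beta, self.l = alpha, beta, 0.0

        def end_of_epoch(self, losses_this_epoch):
            # l_i <- alpha * l_i + beta * l_i^j
            self.l = self.alpha * self.l + self.beta * losses_this_epoch
            return self.l                      # value carried by the loss notification

    class SenderLossFilter:
        def __init__(self, alpha=1.0):
            self.alpha, self.prev_max = alpha, 0.0

        def end_of_epoch(self, notifications):
            # L_j <- max_i{l_i}; congestion iff it exceeds the scaled-back L_{j-1}
            current_max = max(notifications)
            congested = current_max > self.alpha * self.prev_max
            self.prev_max = current_max
            return congested

With α = 1 and β = 1 (the setting used in Section IV), this reduces to comparing cumulative loss counts, i.e. every loss is treated as parallel.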

In a related work [11], Yajnik et al. observed that the packet loss pattern in the MBone (the multicast backbone) has very small temporal and spatial correlation and that most packet loss bursts are short (one or two packets). In our terminology, most of the losses in such an environment would be classified as parallel. In the extreme case when all losses are parallel, the running average approach becomes equivalent to the simplistic approach of just maintaining a cumulative loss notification at each receiver (i.e. α = 1, β = 1). A key point to note is that the mechanisms described in this section are independent of the specific reliability scheme and congestion control algorithm. We believe that they can be used to augment any reliable multicast protocol, and even unreliable or semi-reliable multicast protocols. In effect, we believe that loss notification is a simple yet effective mechanism that can be used to augment any single-rate multicast protocol to solve the problem of independent packet losses across multiple receivers. In the next section, we evaluate the performance enhancements obtained using our mechanism.

IV. PERFORMANCE EVALUATION

We now evaluate the performance of the proposed loss notification mechanism using the ns2 simulator [12]. We have incorporated the loss notification mechanism into the simulation code of IRMA [6] (for the ACK-based scheme) and SRM [5] (for the NAK-based scheme). As background, in IRMA the sender adopts the flow control and congestion control of TCP, which employs a modified LIMD congestion control: the sender increases the congestion window by one packet when there was no packet loss in the last congestion window, but decreases the congestion window by half when it detects packet losses in the last congestion window. In SRM, keeping traffic under the connection capacity is not a transport issue: the designers of SRM assume that the connection capacity is determined by a separate session management protocol or that congestion control is performed by the application. Thus we have implemented an adaptive ns application, which periodically probes a state variable of the SRM agent and determines the sending rate of the next epoch according to that variable. When an SRM agent observes request packets (or NAKs), it sets the state variable, which is read by the application at the end of each epoch. After reading the variable, the application clears it and determines the new sending rate using LIMD congestion control. For simplicity, we used the basic SRM without the adaptive adjustment of random timers or the local recovery mechanism in the simulation. We refer to this integrated congestion control mechanism (of SRM and the adaptive application) simply as SRM in this section. These basic congestion control schemes of IRMA and SRM were extended with the loss notification mechanism proposed in this paper; we call IRMA and SRM with these extensions IRMA+ and SRM+, respectively.

One of the interesting problems with the loss notification mechanism is how to effectively distinguish serial losses from parallel losses. A comprehensive analysis of this problem is ongoing work. In this paper, we present results with α = 1 and β = 1, i.e. all losses in the network are assumed to be parallel. Of course, this simplifying assumption may not be sufficient to handle all types of loss patterns and transmission scenarios. However, we find that even this simple scheme is suitable for the various scenarios simulated in this section and results in a performance improvement.
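The adaptive application can be pictured roughly as follows; the loss_flag attribute, the rate unit, and the constants are illustrative stand-ins for the ns2 implementation rather than its actual interface:

    class FakeSrmAgent:
        """Stand-in for the ns2 SRM agent's state variable."""
        def __init__(self):
            self.loss_flag = False            # set by the agent when it sees request (NAK) packets

    class AdaptiveApp:
        def __init__(self, srm_agent, rate=32.0):      # rate in packets per epoch (assumed)
            self.srm = srm_agent
            self.rate = rate

        def end_of_epoch(self):
            saw_nak = self.srm.loss_flag
            self.srm.loss_flag = False                  # clear after reading
            if saw_nak:
                self.rate = max(1.0, self.rate * 0.5)   # multiplicative decrease
            else:
                self.rate += 1.0                        # additive increase
            return self.rate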
In each test, we sent 1 KB packets for 100 seconds and measured the effective throughput at the receivers. In IRMA, the TCP Reno congestion control mechanism was employed at the sender. The parameters for SRM (e.g. C1, C2, D1, D2) were adopted from [5]. We model network congestion as random packet loss on a link. Local recovery mechanisms were disabled in both protocols to assess the net effect of the loss notification mechanism. In addition to IRMA, IRMA+, SRM, and SRM+, we also present the performance of a TCP unicast connection from the sender to the farthest receiver (which is the most constrained receiver) as a reference; this is the performance upper bound of single-rate reliable multicast [6]. We present simulation results for four network topologies with progressively increasing complexity: a simple star, a two-level binary tree, and 7- and 15-receiver asymmetric trees (see Figure 3). In all cases the link bandwidth was set to 3 Mbps and the one-way link delay to 5 msec unless otherwise noted.

Figure 3: Network topologies for simulations 1-4. The circles denote end hosts and the squares denote multicast routers; the gray nodes are the senders and the white nodes are the receivers. All links have 5 msec delay except in topology 4, where the links have 10 msec delay.

Test 1. Simple binary tree

In this test, we compare the performance of TCP, IRMA, IRMA+, SRM, and SRM+ in the simplest configuration, shown as topology 1 in Figure 3. We measured the throughput of each protocol for various (independent) packet loss probabilities on the level 1 links. Table 1 summarizes the results.

Table 1: Effective throughput of TCP, IRMA, IRMA+, SRM, and SRM+ for different loss rates on the level 1 links.

We observe two major results. First, when there is no packet loss, all the protocols perform comparably. Second, when the packet loss probability of the links is non-zero, the loss notification mechanism improves the throughput (by 11% in IRMA and 13% in SRM on average). Note that our goal in this paper is not to compare the performance of IRMA and SRM; rather, we are interested in the performance improvement resulting from the loss notification mechanism in each case. The poor relative performance of SRM in this test is mainly due to the conservative parameter selection and the disabled adaptive timer management; we expect SRM performance to improve with finer parameter tuning and adaptive timer adjustment [5]. The performance gap between TCP and IRMA+ arises because IRMA+ cannot resolve all serial/parallel losses correctly; in other words, the simple mechanism with α = 1, β = 1 occasionally reacts incorrectly when losses occur in series.

Test 2. Two-level binary tree

In this test, we measured the performance of each protocol in the network configuration shown as topology 2 in Figure 3, with various independent link loss rates. We tested 6 different cases: (a) no loss on the links, (b) only the level 2 links have a 0.3% loss rate, (c) the level 1 links have a loss rate of 0.1% and the level 2 links 0.2%, (d) both level 1 and level 2 links have a loss rate of 0.15%, (e) the level 1 links have a loss rate of 0.2% and the level 2 links 0.1%, and (f) only the level 1 links have a 0.3% loss rate. In other words, we tried to measure the effect of the location of packet losses on the efficacy of the proposed scheme. Table 2 summarizes the results.

Table 2: Effective throughput of TCP, IRMA, IRMA+, SRM, and SRM+ for cases a (0:0), b (0:0.3), c (0.1:0.2), d (0.15:0.15), e (0.2:0.1), and f (0.3:0), where (x:y) denotes the loss rates of the level 1 and level 2 links in percent.

As expected, the behavior of TCP is mostly independent of the location of packet loss. For IRMA and SRM, we expect the performance to degrade as the location of packet losses moves away from the sender (i.e. from the level 1 to the level 2 links), since this increases the effective loss rate by making most packet losses independent and serial.
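This intuition can be checked with a small Monte Carlo sketch (not part of the paper's simulations): with the same per-link drop rate, a drop on a level 1 link is shared by both receivers below it, whereas drops on level 2 links are independent per receiver, so the sender perceives a higher aggregate loss rate in the latter case.

    import random

    def perceived_loss(p_level1, p_level2, n_packets=100000, rng=random.Random(1)):
        """Fraction of packets missed by at least one of the four receivers in topology 2."""
        lost = 0
        for _ in range(n_packets):
            any_loss = False
            for branch in range(2):                  # two level 1 links
                drop1 = rng.random() < p_level1      # shared by both receivers below this branch
                for leaf in range(2):                # two level 2 links per branch
                    drop2 = rng.random() < p_level2  # independent for each receiver
                    if drop1 or drop2:
                        any_loss = True
            if any_loss:
                lost += 1
        return lost / n_packets

    print("loss on level 1 links :", perceived_loss(0.003, 0.0))   # roughly 1 - (1 - 0.003)^2
    print("loss on level 2 links :", perceived_loss(0.0, 0.003))   # roughly 1 - (1 - 0.003)^4

For a 0.3% per-link rate this gives roughly 0.6% versus 1.2% perceived loss, which is why we expect lower throughput when the losses are on the level 2 links.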

We observe this effect clearly in IRMA, whose throughput varies from 865 Kbps to 1.9 Mbps, but in the case of SRM the effect is less prominent. This may be due to the NAK suppression mechanism of SRM, which ideally reduces the number of NAKs generated to a few packets per loss. Accordingly, in the case of IRMA, we observe a large variance in the performance improvement brought by IRMA+: from 47% (case f) to 150% (case b). On the other hand, the variance of the improvement due to SRM+ is not very large: from 44% (case b) to 73% (case f). In any case, we observe that IRMA+ and SRM+ improve the basic congestion control of IRMA and SRM. We also observe that with the loss notification mechanism, the effect of the congestion location is smaller than without it.

Test 3. Asymmetric topology

We now consider an asymmetric tree configuration with 7 receivers, shown as topology 3 in Figure 3. In this configuration, we performed tests varying the independent packet loss probability on each link. There are six cases: (a) no packet loss on the links, (b) the loss probabilities of the level 1, level 2, and level 3 links are 0.1%, 0.3%, and 0.5%, respectively, (c) the loss probability of the level 2 links is 0.4% and that of the level 3 links is 0.5%, (d) the loss probability of the level 1 links is 0.1% and the other links are error-free, (e) the loss probability of the level 2 links is 0.3% and the other links are error-free, and (f) the loss probability of the level 3 links is 0.5% and the other links are error-free. Table 3 summarizes the results.

Table 3: Effective throughput of TCP, IRMA, IRMA+, SRM, and SRM+ for cases a (0:0:0), b (0.1:0.3:0.5), c (0:0.4:0.5), d (0.1:0:0), e (0:0.3:0), and f (0:0:0.5), where (x:y:z) denotes the loss rates of the level 1, level 2, and level 3 links in percent.

In this test we are interested in how the loss probability and its location affect the performance of the loss notification mechanism in an asymmetric setting. In terms of location, we essentially observed the same trend as in the previous test: in the case of IRMA, the improvement due to IRMA+ becomes more significant as the location of the losses moves down the multicast tree, e.g. from 22% (case b) to 66% (case c); however, the variance in the performance of SRM+ is rather small. We also observed that the performance improvement is greater with a larger loss probability, e.g. the performance improvement of IRMA+ in case d is 5% whereas in case e it is 87%; similarly, the performance improvement of SRM+ in case d is 51% and in case e is 97%. We conjecture that the proposed loss notification mechanism is well suited to reliable multicast protocols on the MBone, which has been reported to have very low temporal and spatial loss correlations and where most packet losses are localized near the leaves of the multicast tree [11].

Test 4. Larger configuration

We now briefly illustrate that the performance improvement we have observed in the preceding tests still holds in a slightly more complex network configuration, shown as topology 4 in Figure 3. Although this is still a very simplistic model of real multicast environments, and hence the simulation results presented in this section are still far from comprehensive, we believe the performance results presented in this paper give an insight into the advantage of the simple loss notification mechanism.
We simulated three cases: (a) all links are error-free, (b) only the level 1 links have an independent loss probability of 0.1%, and (c) only the level 3 links have a loss probability of 0.1%. The effective throughput of each protocol is presented in Table 4.

Table 4: Effective throughput of TCP, IRMA, IRMA+, SRM, and SRM+ for cases a (0:0:0:0:0), b (0.1:0:0:0:0), and c (0:0:0.1:0:0), where the tuple gives the per-level link loss rates in percent.

We observe a somewhat different trend of performance improvements in this test compared to the previous one. IRMA+ still greatly improves performance when the packet losses are located close to the receivers and are mostly independent, and so does SRM+. When the congestion is located near the sender (case b), IRMA+ and SRM+ only slightly improve the performance of IRMA and SRM: by 13% and 7%, respectively. However, when most losses happen on the level 3 links, the performance improvement is much greater, since most receivers are connected via the level 3 links and hence most losses are independent. In this case, the performance of plain IRMA and SRM degrades severely due to the independent packet loss problem. From the results, we observe that IRMA+ and SRM+ handle the problem effectively, improving the performance of the protocols by 339% and 143%, respectively. Overall, we observed that: (a) the proposed loss notification mechanism handles the independent packet loss problem effectively; and (b) the proposed mechanism scales well in terms of network complexity; in fact, the percentage performance improvement increases with the network complexity. Essentially, loss notification is most effective when the independent losses are located close to the receivers, which is the common case for MBone loss characteristics observed in the empirical study [11]. We observe this trend in all the test cases with IRMA+. With SRM+, the trend was not very clear in small network configurations but became obvious with the larger network topology.

V. RELATED WORK

In this section, we review existing approaches to overcoming the independent loss problem in a multicast environment. Montgomery [13] proposed flow and congestion control mechanisms called the loss tolerant rate controller (LTRC) in the reliable multicast context. LTRC reduces the sending rate only when the loss rate reported by some receiver is greater than a threshold. In addition, it does not reduce the sending rate again within a certain time period, to ensure that the sending rate decrease is not too aggressive. The performance of LTRC is thus closely dependent on the choice of the threshold and the time period: if the time period is large, the sending rate decrease may be too slow and the algorithm may not be responsive; otherwise, the damping provided by the time period may not be effective.

DeLucia et al. [14] proposed a multicast congestion control mechanism that uses the concept of representatives, a small number of receivers that provide congestion (or loss) feedback. The basic idea is to reduce the amount of congestion feedback by ensuring that only representatives provide congestion feedback on behalf of all the receivers. It is the sender's job to choose which receivers are in the representative set, based on the recent feedback. The rate adaptation algorithm adopted by the sender is a variation of LIMD congestion control with TCP Vegas-like delay estimation. The performance, of course, is highly dependent on the choice and the effectiveness of the representatives [8].

The random listening algorithm (RLA) [15] tries to estimate the number of congested receivers (say n) in the multicast group, and reacts to congestion feedback with probability 1/n. The sender's congestion control algorithm is basically a rate-based modified LIMD (decrease with probability) and is updated on the basis of SACK-like feedback from the receivers. A receiver is considered to be congested if its loss probability is greater than a certain threshold presented in the paper. RLA was shown to be essentially fair to TCP in terms of bandwidth usage.

Bhattacharyya et al. [8] formulated the independent loss problem in detail and presented a general framework called filtered loss indication-based congestion avoidance (FLICA) for various LIMD-based multicast congestion control algorithms (including [13, 14]). They provided analysis and simulation results on different filter-based algorithms and concluded that filter algorithms can only alleviate the level of congestion but cannot handle all the problems. They also presented the description of an ideal multicast congestion control algorithm that can track the worst receiver in the multicast group at all times and showed that it can achieve the desired fairness properties. The algorithm is certainly effective, but at a relatively high cost: the sender needs to keep track of the congestion level of all the receivers.

The approaches mentioned above were typically proposed in the context of NAK- or SACK-based reliable multicast. In ACK-based protocols, researchers have typically tried to handle the multicast congestion control problem using local recovery, i.e. the retransmissions of lost packets are handled by local repair servers near the receivers without waiting for the sender to retransmit [1, 6, 7]. Some approaches perform local recovery as an enhancement to basic end-to-end retransmission [6], while others adopt a split-connection reliability semantics where the local servers themselves act as the sender [1, 7].
The loss notification mechanism proposed in this work can be viewed as a simple, efficient, and low-overhead instantiation of a filter in the FLICA framework. However, it is more robust than the RLA and representative schemes, and it is applicable to both NAK-based and ACK-based protocols. We are not trying to develop a new multicast congestion control algorithm; instead, our goal is to determine when it is appropriate to trigger the congestion control algorithm in the presence of independent losses at different receivers.

VI. SUMMARY

As the number of multicast receivers and links in the network increases, the number of independent losses also increases. A single-rate multicast sender must retransmit upon every loss feedback to achieve reliability, but it does not need to react to every loss for congestion control. Therefore, decoupling the congestion control and reliability mechanisms is necessary in a reliable multicast environment. To this end, we proposed a simple and efficient loss notification mechanism that can augment both ACK-based and NAK-based reliable multicast. A fortuitous consequence of decoupling congestion control from reliability is that we can also support congestion control for single-rate unreliable or semi-reliable multicast protocols using the same mechanism. The basic idea of loss notification is to trigger a congestion response only to the maximum of the losses experienced by any receiver, rather than to the sum of the losses experienced by all receivers. This takes care of the independent loss problem, and enables the single-rate multicast congestion control algorithm to closely approximate the ideal rate, i.e. the fair share of the bottleneck receiver. We have implemented the loss notification mechanism in both ACK-based and NAK-based protocols. Initial performance evaluation shows that loss notification is an effective approach to the independent loss problem, and that it is robust and easy to incorporate. Even though the proposed loss notification mechanism is very simple, tuning it appropriately to obtain optimal performance is still a challenging task, and is the focus of our ongoing work.

VII. REFERENCES

[1] J. C. Lin and S. Paul. RMTP: A Reliable Multicast Transport Protocol. Proceedings of IEEE INFOCOM, March.
[2] A. Koifman and S. Zabele. RAMP: A Reliable Adaptive Multicast Protocol. Proceedings of IEEE INFOCOM, March.
[3] H. W. Holbrook, S. K. Singhai, and D. R. Cheriton. Log-Based Receiver-Reliable Multicast for Distributed Interactive Simulation. Proceedings of ACM SIGCOMM, September.
[4] R. Talpade and M. H. Ammar. Single Connection Emulation (SCE): An Architecture for Providing a Reliable Multicast Transport Service. Proceedings of the 15th IEEE International Conference on Distributed Computing Systems, June.
[5] S. Floyd, V. Jacobson, C.-G. Liu, S. McCanne, and L. Zhang. A Reliable Multicast Framework for Light-weight Sessions and Application Level Framing. IEEE/ACM Transactions on Networking, November.
[6] K.-W. Lee, S. Ha, and V. Bharghavan. IRMA: A Reliable Multicast Architecture in the Internet. Proceedings of IEEE INFOCOM, March.
[7] I. Rhee, N. Balaguru, and G. N. Rouskas. MTCP: Scalable TCP-like Congestion Control for Reliable Multicast. Proceedings of IEEE INFOCOM, March.
[8] S. Bhattacharyya, D. Towsley, and J. Kurose. The Loss Path Multiplicity Problem in Multicast Congestion Control. Proceedings of IEEE INFOCOM, March.
[9] B. DeCleene, S. Bhattacharyya, T. Friedman, M. Keaton, J. Kurose, D. Rubenstein, and D. Towsley. Reliable Multicast Framework (RMF) Specification. White paper, University of Massachusetts, March.
[10] V. Jacobson. Congestion Avoidance and Control. Proceedings of ACM SIGCOMM, August.
[11] M. Yajnik, J. Kurose, and D. Towsley. Packet Loss Correlation in the MBone Multicast Network. Proceedings of IEEE GLOBECOM, November.
[12] Network Simulator ns2. On-line document.
[13] T. Montgomery. A Loss Tolerant Rate Controller for Reliable Multicast. Technical Report NASA-IVV, August.
[14] D. DeLucia and K. Obraczka. A Multicast Congestion Control Mechanism for Reliable Multicast. Proceedings of IEEE Symposium on Computers and Communications, June.
[15] H. A. Wang and M. Schwartz. Achieving Bounded Fairness for Multicast and TCP Traffic in the Internet. Proceedings of ACM SIGCOMM, September 1998.


More information

Congestion Control. Daniel Zappala. CS 460 Computer Networking Brigham Young University

Congestion Control. Daniel Zappala. CS 460 Computer Networking Brigham Young University Congestion Control Daniel Zappala CS 460 Computer Networking Brigham Young University 2/25 Congestion Control how do you send as fast as possible, without overwhelming the network? challenges the fastest

More information

Randomization. Randomization used in many protocols We ll study examples:

Randomization. Randomization used in many protocols We ll study examples: Randomization Randomization used in many protocols We ll study examples: Ethernet multiple access protocol Router (de)synchronization Switch scheduling 1 Ethernet Single shared broadcast channel 2+ simultaneous

More information

Randomization used in many protocols We ll study examples: Ethernet multiple access protocol Router (de)synchronization Switch scheduling

Randomization used in many protocols We ll study examples: Ethernet multiple access protocol Router (de)synchronization Switch scheduling Randomization Randomization used in many protocols We ll study examples: Ethernet multiple access protocol Router (de)synchronization Switch scheduling 1 Ethernet Single shared broadcast channel 2+ simultaneous

More information

Chapter III: Transport Layer

Chapter III: Transport Layer Chapter III: Transport Layer UG3 Computer Communications & Networks (COMN) Mahesh Marina mahesh@ed.ac.uk Slides thanks to Myungjin Lee and copyright of Kurose and Ross Principles of congestion control

More information

TCP based Receiver Assistant Congestion Control

TCP based Receiver Assistant Congestion Control International Conference on Multidisciplinary Research & Practice P a g e 219 TCP based Receiver Assistant Congestion Control Hardik K. Molia Master of Computer Engineering, Department of Computer Engineering

More information

Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways

Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways Enhancing TCP Throughput over Lossy Links Using ECN-capable RED Gateways Haowei Bai AES Technology Centers of Excellence Honeywell Aerospace 3660 Technology Drive, Minneapolis, MN 5548 E-mail: haowei.bai@honeywell.com

More information

An analysis of retransmission strategies for reliable multicast protocols

An analysis of retransmission strategies for reliable multicast protocols An analysis of retransmission strategies for reliable multicast protocols M. Schuba, P. Reichl Informatik 4, Aachen University of Technology 52056 Aachen, Germany email: marko peter@i4.informatik.rwth-aachen.de

More information

Chapter 3 outline. 3.5 Connection-oriented transport: TCP. 3.6 Principles of congestion control 3.7 TCP congestion control

Chapter 3 outline. 3.5 Connection-oriented transport: TCP. 3.6 Principles of congestion control 3.7 TCP congestion control Chapter 3 outline 3.1 Transport-layer services 3.2 Multiplexing and demultiplexing 3.3 Connectionless transport: UDP 3.4 Principles of reliable data transfer 3.5 Connection-oriented transport: TCP segment

More information

An Empirical Study of Reliable Multicast Protocols over Ethernet Connected Networks

An Empirical Study of Reliable Multicast Protocols over Ethernet Connected Networks An Empirical Study of Reliable Multicast Protocols over Ethernet Connected Networks Ryan G. Lane Daniels Scott Xin Yuan Department of Computer Science Florida State University Tallahassee, FL 32306 {ryanlane,sdaniels,xyuan}@cs.fsu.edu

More information

CS 557 Congestion and Complexity

CS 557 Congestion and Complexity CS 557 Congestion and Complexity Observations on the Dynamics of a Congestion Control Algorithm: The Effects of Two-Way Traffic Zhang, Shenker, and Clark, 1991 Spring 2013 The Story So Far. Transport layer:

More information

Channel Quality Based Adaptation of TCP with Loss Discrimination

Channel Quality Based Adaptation of TCP with Loss Discrimination Channel Quality Based Adaptation of TCP with Loss Discrimination Yaling Yang, Honghai Zhang, Robin Kravets University of Illinois-Urbana Champaign Abstract TCP responds to all losses by invoking congestion

More information

CS519: Computer Networks. Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control

CS519: Computer Networks. Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control : Computer Networks Lecture 5, Part 4: Mar 29, 2004 Transport: TCP congestion control TCP performance We ve seen how TCP the protocol works Sequencing, receive window, connection setup and teardown And

More information

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 Question 344 Points 444 Points Score 1 10 10 2 10 10 3 20 20 4 20 10 5 20 20 6 20 10 7-20 Total: 100 100 Instructions: 1. Question

More information

CS4700/CS5700 Fundamentals of Computer Networks

CS4700/CS5700 Fundamentals of Computer Networks CS4700/CS5700 Fundamentals of Computer Networks Lecture 15: Congestion Control Slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica, Hui Zhang Alan Mislove amislove at ccs.neu.edu

More information

Transport layer issues

Transport layer issues Transport layer issues Dmitrij Lagutin, dlagutin@cc.hut.fi T-79.5401 Special Course in Mobility Management: Ad hoc networks, 28.3.2007 Contents Issues in designing a transport layer protocol for ad hoc

More information

TCP PERFORMANCE FOR FUTURE IP-BASED WIRELESS NETWORKS

TCP PERFORMANCE FOR FUTURE IP-BASED WIRELESS NETWORKS TCP PERFORMANCE FOR FUTURE IP-BASED WIRELESS NETWORKS Deddy Chandra and Richard J. Harris School of Electrical and Computer System Engineering Royal Melbourne Institute of Technology Melbourne, Australia

More information

CS268: Beyond TCP Congestion Control

CS268: Beyond TCP Congestion Control TCP Problems CS68: Beyond TCP Congestion Control Ion Stoica February 9, 004 When TCP congestion control was originally designed in 1988: - Key applications: FTP, E-mail - Maximum link bandwidth: 10Mb/s

More information

Congestion control in TCP

Congestion control in TCP Congestion control in TCP If the transport entities on many machines send too many packets into the network too quickly, the network will become congested, with performance degraded as packets are delayed

More information

Random Early Detection (RED) gateways. Sally Floyd CS 268: Computer Networks

Random Early Detection (RED) gateways. Sally Floyd CS 268: Computer Networks Random Early Detection (RED) gateways Sally Floyd CS 268: Computer Networks floyd@eelblgov March 20, 1995 1 The Environment Feedback-based transport protocols (eg, TCP) Problems with current Drop-Tail

More information

F-RTO: An Enhanced Recovery Algorithm for TCP Retransmission Timeouts

F-RTO: An Enhanced Recovery Algorithm for TCP Retransmission Timeouts F-RTO: An Enhanced Recovery Algorithm for TCP Retransmission Timeouts Pasi Sarolahti Nokia Research Center pasi.sarolahti@nokia.com Markku Kojo, Kimmo Raatikainen University of Helsinki Department of Computer

More information

Understanding TCP Parallelization. Qiang Fu. TCP Performance Issues TCP Enhancements TCP Parallelization (research areas of interest)

Understanding TCP Parallelization. Qiang Fu. TCP Performance Issues TCP Enhancements TCP Parallelization (research areas of interest) Understanding TCP Parallelization Qiang Fu qfu@swin.edu.au Outline TCP Performance Issues TCP Enhancements TCP Parallelization (research areas of interest) Related Approaches TCP Parallelization vs. Single

More information

under grant CNS This work was supported in part by the National Science Foundation (NSF)

under grant CNS This work was supported in part by the National Science Foundation (NSF) Coordinated Multi-Layer Loss Recovery in TCP over Optical Burst-Switched (OBS) Networks Rajesh RC Bikram, Neal Charbonneau, and Vinod M. Vokkarane Department of Computer and Information Science, University

More information

Transport Protocols and TCP

Transport Protocols and TCP Transport Protocols and TCP Functions Connection establishment and termination Breaking message into packets Error recovery ARQ Flow control Multiplexing, de-multiplexing Transport service is end to end

More information

Lecture 11. Transport Layer (cont d) Transport Layer 1

Lecture 11. Transport Layer (cont d) Transport Layer 1 Lecture 11 Transport Layer (cont d) Transport Layer 1 Agenda The Transport Layer (continue) Connection-oriented Transport (TCP) Flow Control Connection Management Congestion Control Introduction to the

More information

ENSC 835: COMMUNICATION NETWORKS

ENSC 835: COMMUNICATION NETWORKS ENSC 835: COMMUNICATION NETWORKS Evaluation of TCP congestion control mechanisms using OPNET simulator Spring 2008 FINAL PROJECT REPORT LAXMI SUBEDI http://www.sfu.ca/~lsa38/project.html lsa38@cs.sfu.ca

More information

ENHANCING ENERGY EFFICIENT TCP BY PARTIAL RELIABILITY

ENHANCING ENERGY EFFICIENT TCP BY PARTIAL RELIABILITY ENHANCING ENERGY EFFICIENT TCP BY PARTIAL RELIABILITY L. Donckers, P.J.M. Havinga, G.J.M. Smit, L.T. Smit University of Twente, department of Computer Science, PO Box 217, 7 AE Enschede, the Netherlands

More information

A Survey on Quality of Service and Congestion Control

A Survey on Quality of Service and Congestion Control A Survey on Quality of Service and Congestion Control Ashima Amity University Noida, U.P, India batra_ashima@yahoo.co.in Sanjeev Thakur Amity University Noida, U.P, India sthakur.ascs@amity.edu Abhishek

More information

6.033 Spring 2015 Lecture #11: Transport Layer Congestion Control Hari Balakrishnan Scribed by Qian Long

6.033 Spring 2015 Lecture #11: Transport Layer Congestion Control Hari Balakrishnan Scribed by Qian Long 6.033 Spring 2015 Lecture #11: Transport Layer Congestion Control Hari Balakrishnan Scribed by Qian Long Please read Chapter 19 of the 6.02 book for background, especially on acknowledgments (ACKs), timers,

More information

RCRT:Rate-Controlled Reliable Transport Protocol for Wireless Sensor Networks

RCRT:Rate-Controlled Reliable Transport Protocol for Wireless Sensor Networks RCRT:Rate-Controlled Reliable Transport Protocol for Wireless Sensor Networks JEONGYEUP PAEK, RAMESH GOVINDAN University of Southern California 1 Applications that require the transport of high-rate data

More information

Fast Retransmit. Problem: coarsegrain. timeouts lead to idle periods Fast retransmit: use duplicate ACKs to trigger retransmission

Fast Retransmit. Problem: coarsegrain. timeouts lead to idle periods Fast retransmit: use duplicate ACKs to trigger retransmission Fast Retransmit Problem: coarsegrain TCP timeouts lead to idle periods Fast retransmit: use duplicate ACKs to trigger retransmission Packet 1 Packet 2 Packet 3 Packet 4 Packet 5 Packet 6 Sender Receiver

More information

Transport Layer PREPARED BY AHMED ABDEL-RAOUF

Transport Layer PREPARED BY AHMED ABDEL-RAOUF Transport Layer PREPARED BY AHMED ABDEL-RAOUF TCP Flow Control TCP Flow Control 32 bits source port # dest port # head len sequence number acknowledgement number not used U A P R S F checksum Receive window

More information

Exercises TCP/IP Networking With Solutions

Exercises TCP/IP Networking With Solutions Exercises TCP/IP Networking With Solutions Jean-Yves Le Boudec Fall 2009 3 Module 3: Congestion Control Exercise 3.2 1. Assume that a TCP sender, called S, does not implement fast retransmit, but does

More information

Dynamic Deferred Acknowledgment Mechanism for Improving the Performance of TCP in Multi-Hop Wireless Networks

Dynamic Deferred Acknowledgment Mechanism for Improving the Performance of TCP in Multi-Hop Wireless Networks Dynamic Deferred Acknowledgment Mechanism for Improving the Performance of TCP in Multi-Hop Wireless Networks Dodda Sunitha Dr.A.Nagaraju Dr. G.Narsimha Assistant Professor of IT Dept. Central University

More information

Performance Evaluation of TCP Westwood. Summary

Performance Evaluation of TCP Westwood. Summary Summary This project looks at a fairly new Transmission Control Protocol flavour, TCP Westwood and aims to investigate how this flavour of TCP differs from other flavours of the protocol, especially TCP

More information

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data?

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data? Congestion Control End Hosts CSE 51 Lecture 7, Spring. David Wetherall Today s question How fast should the sender transmit data? Not tooslow Not toofast Just right Should not be faster than the receiver

More information

TCP-Peach and FACK/SACK Options: Putting The Pieces Together

TCP-Peach and FACK/SACK Options: Putting The Pieces Together TCP-Peach and FACK/SACK Options: Putting The Pieces Together Giacomo Morabito, Renato Narcisi, Sergio Palazzo, Antonio Pantò Dipartimento di Ingegneria Informatica e delle Telecomunicazioni University

More information

A Tree-Based Reliable Multicast Scheme Exploiting the Temporal Locality of Transmission Errors

A Tree-Based Reliable Multicast Scheme Exploiting the Temporal Locality of Transmission Errors A Tree-Based Reliable Multicast Scheme Exploiting the Temporal Locality of Transmission Errors Jinsuk Baek 1 Jehan-François Pâris 1 Department of Computer Science University of Houston Houston, TX 77204-3010

More information

Chapter 24 Congestion Control and Quality of Service 24.1

Chapter 24 Congestion Control and Quality of Service 24.1 Chapter 24 Congestion Control and Quality of Service 24.1 Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display. 24-1 DATA TRAFFIC The main focus of congestion control

More information

Interference avoidance in wireless multi-hop networks 1

Interference avoidance in wireless multi-hop networks 1 Interference avoidance in wireless multi-hop networks 1 Youwei Zhang EE228A Project Report, Spring 2006 1 Motivation Wireless networks share the same unlicensed parts of the radio spectrum with devices

More information

Buffer Requirements for Zero Loss Flow Control with Explicit Congestion Notification. Chunlei Liu Raj Jain

Buffer Requirements for Zero Loss Flow Control with Explicit Congestion Notification. Chunlei Liu Raj Jain Buffer Requirements for Zero Loss Flow Control with Explicit Congestion Notification Chunlei Liu Raj Jain Department of Computer and Information Science The Ohio State University, Columbus, OH 432-277

More information

Predicting connection quality in peer-to-peer real-time video streaming systems

Predicting connection quality in peer-to-peer real-time video streaming systems Predicting connection quality in peer-to-peer real-time video streaming systems Alex Giladi Jeonghun Noh Information Systems Laboratory, Department of Electrical Engineering Stanford University, Stanford,

More information

100 Mbps. 100 Mbps S1 G1 G2. 5 ms 40 ms. 5 ms

100 Mbps. 100 Mbps S1 G1 G2. 5 ms 40 ms. 5 ms The Influence of the Large Bandwidth-Delay Product on TCP Reno, NewReno, and SACK Haewon Lee Λ, Soo-hyeoung Lee, and Yanghee Choi School of Computer Science and Engineering Seoul National University San

More information

TCP OVER AD HOC NETWORK

TCP OVER AD HOC NETWORK TCP OVER AD HOC NETWORK Special course on data communications and networks Zahed Iqbal (ziqbal@cc.hut.fi) Agenda Introduction Versions of TCP TCP in wireless network TCP in Ad Hoc network Conclusion References

More information

MaVIS: Media-aware Video Streaming Mechanism

MaVIS: Media-aware Video Streaming Mechanism MaVIS: Media-aware Video Streaming Mechanism Sunhun Lee and Kwangsue Chung School of Electronics Engineering, Kwangwoon University, Korea sunlee@adamskwackr and kchung@kwackr Abstract Existing streaming

More information

CIS 632 / EEC 687 Mobile Computing

CIS 632 / EEC 687 Mobile Computing CIS 632 / EEC 687 Mobile Computing TCP in Mobile Networks Prof. Chansu Yu Contents Physical layer issues Communication frequency Signal propagation Modulation and Demodulation Channel access issues Multiple

More information

Multimedia Systems Project 3

Multimedia Systems Project 3 Effectiveness of TCP for Video Transport 1. Introduction In this project, we evaluate the effectiveness of TCP for transferring video applications that require real-time guarantees. Today s video applications

More information

TCP in Asymmetric Environments

TCP in Asymmetric Environments TCP in Asymmetric Environments KReSIT, IIT Bombay Vijay T. Raisinghani TCP in Asymmetric Environments 1 TCP Overview Four congestion control algorithms Slow start Congestion avoidance Fast retransmit Fast

More information

Fast Convergence for Cumulative Layered Multicast Transmission Schemes

Fast Convergence for Cumulative Layered Multicast Transmission Schemes Fast Convergence for Cumulative Layered Multicast Transmission Schemes A. Legout and E. W. Biersack Institut EURECOM B.P. 193, 694 Sophia Antipolis, FRANCE flegout,erbig@eurecom.fr October 29, 1999 Eurecom

More information

Rate Based Pacing with Various TCP Variants

Rate Based Pacing with Various TCP Variants International OPEN ACCESS Journal ISSN: 2249-6645 Of Modern Engineering Research (IJMER) Rate Based Pacing with Various TCP Variants Mr. Sreekanth Bandi 1, Mr.K.M.Rayudu 2 1 Asst.Professor, Dept of CSE,

More information

Congestion. Can t sustain input rate > output rate Issues: - Avoid congestion - Control congestion - Prioritize who gets limited resources

Congestion. Can t sustain input rate > output rate Issues: - Avoid congestion - Control congestion - Prioritize who gets limited resources Congestion Source 1 Source 2 10-Mbps Ethernet 100-Mbps FDDI Router 1.5-Mbps T1 link Destination Can t sustain input rate > output rate Issues: - Avoid congestion - Control congestion - Prioritize who gets

More information

Lecture 7: Sliding Windows. CSE 123: Computer Networks Geoff Voelker (guest lecture)

Lecture 7: Sliding Windows. CSE 123: Computer Networks Geoff Voelker (guest lecture) Lecture 7: Sliding Windows CSE 123: Computer Networks Geoff Voelker (guest lecture) Please turn in HW #1 Thank you From last class: Sequence Numbers Sender Receiver Sender Receiver Timeout Timeout Timeout

More information

Lecture 14: Congestion Control"

Lecture 14: Congestion Control Lecture 14: Congestion Control" CSE 222A: Computer Communication Networks Alex C. Snoeren Thanks: Amin Vahdat, Dina Katabi Lecture 14 Overview" TCP congestion control review XCP Overview 2 Congestion Control

More information