TCP over ATM: ABR or UBR?

Teunis J. Ott and Neil Aggarwal
Bellcore
445 South Street
Morristown, NJ 07962-1910

Abstract

This paper reports on a simulation study of the relative performance of the ATM ABR and UBR service categories in transporting TCP/IP flows through an ATM network. The objective is two-fold: (i) to understand the interaction between the window-based end-to-end flow control of TCP and the rate-based flow control of ABR, which is restricted to the ATM part of the network, and (ii) to decide whether the greater complexity of ABR (compared with UBR) pays off in better performance of ABR. The most important conclusion is that there does not seem to be strong evidence that for TCP/IP workloads the greater complexity of ABR pays off in better performance.

1 Introduction

The ATM Forum has finalized a draft standard [1] for a number of service categories for transporting the cells of VCs (Virtual Circuits) through an ATM network. Among these service categories are ABR (Available Bit Rate) and UBR (Unspecified Bit Rate). Section 2 of this paper contains a quick sketch of how these service categories work. Details can be found in [1].

This paper reports on a simulation study of the performance of the ABR and UBR service categories in transporting the ATM cells resulting from segmentation of TCP/IP packets. Among the goals are (i) to understand the interaction between the TCP/IP window-based end-to-end protocol and the rate-based ABR flow control, which is restricted to the ATM part of the network, and (ii) to decide whether the greater complexity of ABR pays off in better performance. We study this issue in particular in what we call the "Large Cross-Section" situation, where not just a few tens, but actually a few hundreds or even a few thousands of TCP flows are competing for bandwidth in an ATM bottleneck link.

Among the more important conclusions and findings are:

1. UBR vs ABR. There is no convincing evidence yet that the greater complexity of ABR (compared with UBR) pays off in better performance, at least as long as the workload is TCP/IP.

2. RM cells. (See below for the definition of RM cells.) When the rate of an ABR VC decreases below some point, the overhead in RM cells increases significantly. This decrease in rate can be due either to a low ACR or to low bandwidths of the hosts feeding the VC.

3. Segmentation buffer drain time. (See below for the definition of the segmentation buffer.) When the ACR of a VC decreases, the drain time increases and the risk that packet loss in a TCP flow causes segmentation buffer underflow disappears.

Consequences of these findings are discussed later in this paper. The primary focus of the paper is on total throughput in a bottleneck link under overload. A secondary focus is on how the throughput is shared between types of connections (expressed in the bandwidths of the original sources of the TCP flows).

For ABR, our simulation uses the ER (Explicit Rate, see [1] or Section 2 of this paper) version. Our simulation contains a "switch behavior" (that is, a method for switches to set the ER values in Resource Management cells) of our own design. This design is based on [12] and follows the max-min philosophy. For more complex networks, the switch behavior is expected to have a big impact on performance. We will argue that in the relatively simple networks we study, performance is relatively insensitive to the switch behavior. We will also argue that our findings must be expected to remain valid in more complex networks.

Section 2 gives an outline of ABR and UBR. Section 3 gives an outline of the networks we simulate.
Section 4 describes the simulation experiments we did. Section 5 explains how to read the simulation outputs. Section 6 makes a number of observations about the simulation results, and Section 7 explains the results. Finally, Section 8 contains the conclusions.

2 Setting the stage

The original motivation for the "TCP/IP over ATM with ABR" work was to study the interaction between the TCP/IP end-to-end window flow control and the ABR rate-based flow control in the ATM part of the network; see Figure 1.

[Figure 1: A single TCP/IP Connection over ATM with ABR]

In Figure 1, a host H (source) sends a stream of data packets to a host (destination) H*. These two hosts use the TCP/IP protocol, so that H* sends acknowledgement packets back to H. For more detail on the TCP/IP protocol, see [20]. In the situation of Figure 1, the IP packets go from H through an arbitrary network of routers (not shown in Figure 1) to what we call an Edge-Router, where packets are segmented into ATM cells. We call the Edge-Router where segmentation occurs the entry Edge-Router (for this flow). These cells then move through an arbitrary ATM network to an exit Edge-Router, where reassembly occurs, and the resulting IP packets go through another arbitrary network of routers (not shown) to the host H*. The ATM network between the two Edge-Routers is arbitrary; in Figure 1 it is represented by a single ATM switch. Edge-Routers are a device of our invention. In reality, the function can be either in the first ATM switch on the way from H to H*, or in the last packet router before entering the ATM part of the network. Acknowledgement packets follow the opposite path; the roles of entry and exit Edge-Routers are then reversed.

Every TCP connection is assigned to an ATM VC (Virtual Circuit). A TCP connection can have its own VC, or can share its VC with (possibly many) other TCP connections. After segmentation, the resulting cells are put in what we call the segmentation buffer of its VC. "Generically", there is a VS (Virtual Source) assigned to the VC which controls the rate at which cells flow from the segmentation buffer to the output port of the Edge-Router.

In what we call un-shaped UBR there effectively is no segmentation buffer or VS: after segmentation, the cells are immediately put into the appropriate cell output port of the Edge-Router. In what we call shaped UBR, there is a segmentation buffer, but the VS has a very simple role: there is a constant rate ACR (Allowed Cell Rate) at which cells exit from the segmentation buffer as long as that buffer is not empty. This ACR of a shaped UBR VC presumably is determined at set-up of the VC, and remains constant until tear-down of the VC.

In ABR, the ACR is a dynamically changing entity. There are two flavors of ABR: ER (Explicit Rate) and EFCI (Explicit Forward Congestion Indicator). Our simulation contains the ER version. Both the ER and EFCI versions use RM (Resource Management) cells. In the ER version, these RM cells flow as in Figure 2.

[Figure 2: The flow of RM cells]

In ABR, every VC not only has a VS as described above, but also a VD (Virtual Destination). In addition, in ATM every VC has a reverse VC. In our simulation, we have assumed that the data packets use the forward VC and the acknowledgement packets use the reverse VC. Both VCs have both a VS and a VD (VSf and VDf for the forward VC, and VSr and VDr for the reverse VC). In the ER version of ABR, each VSf frequently sends RM cells along the VC to its VDf.
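The next paragraph describes how the switches along the forward VC may reduce the ER field carried in these RM cells. As a preview, here is a deliberately simplified sketch; the function and variable names are ours, and the equal-split rule is only a crude stand-in for the max-min switch behavior of [12] that our simulation actually uses:

```python
# Toy illustration of ER marking by switches along a forward VC.
# Hypothetical simplification: each switch offers every active VC an
# equal share of its ABR capacity (a crude max-min flavor); the switch
# behavior used in the paper's simulation is more elaborate.

def mark_er(requested_er, switch_states):
    """Return the ER value the source will eventually read back.

    requested_er  -- rate (cells/sec) the VSf wrote into the RM cell
    switch_states -- list of (abr_capacity_cells_per_sec, n_active_vcs)
                     for the switches the RM cell traverses
    """
    er = requested_er
    for capacity, n_active in switch_states:
        fair_share = capacity / n_active  # equal split among active VCs
        er = min(er, fair_share)          # switches may only decrease ER
    return er

# Example: a 96,000 cells/sec bottleneck shared by 300 active VCs
# limits the returned ER to 320 cells/sec.
print(mark_er(requested_er=96_000, switch_states=[(96_000, 300)]))
```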
One of the fields in the RM cell is the ER (Explicit Rate) field, in which the VS places the rate at which it would like to send. All switches along the VC inspect the RM cell and may decrease the ER. The VDf hands the RM cell over to the VSr, which sends it along the reverse VC to the VDr. The VDr then hands the RM cell over to the VSf, which reads the contents and updates the ACR of the forward VC. The VSf always discards returning RM cells. RM cells may already be discarded before returning; see [1] for details.

The ATM Forum draft Traffic Management standard (see [1]) precisely prescribes the layout of the RM cell and the way the VS uses returning RM cells to update the ACR. The ATM Forum draft standard also describes when the VS must generate and send a forward RM cell. An oversimplified, but essentially correct, description is that the VS must make sure that one out of every Nrm (default: Nrm = 32) cells it sends is a forward RM cell, unless that would make the time between successive forward RM cells more than Trm (default: Trm = 100 msec). In that case, a forward RM cell is sent once every 100 msec.

The ATM Forum draft Traffic Management standard leaves what we call the switch behavior (the way switches use information about their state of congestion to decrease the ER in the RM cells) completely open. Our simulation contains a switch behavior of our own design. The ATM Forum draft standard, while prescribing the behavior of the VS in great detail, leaves open the choice of the total bandwidth the VS requests in its RM cells.

Deciding how the VS must set the ER (requested rate) in an outgoing forward RM cell is not easy. For example, a VS does not know whether an empty segmentation buffer is caused by recent packet loss, or is caused by the application in source H, for example completion of the file being ftp-ed. At this point, the major open questions are what we call the switch algorithms and the source behavior. Here, source behavior is not the behavior of the VS or VD, nor TCP/IP behavior, but the behavior of the "True Source": the applications in hosts like H that hand over data to the TCP layer in those hosts. Currently, the source behavior used in the simulation still is that of a very large ("infinitely large") file being ftp-ed.
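The forward-RM generation rule sketched above can be written out explicitly. A minimal sketch, with names of our own choosing and assuming the defaults Nrm = 32 and Trm = 100 msec; the ATM Forum draft standard [1] has the authoritative rule:

```python
# Sketch of the (simplified) forward-RM scheduling rule of Section 2:
# at least one cell in every Nrm cells is a forward RM cell, but if the
# VC is slow, fall back to one forward RM cell per Trm.

NRM = 32          # default: 1 RM cell per 32 cells sent
TRM = 0.100       # default: at most 100 msec between forward RM cells

def next_cell_is_rm(cells_since_last_rm, now, last_rm_time):
    if cells_since_last_rm >= NRM - 1:   # 31 data cells since last RM
        return True
    if now - last_rm_time >= TRM:        # slow VC: Trm timer expired
        return True
    return False
```

With NRM = 32, a VC sending fewer than about 32 cells per 100 msec falls into the timer branch; this is the source of the RM-cell overhead discussed in Section 7.1.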
3 The networks used in the simulations

Most simulation results reported on in this paper used a network of the type described in Figures 3 and 4. In these networks we have (see Figure 3) two ATM switches connected by a DS3 ATM link. This link between the two switches is the bottleneck link. Each of the two switches is connected, by DS3 ATM links, to 31 Edge-Routers. The numbering of the Edge-Routers is as in Figure 3. Each of these Edge-Routers is connected, through a DS3 packet link, to a DS3 host. These DS3 hosts (which have a DS3 port on the network) represent (for example) supercomputer centers. Each of the Edge-Routers also is connected, through DS3 packet links, to 24 packet routers. Each of the packet routers connected to the Edge-Routers in turn is connected (see Figure 4) through a T1 link with a T1 (or DS1) host, through DS0 links with 24 DS0 hosts, and through 32-Kbit/sec links with 48 of what we call "32-Kbit/sec" hosts. We often denote the 32-Kbit/sec hosts as "DSh" hosts. Details about the networks simulated, such as propagation delays and buffer sizes, can be found in Appendix A.

[Figure 3: Network Description - Overview]
[Figure 4: Network Description - Detail View]

4 The Experiments

Our simulation experiments can be categorized along three dimensions:

ABR versus UBR VCs: We either have all ABR VCs or all UBR VCs.

Shared versus Non-Shared VCs: In Shared VCs, a number (possibly a large number) of TCP flows shares a VC and (in the case of ABR VCs) the segmentation buffer of that VC. In Non-Shared VCs, every TCP flow has a VC for itself.

Homogeneous versus Heterogeneous Loads: In the Homogeneous runs, all TCP flows have the same home port bandwidth (32 Kbit/sec, DS0, DS1, or "DS3"). In the Heterogeneous runs, there are TCP flows of different home port bandwidths (32 Kbit/sec, DS0, DS1, "DS3"). In Heterogeneous Shared runs, all TCP flows sharing a VC are of the same home bandwidth. Since in the networks we are using two TCP connections that have the same entry Edge-Router also have the same exit Edge-Router, we achieved this by having four VCs between every Edge-Router and its mate at the other side of the switches: one for the DS3 connection (at most one), one for the DS1 connections (at most 24), one for the DS0 connections (at most 24*24 = 576), and one for the 32-Kbit/sec connections (at most 24*48 = 1152).

For the ABR VCs we must specify the size of the segmentation buffer. For Shared ABR VCs this always is 3072 cells (see Table 1). For Non-Shared ABR VCs we decided to let the size of the segmentation buffer depend on the home port bandwidth of the TCP connection. We have two ways of scaling the size of the segmentation buffer with the home bandwidth of the TCP flow; see Table 1.

[Table 1: Maxwnd and Segmentation Buffer Sizes. For each connection type (DS3, DS1, DS0, DSh) the table lists Maxwnd in packets and in cells, and the segmentation buffer size in cells under square-root (SQRT) scaling, under linear (LIN) scaling, and for Shared VCs.]

Heterogeneous runs have the form (a, b, c, d), which means that at time zero a DS3 connections, b DS1 connections, c DS0 connections, and d DSh connections start up. Most of those runs ran for 100 seconds. A few ran for only 30 seconds.

5 Simulation Outputs

The output of primary interest is the total "good" bandwidth utilization in the bottleneck link, i.e., utilization in terms of cells that are parts of data packets and that contribute to getting a file transferred from source host to destination host. RM cells are considered overhead and are not part of this good bandwidth utilization. Throughput is expressed in "normalized throughput". If the VCs are ABR VCs, the RM cells put a ceiling of less than 31/32 on the normalized throughput. In the case of Heterogeneous runs we are also interested in the amount of (normalized) bandwidth obtained by the different types of TCP flows.

In the output plots, we use the "sum of home bandwidths" of the TCP connections as a surrogate for the offered load. This is normalized to "DS3 equivalents". Thus, a simulation run with a DS3 TCP connections, b DS1 connections, c DS0 connections, and d 32-Kbit/sec connections has an offered load of a + b/24 + c/576 + d/1152 DS3 equivalents. When this offered load is above 1, it is more than the bottleneck ATM link can handle, and a major question is how much traffic will actually be carried. Thus, we plot carried (normalized) throughput versus (DS3-equivalent) offered load.
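The DS3-equivalent normalization is a simple weighted sum; a minimal sketch (the function name is ours), checked against the k = 288 example used later in Section 6.3:

```python
# DS3-equivalent offered load for a run (a, b, c, d), following the
# normalization of Section 5: one DS3 source counts as 1, and 24 DS1,
# 576 DS0, or 1152 32-Kbit/sec sources each count as one DS3 equivalent.

def offered_load(a, b, c, d):
    return a + b / 24 + c / 576 + d / 1152

# 288 DS0 flows plus 576 DSh flows give an offered load of 1.0
# (nominally equivalent to one DS3 connection).
assert offered_load(0, 0, 288, 576) == 1.0
```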

6 Simulation Results

6.1 Homogeneous, Non-Shared

Figures 5 and 6 show the results for Homogeneous, Non-Shared (unshaped) UBR. One plot is for an offered load in the range 0 to 1.3, the other over the range 0 to 8. Each plot contains 4 curves: only DS3, only DS1, only DS0, and only 32-Kbit/sec connections. By looking at one offered load (say 2.0), we can compare the effect of 2 DS3 connections with that of 48 DS1 connections, or of 1152 DS0 connections, or of 2304 32-Kbit/sec connections. Because of the 7 ATM cells per 296-byte packet, the DS0 and DSh curves reach a normalized throughput of nearly one at an offered load slightly below one. Apart from this only apparent anomaly, the plots look quite good: for offered loads below the link capacity, the throughput equals the offered load, while for larger offered loads the throughput almost equals the link capacity.

[Figure 5: Homogeneous, Non-Shared UBR, low load]
[Figure 6: Homogeneous, Non-Shared UBR, high load]

Figures 7, 8, 9, and 10 give similar results for Homogeneous, Non-Shared ABR. Since in ABR we have to specify the size of the segmentation buffers, we chose two options: "Linear Scaling" and "Square Root Scaling"; see Table 1. Figures 7 and 8 give the results for Square Root Scaling, and Figures 9 and 10 give the results for Linear Scaling. In both cases, for DS3 connections and DS1 connections the "perfect" behavior we saw with UBR continues: at low offered load, the throughput equals the offered load, while at high offered load the throughput is close to the link capacity. This pattern changes with DS0 and DSh connections: the throughput decreases drastically when the offered load increases beyond the link capacity. This is the behavior that led to the conclusion stated in the introduction. A discussion will be given in Section 7.

[Figure 7: ABR with Square Root Scaled Segmentation Buffer, low load]
[Figure 8: ABR with Square Root Scaled Segmentation Buffer, high load]
[Figure 9: ABR with Linearly Scaled Segmentation Buffer, low load]
[Figure 10: ABR with Linearly Scaled Segmentation Buffer, high load]

6.2 Homogeneous, Shared

For Shared Homogeneous UBR the results are, as expected, very close to those of Non-Shared Homogeneous UBR, and we decided not to present them. For Shared Homogeneous ABR, a new segmentation buffer size had to be chosen. We chose a size of 3072 cells, independent of the number or type of TCP connections using the VC. The results for Shared Homogeneous ABR are given in Figures 11 and 12, and show the almost perfect behavior we saw in UBR (often even slightly better).

6.3 Heterogeneous, Non-Shared

For Heterogeneous runs we take a vector like (for example) (0, 0, 1, 2) (see Section 4 for an explanation of (a, b, c, d)) and do simulation runs for k*(0, 0, 1, 2) over a range of values of k. With k = 288 this would be a run with 288 DS0 TCP connections and 576 DSh TCP connections, for an "offered load" of 1 (nominally equivalent to one DS3 TCP connection).

For Heterogeneous Non-Shared UBR we chose the vectors (0, 0, 1, 2) (Figure 13), (0, 1, 24, 48) (Figure 14), and (1, 24, 576, 0) (Figure 15). These figures give not only the total normalized throughput on the bottleneck link, but also the shares obtained by the various components of the load. In Figure 13, DS0 and DSh contribute equal parts to the offered load, and maybe it would be fair if they got equal shares of the normalized throughput. To a good degree this is true: at overloads by a factor 1 to 3.5, DSh gets a bit more than its fair share, and at larger overloads DS0 gets the advantage.

[Figure 11: Shared Homogeneous ABR, low load]
[Figure 12: Shared Homogeneous ABR, high load]
[Figure 13: Heterogeneous Non-Shared UBR with (0, 0, 1, 2)]
[Figure 14: Heterogeneous Non-Shared UBR with (0, 1, 24, 48)]
[Figure 15: Heterogeneous Non-Shared UBR with (1, 24, 576, 0)]

In Figure 14 we see that the DSh and DS0 connections, under overload, tend to get considerably more than their fair share of throughput. Similarly, in Figure 15 we notice a consistent pattern: the DS0 connections get much more throughput than the DS1 connections, and the DS3 connections get practically no throughput. We see that for heavier loads the narrower connections tend to get more than their fair share: the opposite of what the folklore says will happen. See Section 7 for an explanation.

The previous three paragraphs made statements about "fairness". A systematic discussion of fairness cannot be had until a "fairness" concept tied in to the tariffs to be used has been formulated.

For Heterogeneous Non-Shared ABR we obtained results both with Linear Scaling and with Square Root Scaling. For Square Root Scaling the results are in Figures 16 (0, 0, 1, 2), 17 (0, 1, 24, 48), and 18 (1, 24, 576, 0). For Linear Scaling the results similarly are in Figures 19, 20, and 21.

The Heterogeneous Non-Shared ABR runs show an increased advantage to narrowband TCP connections. In fact, the DS3 connections get essentially locked out. This is not surprising, since our switch behavior is based on the max-min principle, see e.g. [12], [13], and therefore attempts to give all active VCs the same ACR (subject to not giving more than is asked for). This issue will be discussed further in Section 7.

6.4 Heterogeneous, Shared

Heterogeneous Shared UBR is (again) extremely similar to Heterogeneous Non-Shared UBR, and results are not given. For Heterogeneous Shared ABR results are given in Figures 22, 23, and 24.

[Figure 16: Non-Shared ABR with Square Root Scaling, (0, 0, 1, 2)]
[Figure 17: Non-Shared ABR with Square Root Scaling, (0, 1, 24, 48)]
[Figure 18: Non-Shared ABR with Square Root Scaling, (1, 24, 576, 0)]
[Figure 19: Non-Shared ABR with Linear Scaling, (0, 0, 1, 2)]

[Figure 20: Non-Shared ABR with Linear Scaling, (0, 1, 24, 48)]
[Figure 21: Non-Shared ABR with Linear Scaling, (1, 24, 576, 0)]
[Figure 22: Heterogeneous Shared ABR, (0, 0, 1, 2)]
[Figure 23: Heterogeneous Shared ABR, (0, 1, 24, 48)]
[Figure 24: Heterogeneous Shared ABR, (1, 24, 576, 0)]

Because in ABR the switch behavior attempts to give all active VCs the same ACR, the unfair(?) advantage of low bandwidth connections we saw in the Heterogeneous, Non-Shared simulations has disappeared in the Heterogeneous, Shared situation. The (relatively) low throughput of the DS3 TCP connections in Figure 24 is due to the smaller number of VCs for those connections.

7 Discussion of the Results

Figures 5-12, and also (look at the total throughput curves) Figures 13-24, strongly suggest that UBR (for which shared and non-shared are the same) and Shared ABR have the best performance, and by the yardstick of total throughput are closely tied, while Non-Shared ABR, with either scaling, performs rather poorly. In Subsection 7.1 we explain our understanding of why Non-Shared ABR performs so poorly, and in Subsection 7.2 we compare the two leading contenders, UBR and Shared ABR, more carefully.

Subsection 7.1 below explains that the poor behavior of Non-Shared ABR in the Large Cross-Section situation is due to an increasing ratio of RM cells. Another important consideration is that the switch behavior we have implemented in the simulation is based on the max-min philosophy (see [12], [13]) and as such attempts to give all active VCs the same ACR (unless they specifically ask for less). In the case of non-shared VCs where TCPs of different home port bandwidths are present, this tends to work against the high bandwidth TCP connections. For this, and other reasons, research on switch behaviors is urgently needed.

One more important observation is that TCP connections going through the same bottleneck buffer will tend to have similar congestion windows (see [17]), unless the max window or advertised window keeps this from happening. This means that, as long as RTTs are the same and the ATM part of the network does not counteract this, in case of overload a (say) 32-Kbit/sec TCP connection has an advantage over a DS3 TCP connection.

A last important observation is that for ABR VCs the size of the segmentation buffer is not of great importance. In TCP, the bottleneck buffer must have a drain time of at least the contribution of the rest of the network to the RTT of the packets, in order to prevent buffer overflow from leading to buffer underflow. If the ACR of a VC gets small, the drain time always gets large, so this condition is always satisfied. It seems that under overload neither ABR nor UBR is likely to suffer from segmentation buffer underflow (unless the "True Source" stops sending for reasons other than TCP flow control).

7.1 Non-Shared ABR

The cause of the poor behavior of Non-Shared ABR in the Large Cross-Section situation is the increasing density of forward RM cells. As long as a VC sends at least 32 cells every 100 msec, only one out of every 32 cells is a forward RM cell. When a VC sends fewer than 32 cells every 100 msec (but enough to keep sending RM cells, see [1]), it sends a forward RM cell roughly once every 100 msec. 32 cells per 100 msec corresponds roughly with a data rate of 119 Kbit/sec (31 data cells of 48 bytes each per 100 msec). Hence, a TCP connection restricted by a host port of less than 119 Kbit/sec, if given its own VC, will generate one forward RM cell per 100 msec. A TCP connection passing through a 28.8-Kbit/sec modem will have at least one forward RM cell for every 7.5 data cells. Similar conclusions hold if the ACR of a VC falls below 32 cells per 100 msec; in that case it does not matter how many TCP connections the VC is serving. Since our switch behavior is based on the max-min philosophy, all active VCs will tend to have the same ACR. This means that as soon as the number of active VCs increases beyond 300, the ratio of (number of RM cells) over (number of data cells) starts increasing and the normalized throughput starts going down.

7.2 UBR and Shared ABR

Comparing Figures 5 and 6 with Figures 11 and 12, and (total throughputs only) Figures 13-15 with Figures 22-24, we see that in terms of total throughput UBR and Shared ABR are quite comparable, with a possible tendency for Shared ABR to do slightly better (with important exceptions: see Figures 14 and 23).

There are two other ways to compare UBR and Shared ABR. One is by considering fairness, i.e., by looking at the throughput per group of connections in Figures 13-15 and 22-24. The other is by looking at dynamic behavior. We also analyzed the buffer occupancy process in the bottleneck switch (not shown in this paper), and found that (as is to be expected) Shared ABR almost always has a buffer occupancy which is almost constant over time. UBR does not have, and is not expected to have, this nice property.

By comparing group throughputs, we see that for the traffic model we use, if there are both high bandwidth connections and low bandwidth connections and their aggregate offered loads are the same, UBR tends to give greater aggregate throughput to the lower bandwidth connections. We postulate this is due to the fact that different TCP connections going through the same bottleneck link will tend to have similar congestion windows (unless advertised windows dictate otherwise). A related reason may be that in the situation of mixed traffic, high bandwidth connections are more likely than low bandwidth connections to lose two packets in one congestion episode, and thus go into time-out.
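To make the equal-window argument concrete, the toy computation below assumes every connection through the bottleneck gets the same per-connection rate, capped by its home port bandwidth. This is an idealization of the "similar congestion windows" effect just described, not output of the paper's simulation; all names, and the approximate 36,864-Kbit/sec link value, are our assumptions:

```python
# Toy model: equal per-connection rates (max-min over connections,
# capped at each class's home port bandwidth). Class shares then
# follow connection counts, favoring the many narrowband connections.

def class_shares(counts, caps_kbps, link_kbps):
    rates = {cls: 0.0 for cls in counts}
    remaining = link_kbps
    active = {cls for cls in counts if counts[cls] > 0}
    while active and remaining > 1e-9:
        n = sum(counts[c] for c in active)
        share = remaining / n                  # equal per-connection rate
        capped = {c for c in active if caps_kbps[c] - rates[c] <= share}
        if not capped:
            for c in active:
                rates[c] += share
            remaining = 0
        else:
            for c in capped:                   # pin capped classes first
                remaining -= (caps_kbps[c] - rates[c]) * counts[c]
                rates[c] = caps_kbps[c]
            active -= capped
    return {c: rates[c] * counts[c] for c in counts}  # aggregate per class

# The (1, 24, 576, 0) scenario at k = 2: the 1152 DS0 connections take
# almost the whole link, while the 2 DS3 connections get almost nothing.
counts = {"DS3": 2, "DS1": 48, "DS0": 1152, "DSh": 0}
caps = {"DS3": 36_864, "DS1": 1_536, "DS0": 64, "DSh": 32}
print(class_shares(counts, caps, link_kbps=36_864))
```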

In the runs we did, Shared ABR did not have this property. The reason is again the switch behavior, which tends to give all VCs the same ACR. The throughputs of the different classes therefore have ratios dictated by the numbers of VCs.

We studied dynamic behavior in various ways but do not report on the results in this paper. While with UBR there are large fluctuations in buffer occupancy in the bottleneck buffer, Shared ABR has such fluctuations only occasionally. Also, the instantaneous rates at which the various sources get service are more constant under Shared ABR than under UBR.

7.3 Performance of UBR

The reason the total throughput performance of UBR is quite good in our simulations is that the various connections do not get in phase: we found that in the congestion periods usually only one or two connections lose packets. With other parameter values this might have been different. In particular, with larger values of Maxwnd this might have been worse.

7.4 Performance of Shared ABR

We credit the following two items with causing the good performance of Shared ABR:

No or hardly any segmentation buffer starvation.

A surprisingly effective switch feedback algorithm.

For the first item we have a strong argument: the fact that due to the small ACR (there are enough VCs left over) the drain time of the segmentation buffers is large. Before we did our simulation runs, we expected underflow of segmentation buffers to be a serious problem. To our pleasant surprise, this was not the case. Underflow of the segmentation buffers would of course lead to ACR "assigned to" certain VCs but not utilized. Smaller segmentation buffers and a smaller number of VCs (hence a higher ACR!) would of course make segmentation buffer underflow a problem.

For the second item we have some evidence: we found that with Shared ABR VCs the buffer occupancy in the bottleneck buffer in the switch was close to constant. This needs more research, with more complicated networks.

8 Conclusions

For the network we studied, with the switch feedback mechanism and traffic model we used, we came to the following conclusions:

1. With total bottleneck throughput as the yardstick, UBR and Shared ABR are both quite good, and there is no clear reason to prefer one over the other. In the situation of overload with many low bandwidth connections, Non-Shared ABR performs poorly.

2. In UBR, there seems to be a tendency for low bandwidth connections to get more than their fair share of bandwidth. This is due to the fact that different TCP flows encountering the same loss behavior tend to have the same congestion window.

3. In Shared ABR, the switch feedback algorithm we used tends to equalize the ACRs of the various VCs. The result is that the bandwidth a TCP connection actually receives depends strongly on the number and type of other TCP connections it shares the VC with. At the same time, the amount of bandwidth a class of TCP flows receives depends mostly on the number of VCs it has access to.

4. With UBR, the bottleneck buffer occupancy varies wildly over time. With Shared ABR, and a constant number of active TCP connections, the bottleneck buffer occupancy can be close to constant.

5. Because of the above conclusions, switch feedback algorithms as well as the definition of "fairness" must be studied. Also, the trade-off between fairness and total throughput must be studied.

6. The studies we did must be repeated with other traffic assumptions (many small flows of limited duration in addition to large file transfers by FTP), in networks with multiple switches, and in situations with greatly varying propagation delays.

Because we were able to trace back our findings to TCP behavior (similar loss leads to similar windows) and to simple aspects of ABR (the fraction of cells that are data cells, the max-min character of our switch behavior), we expect that simulations on a more complicated network will produce similar findings. However, different source behavior (the "True Source": the applications) may very well re-introduce segmentation buffer underflow, and details of the switch feedback mechanism become more important in multi-switch environments.

Acknowledgement

We thank our colleagues Jim Burns and Larry Wong for many helpful discussions.

References

[1] ATM Forum Traffic Management Draft Standard af-tm-0056.000, April 1996.

[2] Brakmo, L.S., O'Malley, S.O., and Peterson, L.L. (1994) TCP Vegas: New Techniques for Congestion Detection and Avoidance. Proc. ACM SIGCOMM '94.

[3] Floyd, S. (1991) Connections with Multiple Congested Gateways in Packet-Switched Networks, Part 1: One-way Traffic. Computer Communications Review, vol. 21, no. 5.

[4] Floyd, S. and Jacobson, V. (1992) On Traffic Phase Effects in Packet-Switched Gateways. Internetworking: Research and Experience, vol. 3, no. 3. (An earlier version of this paper appeared in Computer Communications Review, vol. 21, no. 2, 1991.)

[5] Floyd, S. and Jacobson, V. (1993) Random Early Detection Gateways for Congestion Avoidance. IEEE/ACM Transactions on Networking, vol. 1, no. 4.

[6] Jacobson, V. (1988) Congestion Avoidance and Control. Proc. ACM SIGCOMM '88.

[7] Jacobson, V. (1990a) Modified TCP Congestion Avoidance Algorithm. Message to the end2end-interest mailing list, April 1990.

[8] Jacobson, V. (1990b) Berkeley TCP Evolution from 4.3 Tahoe to 4.3 Reno. Proc. of the 18th Internet Engineering Task Force, Vancouver, Aug. 1990.

[9] Jacobson, V. (1990c) Compressing TCP/IP Headers for Low-Speed Serial Links. IETF RFC 1144.

[10] Jacobson, V., Braden, R., and Borman, D. (1992) TCP Extensions for High Performance. IETF RFC 1323.

[11] Heinanen, J. (1993) Multiprotocol Encapsulation over ATM Adaptation Layer 5. IETF RFC 1483.

[12] Kalampoukas, L., Varma, A., and Ramakrishnan, K.K. (1995) An Efficient Rate Allocation Algorithm for ATM Networks Providing Max-Min Fairness. ATM Forum Contribution, Orlando, Fla., June 1995.

[13] Kalampoukas, L., Varma, A., and Ramakrishnan, K.K. (1995) Examination of the TM Source Behavior with an Efficient Switch Rate Allocation Algorithm. ATM Forum/95-0767, June 1995.

[14] Laubach, M. (1994) Classical IP and ARP over ATM. IETF RFC 1577.

[15] Lakshman, T.V. and Madhow, U. (1994) Performance Analysis of Window-Based Flow Control Using TCP/IP: The Effect of High Bandwidth-Delay Products and Random Loss. IFIP Transactions C-26, High Performance Networking V.

[16] Lakshman, T.V., Neidhardt, A., and Ott, T.J. (1995) The Drop from Front Strategy in TCP and in TCP over ATM. Proceedings of Infocom '96.

[17] Ott, T.J., Kemperman, J.H.B., and Mathis, M. (1996) Window Size Behavior in TCP/IP with Constant Loss Probability. Working paper.

[18] Romanow, A. and Floyd, S. (1994) Dynamics of TCP Traffic over ATM Networks. Proc. ACM SIGCOMM '94.

[19] Shenker, S., Zhang, L., and Clark, D.D. (1990) Some Observations on the Dynamics of a Congestion Control Algorithm. Computer Communications Review, Oct. 1990.

[20] Stevens, W. Richard (1994) TCP/IP Illustrated, Vol. 1: The Protocols. Addison-Wesley, 1994.

[21] Stevens, W. Richard and Wright, Gary R. (1995) TCP/IP Illustrated, Vol. 2: The Implementation. Addison-Wesley, 1995.

Appendix A: Details about the networks

Propagation delays are always 2 msec on the bottleneck link between the ATM switches, 1 msec on the links between switches and Edge-Routers, 10 msec on the DS3 links between Edge-Routers and DS3 hosts, 5 msec on the DS3 links between Edge-Routers and routers, and 5 msec on the links between routers and DS1, DS0, and DSh (32-Kbit/sec) hosts. The minimal RTT is the same for all TCP/IP connections: 2*(5+5+1+2+1+5+5) = 48 msec, of which 40 msec is in the packet part of the network and 8 msec is in the cell part of the network.

The network therefore is completely symmetrical around the bottleneck link. The symmetry goes even further: in all runs every TCP connection is from a host on the left side of Figure 3 to the mirror (or mate) host on the right side of Figure 3. Thus, the network in principle allows 31 DS3 TCP connections, plus 31*24 = 744 DS1 TCP connections, plus 31*24*24 = 17,856 DS0 TCP connections, plus 31*24*48 = 35,712 32-Kbit/sec TCP connections.
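The RTT bookkeeping above is easy to verify; a minimal check (in Python, with the per-link delays of Appendix A; names are ours):

```python
# Check of the minimal-RTT arithmetic of Appendix A (values in msec).
# Cell part one-way: Edge-Router->switch (1), bottleneck (2), switch->
# Edge-Router (1). Packet part one-way: 10 msec on each side, either as
# the direct Edge-Router/DS3-host link (10) or as router hops (5 + 5).

CELL_ONE_WAY = 1 + 2 + 1

def min_rtt(packet_each_side):
    one_way = packet_each_side + CELL_ONE_WAY + packet_each_side
    return 2 * one_way

assert min_rtt(10) == 48        # DS3 hosts: 10 msec Edge-Router/host link
assert min_rtt(5 + 5) == 48     # DS1/DS0/DSh hosts: router + host links
assert 2 * CELL_ONE_WAY == 8    # 8 msec in the cell part, 40 in the packet part
```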
In reality we found that even the largest computers available to us ran out of memory for runs with more than about 10,000 TCP connections. Runs have the form (a, b, c, d), which means that at time zero a DS3 connections, b DS1 connections, c DS0 connections, and d DSh connections start up. Most of those runs ran for 100 seconds; a few ran for only 30 seconds. The TCP connections used are always distributed as evenly as possible over the various Edge-Routers and routers. To decrease memory use, for every run only the hosts, routers, links, and VCs actually used in the simulation are entered into the network.

Throughout the simulation, all DS3 TCP connections use packets 576 bytes long and have a constant "advertised window" (which we often call Maxwnd) of 512 packets. For DS1 connections these numbers are 576-byte packets and Maxwnd = 64 packets. For DS0 connections the numbers are 296-byte packets and Maxwnd = 24 packets = 7104 bytes, and for the 32-Kbit/sec TCP connections the numbers are 296-byte packets and Maxwnd = 16 packets = 4736 bytes. For DS0 and 32-Kbit/sec connections these numbers are consistent with the use of PPP or SLIP. Throughout the simulation, the starting value for the TCP parameter ssthresh is 64 Kbytes for every TCP connection.

In our simulations we assumed that at segmentation 576-byte packets always produce 12 ATM cells, and that 296-byte packets always produce 7 ATM cells. Recently we found these numbers may in fact be 13 and 7, respectively; see [11] and [14]. Since we express results mostly in normalized throughput (fraction of perfect), this is not an important modification.

The DS3 cell links have a bandwidth of 40.704 Mbit/sec, i.e., 96,000 cells/sec. The DS3 packet links have a bandwidth of 36.864 Mbit/sec. The difference with the bandwidth of the cell links is there for historical reasons: this way, due to the headers in ATM, one "DS3" TCP flow using a UBR VC exactly fills one DS3 ATM link. DS1 packet links have a bandwidth of 1.536 Mbit/sec, DS0 packet links have a bandwidth of 64 Kbit/sec, and finally 32-Kbit/sec packet links have the bandwidth suggested by their name.

The cell ports at the bottleneck links have buffer space for 6144 cells. Cell ports from switches to Edge-Routers have buffer space for 3072 cells, while cell ports from Edge-Routers to ATM switches have buffer space for 4096 cells. In the Edge-Routers, packet ports to DS3 links have buffer space of the order of Mbytes, as do all packet ports in the routers and the DS3 and DS1 ports in the hosts; the DS0 and 32-Kbit/sec ports in the hosts have buffer space of the order of Kbytes.

In the simulation, routers do not share buffers between ports. This is probably wrong, but irrelevant, because in our networks there is no loss in the packet part of the network. Also in the ATM part of the network there is no sharing of buffer space between the different ports on an Edge-Router or switch. This may be a questionable assumption.

The TCP connections all follow the Reno [8] protocol.
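The 12-versus-13 cell counts mentioned above follow from how the AAL5 overhead is accounted; a minimal sketch, assuming the simulation counted ceil(bytes/48) while AAL5 adds (at least) an 8-byte trailer before padding to a whole number of cells:

```python
# Cells per packet: the simulation's assumption versus an AAL5-style
# accounting with an 8-byte trailer (see RFC 1483 / RFC 1577 for the
# encapsulation details; further LLC/SNAP headers would add more bytes).

from math import ceil

def cells_simulated(packet_bytes):
    return ceil(packet_bytes / 48)        # assumption used in the runs

def cells_aal5(packet_bytes):
    return ceil((packet_bytes + 8) / 48)  # 8-byte AAL5 trailer included

assert cells_simulated(576) == 12 and cells_aal5(576) == 13
assert cells_simulated(296) == 7 and cells_aal5(296) == 7
```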


More information

Principles of congestion control

Principles of congestion control Principles of congestion control Congestion: Informally: too many sources sending too much data too fast for network to handle Different from flow control! Manifestations: Lost packets (buffer overflow

More information

The Controlled Delay (CoDel) AQM Approach to fighting bufferbloat

The Controlled Delay (CoDel) AQM Approach to fighting bufferbloat The Controlled Delay (CoDel) AQM Approach to fighting bufferbloat BITAG TWG Boulder, CO February 27, 2013 Kathleen Nichols Van Jacobson Background The persistently full buffer problem, now called bufferbloat,

More information

Congestion Control. Principles of Congestion Control. Network assisted congestion. Asynchronous Transfer Mode. Computer Networks 10/23/2013

Congestion Control. Principles of Congestion Control. Network assisted congestion. Asynchronous Transfer Mode. Computer Networks 10/23/2013 Congestion Control Kai Shen Principles of Congestion Control Congestion: Informally: too many sources sending too much data too fast for the network to handle Results of congestion: long delays (e.g. queueing

More information

Improving TCP Performance over Wireless Networks using Loss Predictors

Improving TCP Performance over Wireless Networks using Loss Predictors Improving TCP Performance over Wireless Networks using Loss Predictors Fabio Martignon Dipartimento Elettronica e Informazione Politecnico di Milano P.zza L. Da Vinci 32, 20133 Milano Email: martignon@elet.polimi.it

More information

Fore ATM Switch ASX1000 D/E Box (0 to 20000km) ACTS (36000km)

Fore ATM Switch ASX1000 D/E Box (0 to 20000km) ACTS (36000km) Performance of TCP extensions on noisy high BDP networks Charalambous P. Charalambos, Victor S. Frost, Joseph B. Evans August 26, 1998 Abstract Practical experiments in a high bandwidth delay product (BDP)

More information

Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes

Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes Zhili Zhao Dept. of Elec. Engg., 214 Zachry College Station, TX 77843-3128 A. L. Narasimha Reddy

More information

ENSC 835 project TCP performance over satellite links. Kenny, Qing Shao Grace, Hui Zhang

ENSC 835 project TCP performance over satellite links. Kenny, Qing Shao Grace, Hui Zhang ENSC 835 project TCP performance over satellite links Kenny, Qing Shao Qshao@cs.sfu.ca Grace, Hui Zhang Hzhange@cs.sfu.ca Road map Introduction to satellite communications Simulation implementation Window

More information

15-744: Computer Networking TCP

15-744: Computer Networking TCP 15-744: Computer Networking TCP Congestion Control Congestion Control Assigned Reading [Jacobson and Karels] Congestion Avoidance and Control [TFRC] Equation-Based Congestion Control for Unicast Applications

More information

A New Fair Window Algorithm for ECN Capable TCP (New-ECN)

A New Fair Window Algorithm for ECN Capable TCP (New-ECN) A New Fair Window Algorithm for ECN Capable TCP (New-ECN) Tilo Hamann Department of Digital Communication Systems Technical University of Hamburg-Harburg Hamburg, Germany t.hamann@tu-harburg.de Jean Walrand

More information

Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks. Congestion Control in Today s Internet

Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks. Congestion Control in Today s Internet Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks Ion Stoica CMU Scott Shenker Xerox PARC Hui Zhang CMU Congestion Control in Today s Internet Rely

More information

Using End-to-End Statistics to. Distinguish Congestion and Corruption Losses: A Negative Result. Department of Computer Science. Texas A&M University

Using End-to-End Statistics to. Distinguish Congestion and Corruption Losses: A Negative Result. Department of Computer Science. Texas A&M University Using End-to-End Statistics to Distinguish Congestion and Corruption Losses: A Negative Result Saad Biaz Nitin H. Vaidya Department of Computer Science Texas A&M University College Station, TX 77843-3112,

More information

TCP based Receiver Assistant Congestion Control

TCP based Receiver Assistant Congestion Control International Conference on Multidisciplinary Research & Practice P a g e 219 TCP based Receiver Assistant Congestion Control Hardik K. Molia Master of Computer Engineering, Department of Computer Engineering

More information

Queue Management for Explicit Rate Based Congestion Control. K. K. Ramakrishnan. Murray Hill, NJ 07974, USA.

Queue Management for Explicit Rate Based Congestion Control. K. K. Ramakrishnan. Murray Hill, NJ 07974, USA. Queue Management for Explicit Rate Based Congestion Control Qingming Ma Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213, USA qma@cs.cmu.edu K. K. Ramakrishnan AT&T Labs. Research

More information

Estimating Arrival Rates from the RED Packet Drop History

Estimating Arrival Rates from the RED Packet Drop History Estimating Arrival Rates from the RED Packet Drop History Sally Floyd, Kevin Fall, and Kinh Tieu Network Research Group Lawrence Berkeley National Laboratory, Berkeley CA ffloyd,kfallg@eelblgov ** DRAFT

More information

Improving TCP Congestion Control over Internets with Heterogeneous Transmission Media

Improving TCP Congestion Control over Internets with Heterogeneous Transmission Media Improving TCP Congestion Control over Internets with Heterogeneous Transmission Media We present a new implementation of TCP that is better suited to today s Internet than TCP Reno or Tahoe. Our implementation

More information

1 Introduction Virtual private networks (VPNs) are rapidly gaining popularity. A VPN uses the public Internet to transparently connect private network

1 Introduction Virtual private networks (VPNs) are rapidly gaining popularity. A VPN uses the public Internet to transparently connect private network ************************************************************************************* ATM Forum Document Number: ATM Forum/99-0403 *************************************************************************************

More information

Chapter 3 Transport Layer

Chapter 3 Transport Layer Chapter 3 Transport Layer Part c Congestion Control Computer Networking: A Top Down Approach 6 th edition Jim Kurose, Keith Ross Addison-Wesley Transport Layer 3-1 Chapter 3 outline 3.1 transport-layer

More information

ATM Virtual Private Networks. for the Internet Data Trac. Abstract. The ecient utilization and management of bandwidth in broadband networks

ATM Virtual Private Networks. for the Internet Data Trac. Abstract. The ecient utilization and management of bandwidth in broadband networks ATM Virtual Private Networks for the Internet Data Trac Carlos M. D. Pazos and Mario Gerla UCLA Computer Science Department 5 Hilgard Ave., Los Angeles CA 924, USA, Phone: (31) 26-8589, Fax: (31) 825-7578,

More information

Abstract Studying network protocols and distributed applications in real networks can be dicult due to the need for complex topologies, hard to nd phy

Abstract Studying network protocols and distributed applications in real networks can be dicult due to the need for complex topologies, hard to nd phy ONE: The Ohio Network Emulator Mark Allman, Adam Caldwell, Shawn Ostermann mallman@lerc.nasa.gov, adam@eni.net ostermann@cs.ohiou.edu School of Electrical Engineering and Computer Science Ohio University

More information

A Generalization of a TCP Model: Multiple Source-Destination Case. with Arbitrary LAN as the Access Network

A Generalization of a TCP Model: Multiple Source-Destination Case. with Arbitrary LAN as the Access Network A Generalization of a TCP Model: Multiple Source-Destination Case with Arbitrary LAN as the Access Network Oleg Gusak and Tu rul Dayar Department of Computer Engineering and Information Science Bilkent

More information

Chapter 4. Routers with Tiny Buffers: Experiments. 4.1 Testbed experiments Setup

Chapter 4. Routers with Tiny Buffers: Experiments. 4.1 Testbed experiments Setup Chapter 4 Routers with Tiny Buffers: Experiments This chapter describes two sets of experiments with tiny buffers in networks: one in a testbed and the other in a real network over the Internet2 1 backbone.

More information

A Survey on Quality of Service and Congestion Control

A Survey on Quality of Service and Congestion Control A Survey on Quality of Service and Congestion Control Ashima Amity University Noida, U.P, India batra_ashima@yahoo.co.in Sanjeev Thakur Amity University Noida, U.P, India sthakur.ascs@amity.edu Abhishek

More information

Analysis of Dynamic Behaviors of Many TCP Connections Sharing Tail Drop/RED Routers

Analysis of Dynamic Behaviors of Many TCP Connections Sharing Tail Drop/RED Routers Analysis of Dynamic Behaviors of Many TCP Connections Sharing Tail Drop/RED Routers Go Hasegawa and Masayuki Murata Cybermedia Center, Osaka University -3, Machikaneyama, Toyonaka, Osaka 560-853, Japan

More information

Congestion Propagation among Routers in the Internet

Congestion Propagation among Routers in the Internet Congestion Propagation among Routers in the Internet Kouhei Sugiyama, Hiroyuki Ohsaki and Makoto Imase Graduate School of Information Science and Technology, Osaka University -, Yamadaoka, Suita, Osaka,

More information

Announcements Computer Networking. Outline. Transport Protocols. Transport introduction. Error recovery & flow control. Mid-semester grades

Announcements Computer Networking. Outline. Transport Protocols. Transport introduction. Error recovery & flow control. Mid-semester grades Announcements 15-441 Computer Networking Lecture 16 Transport Protocols Mid-semester grades Based on project1 + midterm + HW1 + HW2 42.5% of class If you got a D+,D, D- or F! must meet with Dave or me

More information

Chapter 6: Congestion Control and Resource Allocation

Chapter 6: Congestion Control and Resource Allocation Chapter 6: Congestion Control and Resource Allocation CS/ECPE 5516: Comm. Network Prof. Abrams Spring 2000 1 Section 6.1: Resource Allocation Issues 2 How to prevent traffic jams Traffic lights on freeway

More information

15-441: Computer Networks Homework 3

15-441: Computer Networks Homework 3 15-441: Computer Networks Homework 3 Assigned: Oct 29, 2013 Due: Nov 12, 2013 1:30 PM in class Name: Andrew ID: 1 TCP 1. Suppose an established TCP connection exists between sockets A and B. A third party,

More information

Discriminating Congestion Losses from Wireless Losses using. Inter-Arrival Times at the Receiver. Texas A&M University.

Discriminating Congestion Losses from Wireless Losses using. Inter-Arrival Times at the Receiver. Texas A&M University. Discriminating Congestion Losses from Wireless Losses using Inter-Arrival Times at the Receiver Saad Biaz y Nitin H. Vaidya Department of Computer Science Texas A&M University College Station, TX 7784-,

More information

Characterization of Performance of TCP/IP over PPP and ATM over Asymmetric Links

Characterization of Performance of TCP/IP over PPP and ATM over Asymmetric Links Characterization of Performance of TCP/IP over PPP and ATM over Asymmetric Links Kaustubh S. Phanse Luiz A. DaSilva Kalyan Kidambi (kphanse@vt.edu) (ldasilva@vt.edu) (Kalyan.Kidambi@go.ecitele.com) Bradley

More information

Appendix B. Standards-Track TCP Evaluation

Appendix B. Standards-Track TCP Evaluation 215 Appendix B Standards-Track TCP Evaluation In this appendix, I present the results of a study of standards-track TCP error recovery and queue management mechanisms. I consider standards-track TCP error

More information

2 CHAPTER 2 LANs. Until the widespread deployment of ABR compatible products, most ATM LANs will probably rely on the UBR service category. To ll the

2 CHAPTER 2 LANs. Until the widespread deployment of ABR compatible products, most ATM LANs will probably rely on the UBR service category. To ll the 2 A SIMULATION STUDY OF TCP WITH THE GFR SERVICE CATEGORY Olivier Bonaventure Research Unit in Networking,Universite de Liege,Belgium bonavent@monteore.ulg.ac.be Abstract: Recently, the Guaranteed Frame

More information

QUALITY of SERVICE. Introduction

QUALITY of SERVICE. Introduction QUALITY of SERVICE Introduction There are applications (and customers) that demand stronger performance guarantees from the network than the best that could be done under the circumstances. Multimedia

More information

header information which limits the maximum possible eciency of data transmission, especially on LANs. Further, the loss of one cell results in the lo

header information which limits the maximum possible eciency of data transmission, especially on LANs. Further, the loss of one cell results in the lo CHAPTER 1 INTRODUCTION AND PROBLEM STATEMENT 1.1 Asynchronous Transfer Mode (ATM) Networks With the convergence of telecommunication, entertainment and computer industries, computer networking is adopting

More information

Real-Time ABR, MPEG2 Streams over VBR, and Virtual Source/Virtual Destination rt-abr switch

Real-Time ABR, MPEG2 Streams over VBR, and Virtual Source/Virtual Destination rt-abr switch Real-Time ABR, MPEG2 Streams over VBR, and Virtual Source/Virtual Destination rt-abr switch Professor of Computer and Information Sciences The Ohio State University Columbus OH 432101-1277 http://www.cis.ohio-state.edu/~jain/

More information

RICE UNIVERSITY. Analysis of TCP Performance over ATM. Networks. Mohit Aron. A Thesis Submitted. in Partial Fulfillment of the

RICE UNIVERSITY. Analysis of TCP Performance over ATM. Networks. Mohit Aron. A Thesis Submitted. in Partial Fulfillment of the RICE UNIVERSITY Analysis of TCP Performance over ATM Networks by Mohit Aron A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree Master of Science Approved, Thesis Committee: Dr.

More information

Chapter 4 Network Layer: The Data Plane. Part A. Computer Networking: A Top Down Approach

Chapter 4 Network Layer: The Data Plane. Part A. Computer Networking: A Top Down Approach Chapter 4 Network Layer: The Data Plane Part A All material copyright 996-06 J.F Kurose and K.W. Ross, All Rights Reserved Computer Networking: A Top Down Approach 7 th Edition, Global Edition Jim Kurose,

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Figure 7: Sending rate for Connection 1 for all 5 schemes

Figure 7: Sending rate for Connection 1 for all 5 schemes Figure 7: Sending rate for Connection 1 for all 5 schemes References [1] R. Caceres, P. B. Danzig, S. Jamin and D. J. Mitzel. Characteristics of application conversations in TCP/IP wide-area internetworks.

More information