TCP over End-to-End ABR: TCP Performance with End-to-End Rate Control and Stochastic Available Capacity

Sanjay Shakkottai†, Anurag Kumar‡, Aditya Karnik, and Ajit Anvekar
Department of Electrical Communication Engg.
Indian Institute of Science (IISc), Bangalore, INDIA

February 9, 1999

Abstract

We consider TCP over the Available Bit Rate (ABR) service in an ATM network, when sharing the network bandwidth with time-varying CBR/VBR traffic. It is assumed that the TCP end-points are also ATM/ABR end-points. We develop an analysis, using which we compare TCP throughput, with and without end-to-end ATM/ABR transport. The TCP connection is assumed to be bottlenecked at a single link in a wide-area network; the capacity of this link available to the TCP connection is time varying (owing, e.g., to being shared by guaranteed service traffic). We also provide results from a simulation on a TCP test-bed that uses actual Linux TCP code, and a simulation/emulation of the network model inside the Linux kernel. We show that, over ABR, the performance of TCP improves significantly if the network bottleneck bandwidth variations are slow as compared to the round-trip propagation delay. Further, we find that TCP over ABR is relatively insensitive to bottleneck buffer size. These results are for a short term average link capacity feedback at the ABR level (instantaneous capacity (INSTCAP)). We use the test-bed to study another rate feedback based on a longer term history of the capacity process. We call this EFFCAP feedback, as it is motivated by the notion of the effective capacity of the bottleneck link. EFFCAP feeds back the minimum over several (a parameter N) short term averages (averaging interval set by a parameter M).

Research supported by a grant from Nortel Networks; paper submitted to the IEEE Transactions on Networking. † Currently with the Coordinated Sciences Lab, University of Illinois at Urbana-Champaign, IL, USA. ‡ Author for correspondence.

We find that EFFCAP feedback is adaptive to the rate of bandwidth variations at the bottleneck link, and thus yields good performance (as compared to INSTCAP) over a wide range of the rate of bottleneck bandwidth variation. We then study the effect of the choice of M and N on the throughput performance of TCP, and provide a guideline for choosing values of these parameters that work for a wide range of the rate of bottleneck bandwidth variation, and of the session round trip time. Finally, we consider the model with two TCP connections to study whether TCP over ABR provides throughput fairness even if the two connections have different round-trip propagation delays. We study the choice of M for the two connections case; in particular, we study whether different values of M should be used for the EFFCAP feedbacks to the two connections, or whether an average value of M could be used. We find that using an average value of M (the more practical alternative) provides better fairness, though there is some loss in total throughput. Further, TCP over ABR is much fairer than TCP alone, over the entire range of the rate of bottleneck bandwidth variations.

1 Introduction

Figure 1: The network under study has a large round trip delay and a single congested link in the path of the connection. The ABR connection originates at the host and ends at the destination node. We call this scenario "end-to-end" ABR.

The Available Bit Rate (ABR) service in Asynchronous Transfer Mode (ATM) networks is primarily meant for transporting best-effort data traffic. Connections that use the ABR service (so-called ABR sessions) share the network bandwidth left over after serving CBR and VBR traffic. This available bandwidth varies with the requirements of the ongoing CBR/VBR sessions; hence the switches carrying ABR sessions implement a rate-based feedback control for congestion avoidance. This control causes the ABR sources to reduce or increase their cell transmission rates depending on the availability of bandwidth in the network. As the ABR service does not guarantee end-to-end reliable transport of data to the applications above it, an additional protocol is needed between the application and the ATM layer to ensure reliable communication.

Figure 2: TCP/IP over edge-to-edge ATM/ABR, with the TCP connection split into one edge-to-edge TCP over ABR connection, and two end-to-edge TCP over IP connections. The edge switch/router regulates the flow of TCP acknowledgements back to the TCP senders.

In most deployments of ATM networks, the Internet's Transmission Control Protocol (TCP) is used to ensure end-to-end reliability for data applications. TCP, however, has its own adaptive window based congestion control mechanism that serves to slow down sources during network congestion. Hence, it is very important to know whether the adaptive window control at the TCP level and the rate-based control at the ABR level interact beneficially from the point of view of application level throughput.

In this paper, we consider the situation in which the ATM network extends up to the TCP endpoints, i.e., end-to-end ABR (as opposed to edge-to-edge ABR); see Figure 1. Our results also apply to TCP over edge-to-edge ABR if the end-to-end TCP connection comprises a tandem of an edge-to-edge TCP connection and two end-to-edge TCP connections, with TCP spoofing being done at the edge devices (see Figure 2).

Consider the hypothetical situation in which the control loop has zero delay. In such a case, the ABR source of a session (i.e., the ATM network interface card (NIC) at the source node) will follow the variations in the bandwidth of the bottleneck link without delay. As a result, no loss will take place in the network. The TCP window will grow, and once the TCP window size exceeds the window required to fill the round trip pipe, the packets will be buffered in the source buffer. Hence, we can see that congestion is effectively pushed to the network edge. As the source buffer would be much larger than the maximum window size, the TCP window will remain fixed at the maximum window size and congestion control will become purely rate based. If the ABR service was not used, however, TCP would increase its window, overshoot the required window size, and then, owing to packet loss, would again reduce the window size. Hence, it is clear that, for zero delay in the control loop, end-to-end ABR will definitely improve the throughput of TCP. When variations in the bottleneck bandwidth do occur, however, and there is delay in the ABR control loop, it is not clear whether there will be any improvement.

In this paper, we study TCP over end-to-end ABR, and consider a TCP connection in a network with large round-trip delay; the connection has a single bottleneck link with time varying capacity.

Current trends seem to indicate that, at least in the near future, ATM will remain only a WAN transport technology. Hence ATM services will extend only to the network edge, whereas TCP will continue to be used as the end-to-end protocol, with inter-LAN IP packets being transported over ATM virtual circuits. When designing a wide-area intranet based on such IP over ATM technology, one can effectively control congestion in the backbone by transporting TCP/IP traffic over ABR virtual circuits. A TCP connection between hosts on two LANs would be split into a TCP over ABR wide-area edge-to-edge connection, and two end-to-edge connections over each of the LANs; see Figure 2. The edge devices would then control the TCP end-points on their respective LANs by regulating the flow of acknowledgements (ACKs) back to the senders. In this framework, our results would apply to the edge-to-edge TCP over ABR connection.

Many simulation studies have been carried out to study the interaction between the TCP and ATM/ABR control loops. In [7], the authors study the effect of running large unidirectional file transfer applications on TCP over ABR. An important result from their study is that cell loss ratio (CLR) is not a good indicator of TCP performance. They also show that when maximum throughput is achieved, the TCP sources are rate limited by ABR rather than window limited by TCP. Reference [8] reports a study of the buffering requirements for zero cell loss for TCP over ABR. It is shown that the buffer capacity required at the switch is proportional to the maximum round trip time of all the VCs through the link, and is independent of the number of sources (or VCs); the proportionality factor depends on the switch algorithm. In further work, in [9], the authors introduce various patterns of VBR background traffic; the VBR background traffic introduces variations in the ABR capacity, and the TCP traffic introduces variations in the ABR demand. In [3], the authors study the effect of ATM/ABR control on the throughput and fairness of running large unidirectional file transfer applications on TCP-Tahoe and TCP-Reno (see [15]) with a single bottleneck link with a static service rate. The authors in [12] study the performance of TCP over ATM with multiple connections, but with a static bottleneck link. Another paper reports a simulation study of the relative performances of the ATM ABR and UBR service categories in transporting TCP/IP flows through an edge-to-edge ATM network (i.e., the host nodes are not ATM endpoints). Its summary conclusion is that there does not seem to be strong evidence that, for TCP/IP workloads, the greater complexity of ABR pays off in better TCP throughputs. Those results are, however, for edge-to-edge ABR; they do not comment on end-to-end ATM (i.e., the hosts have an ATM NIC), which is what we study in this paper.

All the studies above are primarily simulation studies; analytical work on TCP over ABR does not seem to exist in the literature. In this paper, we make the following contributions:

(i) We develop an analytical model for a TCP connection over explicit rate ABR when there is a single bottleneck link with time varying available capacity. In the analytical model we assume that the explicit rate feedback is based on the short term average available capacity; we think of this as instantaneous capacity feedback, and we call the approach INSTCAP feedback.

(ii) We use a test-bed to validate the analytical results. This test-bed implements a hybrid simulation comprising actual Linux TCP code, and a network emulation/simulation implemented in the IP loopback code in the Linux kernel.

(iii) We then develop an explicit rate feedback that is based on a longer term history of the bottleneck rate process. The computation is motivated by (though it is not the same as) the well known concept of effective capacity (derived from large deviations analysis of the bottleneck queue process). We call this EFFCAP feedback. EFFCAP is more effective in preventing loss at the bottleneck buffers. Since the resulting model is hard to analyse, the results for EFFCAP feedback are all obtained from the hybrid simulator mentioned above. Our results show that different types of bottleneck bandwidth feedback are needed for slowly varying bottleneck bandwidth, rapidly varying bottleneck bandwidth, and the intermediate regime. EFFCAP feedback adapts itself to the rate of bandwidth variation. We then develop guidelines for choosing two parameters that arise in the on-line calculation of EFFCAP.

(iv) Finally, we study the performance of two TCP connections that pass through the same bottleneck link, but have different round trip propagation delays. Our objective here is to determine whether TCP over ABR is fairer than TCP alone, and under what circumstances. In this study we only use EFFCAP feedback.

The paper is organized as follows. In Section 2, we describe the network model under study. In Section 3, we develop the analysis of TCP over ABR with INSTCAP feedback, and of TCP alone. In Section 4, we develop the EFFCAP algorithm; TCP over ABR with EFFCAP feedback is only amenable to simulation. In Section 5, we present analysis results for INSTCAP feedback, and simulation results for INSTCAP and EFFCAP; the performances of the INSTCAP and EFFCAP feedbacks are compared. In Section 6, we study the choice of two parameters that arise in EFFCAP feedback. In Section 7, we provide simulation results for two TCP connections over ABR with EFFCAP feedback. Finally, in Section 8, we summarise the observations from our work.

2 The Network Model

Consider a system consisting of a TCP connection between a source and a destination node connected by a network with a large propagation delay, as shown in Figure 1. The TCP congestion control does not implement fast-retransmit, and hence must time out for loss recovery. We assume that only one link (called the bottleneck link) causes significant queueing delays in this connection, the delays due to the other links being fixed (i.e., only fixed propagation delays are introduced by the other links). A more detailed model of this is shown in Figure 3. The TCP packets are converted into ATM cells and are forwarded to the ABR segmentation buffer. This buffer is in the network interface card (NIC) and extends into the main memory of the computer; hence, we can look upon this as an infinite buffer. The segmentation buffer server (also called the ABR source) gets rate feedback from the network, and the ABR source service rate adapts to this rate feedback.

Figure 3: The segmentation buffer of the system under study is in the host NIC card and extends into the host's main memory. The rate feedback from the bottleneck link is delayed by one round trip delay.

The bottleneck link buffer represents either an ABR output buffer in an ATM switch (in the case of TCP over ABR), or a router buffer (in the case of TCP alone). The network carries other traffic (CBR/VBR), which causes the bottleneck link capacity (as seen by the connection of interest) to vary with time. The bottleneck link buffer is finite, which can result in packet loss due to buffer overflow when a mismatch between the source rate and the link service rate occurs.

In our model, we will assume that a portion of the link capacity is reserved for best-effort traffic, and hence is always available to the TCP connection. In the ATM/ABR case such a reservation would be made by using the Minimum Cell Rate (MCR) feature of ABR, and would be implemented by an appropriate link scheduling mechanism. Thus when guaranteed service traffic is backlogged at this link, the TCP connection gets only the bandwidth reserved for best-effort traffic; otherwise it gets the full bandwidth. Hence a two state model suffices for the available link rate.

In the first part of our study, we assume that the ABR feedback is an instantaneous rate feedback scheme; i.e., the bottleneck link periodically feeds back its short term average free capacity to the ABR source. This feedback arrives after one round trip propagation delay. The ABR source adapts to this value and transmits its cells at this rate.

3 TCP/ABR with Instantaneous Capacity Feedback

At time $t$, the cells in the ATM segmentation buffer at the source are transmitted at a time dependent rate $S_t^{-1}$, which depends on the ABR rate feedback (i.e., $S_t$ is the service time of a packet at time $t$). The bottleneck has a finite buffer $B_{max}$ and a time dependent service rate $R_t^{-1}$ packets/sec, which is a function of an independent Markov chain. In our analysis, we assume that the channel is modulated by a 2 state Markov chain; in each state, the bottleneck link capacity is deterministic. If the buffer is full when a cell arrives to it, the cell is dropped. In addition, we assume that all cells corresponding to that TCP packet are dropped. This assumption allows us to work with full TCP packets only; it is akin to the Partial Packet Discard proposed in [13].
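To make the two-state modulation concrete, here is a minimal sketch (our own illustrative Python, not code from the paper) of the modulated service-time process just described: state 0 gives the high rate (service time r0 = 1), state 1 the low rate (service time r1 > r0), with a possible transition once per embedded interval. The names capacity_process, p01 and p10, and the transition probabilities used, are our assumptions.

```python
import random

def capacity_process(r0=1.0, r1=2.0, p01=0.5, p10=0.5, steps=10, seed=0):
    """Yield per-interval packet service times for the two-state modulating
    chain: service time r0 (high rate) in state 0, r1 (low rate) in state 1.
    The chain is embedded at multiples of the round trip delay."""
    rng = random.Random(seed)
    state = 0
    for _ in range(steps):
        yield r0 if state == 0 else r1
        # Leave state 0 with probability p01, leave state 1 with p10.
        if state == 0 and rng.random() < p01:
            state = 1
        elif state == 1 and rng.random() < p10:
            state = 0

print(list(capacity_process()))
```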

Figure 4: Queueing model of TCP over end-to-end ABR.

If the packet is not lost, it gets serviced at rate $R_t^{-1}$ (assumed constant over the service time of the packet), and reaches the destination after some deterministic delay. The destination ATM layer reassembles the packet and delivers it to the TCP receiver. The TCP receiver responds with an ACK (acknowledgement) which, after some delay (propagation + processing delay), reaches the source. The TCP source responds by increasing the window size.

The TCP window evolution can be modeled in several ways (see [11], [10]). In this study, we model the TCP window adjustments in the congestion avoidance phase (for the original TCP algorithm as proposed in [4] by Van Jacobson) probabilistically as follows: every time a non-duplicate ACK (an acknowledgement that requests a packet that has not been acknowledged earlier) arrives at the source, the window size $W_t$ increases by one with probability $\frac{1}{W_t}$:

$$W_{t^+} = \begin{cases} W_t + 1 & \text{with probability } \frac{1}{W_t} \\ W_t & \text{otherwise} \end{cases} \qquad (1)$$

On the other hand, if a packet is lost at the bottleneck link buffer, the ACK packets for any subsequently received packets continue to carry the sequence number of the lost packet. Eventually, the source window becomes empty, timeout begins, and at the expiry of the timeout, the threshold window $W^{th}_t$ is set to half the maximum congestion window achieved, and the next slow start begins.
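The congestion-avoidance rule of Equation (1) is easy to exercise numerically. The following sketch is our own minimal rendering of it; the cap at the receiver's maximum window anticipates the truncation of the window increment introduced in Section 3.2, and the names window_update and w_max are ours.

```python
import random

def window_update(w, w_max, rng=random):
    """One non-duplicate ACK under Equation (1): the window grows by one
    with probability 1/w; otherwise it is unchanged (capped at w_max)."""
    if w < w_max and rng.random() < 1.0 / w:
        return w + 1
    return w

# Drive the window with 1000 ACKs starting from w = 1: growth is roughly
# linear in round trips, as expected in congestion avoidance.
w = 1
for _ in range(1000):
    w = window_update(w, w_max=64)
print(w)
```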

3.1 Queueing Network Model

Figure 4 is a closed queueing network representation of the TCP over ABR session. We model the TCP connection during the data transfer phase; hence the data packets are assumed to be of fixed length. The buffer of the segmentation queue at the source host is assumed to be infinite in size; there are as many packets in this buffer as there are untransmitted packets in the window. The service time at this buffer models the time taken to transmit an entire TCP packet worth of ATM cells. Owing to the feedback rate control, the service rate follows the rate of the bottleneck link. We assume that the rate does not change during the transmission of the cells of a single TCP packet. The service time (or, equivalently, the service rate) follows the bottleneck link service rate with a delay of $\Delta$ units of time, $\Delta$ being the round trip (fixed) propagation delay.

The bottleneck link is modeled as a finite buffer queue with deterministic packet service time, the service time (or rate) being modulated by an independent Markov chain on two states 0 and 1; the service rate is higher in state 0. The round trip propagation delay is modeled by an infinite server queue with service time $\Delta$. Notice that the various propagation delays in the network (the source-bottleneck link delay, the bottleneck link-destination delay, and the destination-source return path delay) have been lumped into a single delay element (see Figure 4). This is justified by the fact that even if the source adapts itself to a change in link capacity earlier than one round trip time, the effect of that change will be seen at the bottleneck link only after a round trip time.

With "packets" being read as "full TCP packets", let

$A_t$ be the number of packets in the segmentation buffer at the host at time $t$,

$B_t$ be the number of packets in the bottleneck link buffer at time $t$,

$D_t$ be the number of packets in the propagation queue at time $t$,

$R_t$ be the service time of a packet at the bottleneck link, $R_t \in \{r_0, r_1\}$. We take $r_0 = 1$ and $r_1 > r_0$; thus, all times are normalized to the bottleneck link packet service time at the higher service rate.

$S_t$ be the service time of a packet at the source link. $S_t$ follows $R_t$ with delay $\Delta$, the round trip propagation delay, i.e., $S_t = R_{t-\Delta}$, and $S_t \in \{r_0, r_1\}$.

Since the instantaneous rate of the bottleneck link is fed back, we call this the instantaneous rate feedback scheme. (Note that, in practice, the instantaneous rate is really the average rate over a small window; that is how instantaneous rate feedback is modelled in our simulations, to be discussed later; we call this feedback INSTCAP.)

3.2 Analysis of the Queueing Model

Consider the vector process

$$\{Z_t;\ t \ge 0\} := \{(A_t, B_t, D_t, R_t, S_t);\ t \ge 0\} \qquad (2)$$

Figure 5: The embedded process $\{(X_j, T_j);\ j \ge 0\}$.

This process is hard to analyze directly. Instead, we study an embedded process which, with suitable approximations, turns out to be analytically tractable. Define $t_k := k\Delta$, $k \ge 0$, and consider the embedded process

$$\{\tilde{Z}_k;\ k \ge 0\} = \{Z_{t_k};\ k \ge 0\} \qquad (3)$$

with $\tilde{Z}_0 = (1, 0, 0, r_0, r_0)$. We will use the obvious notation $\tilde{Z}_k = (A_k, B_k, D_k, R_k, S_k)$. In the following analysis, we make these assumptions:

(i) The rate modulating Markov chain is embedded at the epochs $(t_0, t_1, \ldots)$.

(ii) The source adapts immediately to the explicit rate feedback that it receives. This is true of the actual ABR source behaviour (as specified by the ATM Forum [1]) if the rate decreases. In the actual ABR source behaviour, an increase in the explicit rate results in an exponential growth of the source rate, not a sudden jump; we, however, assume that even an increase in the source rate takes place as a sudden jump.

(iii) There is no loss in the slow start phase of TCP. In [11], the authors show that loss will occur in the slow start phase if $B_{max}$ is less than about one third of the bandwidth-delay product, even if no rate change occurs in the slow start phase. For TCP over ABR, however, as the source and bottleneck link rates match, no loss will occur in this phase as long as rate changes do not occur during slow-start. For the case of TCP alone, this assumption is valid only if $B_{max}$ is not too small.

Observe that packets in the propagation delay queue (see Figure 4) at $t_k$ will have departed from that queue by $t_{k+1}$. This follows since the service time is deterministic and equal to $\Delta$, and $t_{k+1} - t_k = \Delta$. Further, any new packet arriving to the propagation delay queue during $(t_k, t_{k+1})$ will still be present in that queue at $t_{k+1}$.

If loss occurs due to buffer overflow at the bottleneck link in $(t_k, t_{k+1})$, we proceed as follows. Figure 5 shows a packet loss epoch in the interval $(t_k, t_{k+1})$; this is the first loss since the last time that TCP went through a timeout and recovery. At this loss epoch, there are packets in the bottleneck buffer, and some ACKs "in flight" back to the transmitter. These ACKs and packets form an unbroken sequence, and hence will all contribute to the window increase algorithm at the transmitter (we assume that there is no ACK loss on the reverse path). The transmitter will continue transmitting until the window is exhausted, and will then start a coarse timer. We assume that this timeout occurs in the interval $(t_{k+2}, t_{k+3})$ (see Figure 5), and that recovery starts at the embedded epoch $t_{k+3}$. Thus, when the first loss (after recovery) occurs in an interval, then, in our model, it takes two more intervals to start recovery.

At time $t_k$, let $\tilde{Z}_k = (a, b, d, r, s)$. Note that, since no loss has occurred (since the last recovery) until $t_k$, the TCP window at $t_k$ is $a + b + d$. Now, given $\tilde{Z}_k$, and assuming that

(i) packet transmissions do not straddle the embedded epochs, and

(ii) packets arrive back-to-back into the segmentation buffer during any interval $(t_k, t_{k+1})$ (this leads to a conservative estimate of TCP throughput; see the discussion following Figure 8 below),

we can find the probability that a loss occurs during $(t_k, t_{k+1})$, and the distribution of the TCP window at the time that timeout starts. Suppose this window is $w$; then the congestion avoidance threshold in the next recovery cycle will be $m := \lceil \frac{w}{2} \rceil$. It will take approximately $\lceil \log_2 m \rceil$ round trip times (each of length $\Delta$) to reach the congestion avoidance threshold. Assuming that no loss occurs during the slow start phase (this is true if $B_{max}$ is not too small [11]), at $k' = k + 3 + \lceil \log_2 m \rceil$ we can determine the distribution of $\tilde{Z}_{k'}$.

With the above description in mind, define $T_0 = t_0 = 0$ and

$$X_0 = \tilde{Z}_0 = (1, 0, 0, r_0, r_0) \qquad (4)$$

For $k \ge 1$,

$$T_k = \begin{cases} T_{k-1} + \Delta & \text{if no loss occurs in } (T_{k-1}, T_{k-1} + \Delta) \\ T_{k-1} + \left(3 + \lceil \log_2 \frac{w}{2} \rceil\right)\Delta & \text{if loss occurs in } (T_{k-1}, T_{k-1} + \Delta) \text{ and the loss window is } w \end{cases} \qquad (5)$$

and $X_k = \tilde{Z}_{T_k}$. For a particular realization of $X_k$, we will write $X_k = x$ where $x = (a, b, d, r, s)$. Define

$$p(x) = \Pr\{\text{bottleneck buffer loss occurs during } (T_k, T_k + \Delta) \mid X_k = x\} \qquad (6)$$

and, measuring cycle lengths in units of $\Delta$,

$$U_k = (T_{k+1} - T_k)/\Delta \quad \text{for } k \ge 0 \qquad (7)$$

Recalling the evolution of $\{T_k;\ k \ge 0\}$ described above, given $X_k = x$, we have

$$U_k = 1 \quad \text{w.p. } 1 - p(x) \qquad (8)$$

We now proceed to analyze the evolution of $\{X_k;\ k \ge 0\}$. The bottleneck link modulating process, as mentioned earlier, is a two state Markov chain embedded at $\{t_k;\ k \ge 0\}$, taking values in $\{r_0, r_1\}$. Let $p_{01}$, $p_{10}$ be the transition probabilities of this Markov chain. Notice that $S_k = R_{k-1}$; hence $(R_k, S_k)$ is also a discrete time Markov chain (DTMC). Let $Q$ be the transition probability matrix of $(R_k, S_k)$; then $Q^n(i_1 j_1; i_2 j_2) = \Pr\{R_{k+n} = i_2, S_{k+n} = j_2 \mid R_k = i_1, S_k = j_1\}$, where $i_1, j_1, i_2, j_2 \in \{r_0, r_1\}$.

As explained above, given $X_k = (A_k, B_k, D_k, R_k, S_k)$, the TCP congestion window is

$$W_k = A_k + B_k + D_k \qquad (9)$$

For a particular $X_k = (a, b, d, r, s)$, $X_{k+1}$ can be determined using the probabilistic model of window evolution during the congestion avoidance phase. Consider the evolution of $A_k$, the segmentation buffer queue process. If no loss occurs in $(T_k, T_{k+1})$,

$$A_{k+1} = \left(a + d + N_k - \frac{\Delta}{s}\right)^+ \qquad (10)$$

where $N_k$ is the increment in the TCP window in the interval, and is characterized as follows. During $(T_k, T_{k+1})$, for each ACK arriving at the source (say, at time $t$), the window size increases by one with probability $\frac{1}{W_t}$.

However, we simplify by taking the increase probability to be $\frac{1}{W_k}$ (where $W_k = a + b + d$), i.e., the probability is not updated after every arrival; instead, we use the window at $T_k$. With this assumption, owing to the $d$ arrivals to the source queue, the window size increases by the random amount $N_k$. For $d$ ACKs, the maximum increase in window size is $d$. Let us define $\tilde{N}_k \sim \text{Binomial}(d, \frac{1}{a+b+d})$; then

$$N_k = \min\left(\tilde{N}_k,\ W_{max} - (a + b + d)\right)$$

We can similarly obtain recursive relations for $B_{k+1}$ and $D_{k+1}$ [14].

Consider an example that explains the evolution from $X_k$ to $X_{k+1}$. Let $X_k = (a, b, d, 2, 1)$, i.e., the source service rate is twice that of the bottleneck link server, and loss can take place. Further, $d$ packets are in flight. These $d$ packets arrive at the source queue, increase the window size by $N_k$, and hence $\min(a + d + N_k, \Delta)$ packets are transmitted into the bottleneck buffer (at most $\Delta$ packets can be transmitted at rate 1 during an interval of length $\Delta$). If

$$2(B_{max} - b) < \min(a + d + N_k,\ \Delta) \qquad (11)$$

then loss will occur. For given $b$ and $d$, we can compute the range of $N_k$ for which Equation 11 is satisfied. Suppose that loss occurs for $N_k \ge n^*$ ($n^* \ge 0$). Then

$$p(x) = \sum_{i \ge n^*} \Pr\{N_k = i\} \qquad (12)$$

Let us define

$$\beta(x, w) = \Pr\{\text{window achieved is } w \mid X_k = x,\ \text{loss occurs in } (T_k, T_k + \Delta)\} \qquad (13)$$

We can compute this quantity in a manner similar to that outlined for the computation of $p(x)$.

When no loss occurs, $U_k$ is given by Equation 8. When loss occurs, given $X_k = x = (a, b, d, i_1, j_1)$, the next cycle begins after the recovery from loss, which includes the next slow start phase. Suppose that the window was $2m$ when loss occurred. Then the next congestion avoidance phase will begin when the TCP window size in the slow start phase after loss recovery reaches $m$; this takes $\lceil \log_2 m \rceil$ cycles. At the end of this period, the state of the various queues is given by $(A_k, B_k, D_k) = (0, 0, m)$. The channel state at the start of the next cycle is described by the transition probability matrix of the modulating Markov chain. Hence,

$$U_k = 3 + \lceil \log_2 m \rceil \quad \text{with probability } p(x) \cdot \beta(x, 2m) \qquad (14)$$
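Once the state $x$ is fixed, the loss probability of Equations (11)-(12) reduces to a binomial tail. The sketch below is our own, specialized to the case $(r, s) = (2, 1)$ used in the example above; loss_probability is a hypothetical helper name and the sample arguments are arbitrary.

```python
from math import comb

def loss_probability(a, b, d, B_max, Delta, W_max):
    """p(x) of Equations (11)-(12) for (r, s) = (2, 1): the source sends at
    twice the bottleneck rate, so loss occurs when more than 2*(B_max - b)
    packets enter the bottleneck buffer in one interval.  The window
    increment is N_k = min(~N_k, W_max - w), ~N_k ~ Binomial(d, 1/w)."""
    w = a + b + d
    p_inc = 1.0 / w
    prob_loss = 0.0
    for n_raw in range(d + 1):                 # support of Binomial(d, 1/w)
        pmf = comb(d, n_raw) * p_inc**n_raw * (1 - p_inc)**(d - n_raw)
        n = min(n_raw, W_max - w)              # truncated increment N_k
        arrivals = min(a + d + n, Delta)       # at most Delta pkts at rate 1
        if 2 * (B_max - b) < arrivals:         # Equation (11)
            prob_loss += pmf
    return prob_loss

print(loss_probability(a=4, b=4, d=7, B_max=10, Delta=40, W_max=64))  # ~0.074
```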

and

$$X_{k+1} = (0, 0, m, i_2, j_2) \quad \text{with probability } p(x) \cdot \beta(x, 2m) \cdot Q^{(3 + \lceil \log_2 m \rceil)}(i_1 j_1;\ i_2 j_2) \qquad (15)$$

From the above discussion it is clear that, given $X_k$, the distribution of $X_{k+1}$ can be computed without any knowledge of the past history; hence $\{X_k;\ k \ge 0\}$ is a Markov chain. Further, given $T_k$ and $X_k$, the distribution of $T_{k+1}$ can be computed without any knowledge of the past history. Hence the process $\{(X_k, T_k);\ k \ge 0\}$ is a Markov Renewal Process (MRP) (see [17]). It is this MRP that is our model for TCP/ABR.

3.3 Computation of Throughput

Given the Markov renewal process $\{(X_k, T_k);\ k \ge 0\}$, we associate with the $k$th cycle $(T_k, T_{k+1})$ a "reward" $V_k$ that accounts for the successful transmission of packets. Let $\pi(x)$ denote the stationary probability distribution of the Markov chain $\{X_k;\ k \ge 0\}$; this distribution is obtained from the transition probabilities in Section 3.2. Denote by $\gamma_{TCP/ABR}$ the throughput of TCP over ABR. Then, by the Markov renewal-reward theorem ([17]), we have

$$\gamma_{TCP/ABR} = \frac{E_\pi V}{E_\pi U} \qquad (16)$$

where $E_\pi(\cdot)$ denotes the expectation with respect to the stationary distribution $\pi(x)$. We have

$$E_\pi V = \sum_x \pi(x) V(x) \qquad (17)$$

where $V(x)$ is the expected reward in a cycle that begins with $X = x$. Denote by $A(x)$, $B(x)$ and $D(x)$ the values of $A$, $B$ and $D$ in the state $x$. Then, in an interval $(T_k, T_k + \Delta)$ in which no loss occurs, we take

$$V(x) = D(x) \quad \text{with probability } 1 - p(x) \qquad (18)$$

Thus for lossless intervals the reward is the number of acknowledgements returned to the source; note that this actually accounts for packets successfully received by the receiver in previous intervals.

Loss can occur only if the ABR source is sending at the high rate and the link is transmitting at the low rate. When loss occurs in $(T_k, T_k + \Delta)$, we need to account for the reward in the interval starting from $T_k$ until $T_{k+1}$, when slow-start ends. Note that at $T_k$ the congestion window is $A(x) + B(x) + D(x)$.

The first component of the reward is $D(x)$: all the $B(x)$ buffered packets will result in ACKs, causing the left edge of the TCP window to advance. Since the link rate is half the source rate, loss will occur after $2(B_{max} - B(x))$ packets enter the link buffer from the ABR source; these packets succeed and cause the left edge of the window to advance further. Further, we assume that the window grows by 1 in this process; hence, following the lost packet, at most $A(x) + B(x) + D(x) + 1$ packets can be sent. Thus we bound the reward before timeout occurs by

$$D(x) + B(x) + 2(B_{max} - B(x)) + A(x) + B(x) + D(x) + 1 = A(x) + 2D(x) + 2B_{max} + 1$$

After loss and timeout, the ensuing slow-start phase successfully transfers some packets (as described earlier). Hence, an upper bound on the "reward" when loss occurs is $A(x) + 2D(x) + 2B_{max} + 1 + S_{slowstart}(x)$, where

$$S_{slowstart}(x) = \sum_w \beta(x, w)\left(2^{\lceil \log_2 \frac{w}{2} \rceil} - 1\right) \qquad (19)$$

the summation index $w$ running over all window sizes. Actually, this is an optimistic reward, as some of the packets will be transmitted again in the next cycle even though they have successfully reached the receiver. We could instead use a conservative accounting, in which we assume that if loss occurs, all the packets transmitted in that cycle are retransmitted in future cycles. In the numerical results, we shall compare the throughputs obtained with these two bounds. It follows that

$$E_\pi V = \sum_x \pi(x)\left((1 - p(x))\,D(x) + p(x)\left(A(x) + 2D(x) + 2B_{max} + 1 + \sum_w \beta(x, w)\left(2^{\lceil \log_2 \frac{w}{2} \rceil} - 1\right)\right)\right) \qquad (20)$$

Similarly, we have

$$E_\pi U = \sum_x \pi(x)\, U(x) \qquad (21)$$

where $U(x)$ is the mean cycle length when $X = x$ at the beginning of the cycle. From the analysis in Section 3.2, it follows that

$$U(x) = \begin{cases} 1 & \text{w.p. } 1 - p(x) \\ \sum_w \left(3 + \lceil \log_2 \frac{w}{2} \rceil\right)\beta(x, w) & \text{otherwise} \end{cases} \qquad (22)$$

Hence,

$$E_\pi U = \sum_x \pi(x)\left((1 - p(x)) + p(x)\sum_w \beta(x, w)\left(3 + \lceil \log_2 \frac{w}{2} \rceil\right)\right) \qquad (23)$$
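Equation (16) turns the stationary distribution $\pi$ into a throughput number. The toy computation below applies Equations (16)-(23) with the optimistic loss reward; it is our own sketch, and every number in it (the two-state $\pi$, $p$, and the state-dependent quantities) is an illustrative placeholder, not a value from the paper.

```python
import math

# Illustrative two-state example of Equations (16)-(23).  State 1 is the
# loss-prone state; we take A = 5, D = 25, B_max = 10, and a loss window
# w = 40, so m = w/2 = 20.
states = [0, 1]
pi = {0: 0.9, 1: 0.1}            # stationary distribution pi(x)
p = {0: 0.0, 1: 0.3}             # per-interval loss probability p(x)
D = {0: 30, 1: 25}               # ACKs returned in a lossless interval
m = 20
slow_start = 2**math.ceil(math.log2(m)) - 1      # packets sent in slow start
V_loss = 5 + 2 * 25 + 2 * 10 + 1 + slow_start    # A + 2D + 2*B_max + 1 + S
U_loss = 3 + math.ceil(math.log2(m))             # cycle length, Equation (22)

EV = sum(pi[x] * ((1 - p[x]) * D[x] + p[x] * (V_loss if x == 1 else 0.0))
         for x in states)
EU = sum(pi[x] * ((1 - p[x]) * 1.0 + p[x] * (U_loss if x == 1 else 0.0))
         for x in states)
print("throughput =", EV / EU)   # Equation (16): E_pi[V] / E_pi[U]
```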

TCP without ATM/ABR

Without the ABR rate control, the source host would transmit at the full rate of its link; we assume that this link is much faster than the bottleneck link, and model it as infinitely fast. The system model is then very similar to the previous case, the only difference being that we have eliminated the segmentation buffer. The assumptions we make in this analysis, however, lead to an optimistic estimate of the throughput. The analysis is analogous to that provided above.

4 TCP/ABR with Effective Capacity Feedback

We now develop another kind of rate feedback. To motivate this approach, consider a finite buffer single server queue with a stationary ergodic service process (see Figure 6). Suppose that the ABR source sent packets at a constant rate; we would like to find the rate that maximizes TCP throughput. Hence, let the input process to this queue be a constant rate deterministic arrival process. Given the buffer size $B_{max}$ and a desired quality of service (QoS) (say, a cell loss probability $\epsilon$), we would like to know the maximum rate of the arrival process such that the QoS target is met.

Figure 6: Single server queue with time varying service capacity, being fed by a constant rate source.

We take a discrete time approach to this problem (see [16]); in practice, the discrete time approach is adequate, as the rate feedback is only updated at multiples of some basic measurement interval. Consider a slotted time queueing model in which $C_i$ packets can be served in slot $i$ and the buffer can hold $B_{max}$ packets. $\{C_i\}$ is a stationary and ergodic process; let $EC$ be the mean of the process and $C_{min}$ the minimum number of packets that can be served per slot. A constant number of packets (denoted by $\eta$) arrives in each slot. We would like to find the maximum $\eta$ such that the desired QoS (cell loss probability $\epsilon$) is achieved. In [16], the following asymptotic condition is considered. If $X$ is a random variable that represents the stationary queue length, then, with $\delta > 0$,¹

$$\lim_{B_{max} \to \infty} \frac{1}{B_{max}} \log P(X > B_{max}) < -\delta \qquad (24)$$

i.e., for large $B_{max}$ the loss probability is better than $e^{-\delta B_{max}}$.

¹ All logarithms are taken to the base e.

It is shown that this performance objective is met if

$$\eta < -\lim_{n \to \infty} \frac{1}{\delta(n+1)} \log E\, e^{-\delta \sum_{i=0}^{n} C_i} \qquad (25)$$

For the desired QoS we need $\delta = \frac{-\log \epsilon}{B_{max}}$. Let us denote the expression on the right hand side of Equation 25 by $\Gamma_{eff}$; $\Gamma_{eff}$ can be called the effective capacity of the server. As $\delta \to 0$, $\Gamma_{eff} \to EC$, and as $\delta \to \infty$, $\Gamma_{eff} \to C_{min}$, which is what we intuitively expect. For all other values of $\delta$, $\Gamma_{eff} \in (C_{min}, EC)$.

Let us apply this effective capacity approach to our problem, letting the ABR source (see Figure 3) adapt to the effective bandwidth of the bottleneck link server. In our analysis, we have assumed a Markov modulated bottleneck link capacity, with changes occurring at most once every $\Delta$ units of time, $\Delta$ being the round trip propagation delay. Hence, we have a discrete time model, with $\eta$ being the number of packet arrivals to the bottleneck link in $\Delta$ units of time, and $C_i$ the number of packets served in that interval. We can compute the effective capacity of the bottleneck link server using Equation 25. However, before we can do this, we still need to determine the desired QoS, i.e., $\epsilon$ or, equivalently, $\delta$.

To find $\delta$, we conduct the following experiment. We let the ABR source transmit at some constant rate, say $\eta$, $\eta \in (C_{min}, EC)$. For a given Markov modulating process, we find the $\eta$ that maximizes TCP throughput; we take this to be the effective capacity of the bottleneck link. Now, using Equation 25, we can find the smallest $\delta$ that results in an effective capacity of this value. If the value of $\delta$ so obtained turns out to be consistent over a wide range of Markov modulating processes, then we can use this value of $\delta$ as the QoS requirement for TCP over ABR. The above discrete time queueing model for TCP over ABR can be analyzed in a manner analogous to that in Section 3.2. We find from this analysis that, for several sets of parameters, the value of $\delta$ which maximizes TCP throughput is consistently very large (about 60-70). This is as expected, since TCP performance is very sensitive to loss.

4.1 Algorithm for Effective Capacity Computation

In practice, we do not know a priori the statistics of the modulating process; hence we need an on-line method of computing the effective bandwidth. In this section, we develop an algorithm for computing the effective capacity of a time varying bottleneck link carrying TCP traffic. The idea is based on Equation 25, and on the observation at the end of the previous section that $\delta$ is very large.
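As a sanity check on Equation (25), and on why large $\delta$ matters, the Monte Carlo sketch below (our own; gamma_eff is a hypothetical name, and the capacity samples are i.i.d. two-valued rather than Markov-modulated) estimates $\Gamma_{eff}$ for several values of $\delta$. A log-sum-exp step keeps large $\delta$ from underflowing.

```python
import math, random

def gamma_eff(delta, paths, n):
    """Monte Carlo estimate of the effective capacity in Equation (25),
    over a finite horizon of n slots, computed stably via log-sum-exp."""
    s = [delta * sum(p[:n]) for p in paths]          # delta * sum_i C_i
    m = min(s)
    log_mean = -m + math.log(sum(math.exp(m - v) for v in s) / len(s))
    return -log_mean / (delta * n)

# Two-state capacity samples, 10 or 20 packets per slot with equal odds.
rng = random.Random(1)
paths = [[rng.choice([20.0, 10.0]) for _ in range(50)] for _ in range(2000)]
for delta in (0.001, 0.1, 1.0, 10.0):
    print(delta, round(gamma_eff(delta, paths, 50), 2))
# delta -> 0 gives roughly the mean capacity (15.0); large delta drives the
# estimate down toward the minimum observed block average, foreshadowing
# the min-of-averages formula derived next.
```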

Figure 7: Schematic of the windows used in the computation of the effective capacity based rate feedback.

We take the measurement interval to be $s$ time units; $s$ is also the update interval of the rate feedback. We approximate the expression for effective bandwidth in Equation 25 by replacing $n \to \infty$ with a large finite $M$:

$$\Gamma_{eff} \approx \frac{-1}{\delta M} \log E\, e^{-\delta \sum_{i=1}^{M} C_i} \qquad (26)$$

What we now have is an effective capacity computation performed over $Ms$ units of time. We assume that the process is ergodic and stationary; hence, we approximate the expectation by the average of $N$ sets of samples, each set taken over $Ms$ units of time. (Since the process is stationary and ergodic, the $N$ intervals need not be disjoint for the following argument to work.) Then, denoting by $C_{ij}$ the $i$th link capacity value ($i \in \{1, \ldots, M\}$) in the $j$th block of $M$ intervals ($j \in \{1, \ldots, N\}$), we have

$$\Gamma_{eff} \approx \frac{-1}{\delta M} \log \frac{1}{N} \sum_{j=1}^{N} e^{-\delta \sum_{i=1}^{M} C_{ij}} \qquad (27)$$

As motivated above, we now take $\delta$ to be large. In the limit $\delta \to \infty$, the sum in Equation 27 is dominated by its smallest exponent, yielding

$$\Gamma_{eff} \approx \frac{-1}{\delta M} \log e^{-\delta \min_{j \in \{1,\ldots,N\}} \sum_{i=1}^{M} C_{ij}} = \min_{j \in \{1,\ldots,N\}} \frac{1}{M} \sum_{i=1}^{M} C_{ij} \qquad (31)$$

We notice that this essentially means that we average capacities over $N$ sliding blocks, each block representing $Ms$ units of time, and feed back the minimum of these values (see Figure 7). The formula that has been obtained (Equation 31) has a particularly simple form; the above derivation should be viewed more as a motivation for this formula.
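A direct on-line rendering of Equation (31), in our own Python: capacity samples arrive once every s time units, the most recent history is kept, and the minimum of the N sliding M-sample averages is fed back. The names make_effcap and update are ours, and reading "sliding" as blocks offset by one sample (so M+N-1 samples are retained) is our interpretation of Figure 7.

```python
from collections import deque

def make_effcap(M, N):
    """Online EFFCAP per Equation (31): the minimum of N sliding block
    averages, each over M capacity samples (one sample per s time units)."""
    window = deque(maxlen=M + N - 1)
    def update(sample):
        window.append(sample)
        if len(window) < M + N - 1:
            return None                        # warm-up: history not full yet
        w = list(window)
        return min(sum(w[j:j + M]) / M for j in range(N))
    return update

effcap = make_effcap(M=7, N=49)                # parameter values of Figure 11
rate = None
for t in range(200):
    rate = effcap(20.0 if (t // 40) % 2 == 0 else 10.0)  # toy on/off capacity
print(rate)                                    # conservative: the low rate
```

With M = 7, N = 49 and s = 30 ms, this history spans roughly 1.65 seconds, so a brief dip in capacity depresses the feedback for that long; this is the conservatism discussed next.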

The formula, however, has independent intuitive appeal; see below. The derivation required that $M$ and $N$ be large. We can, however, study the effect of the choice of $M$ and $N$ (large or small) on the performance of effective capacity feedback; this is done in Section 6, where we also provide guidelines for selecting values of $M$ and $N$ under various situations.

The formula in Equation 31 is intuitively satisfying; we will call it EFFCAP feedback. Consider the case when the network changes are very slow. Then all $N$ values of the average capacity will be the same, and each one will be equal to the capacity of the bottleneck link. Hence, the rate that is fed back to the ABR source will be the instantaneous free capacity of the bottleneck link; i.e., in this situation EFFCAP is the same as INSTCAP. When the network variations are very fast, EFFCAP will be the mean capacity of the bottleneck link, which is what should be done to get the best throughput. Hence, EFFCAP behaves like INSTCAP for slow network changes and adapts to the mean bottleneck link capacity for fast changes. For intermediate rates of change, EFFCAP is (necessarily) conservative and feeds back the minimum link rate.

There is another benefit to using EFFCAP. As EFFCAP assumes a large value of $\delta$, the cell loss probability is very small, which implies that the TCP throughput will essentially be the ABR throughput. Thus EFFCAP, when used along with the Minimum Cell Rate feature of the ABR service, can guarantee a minimum throughput to TCP connections. Some of the simulation results to be presented demonstrate this.

5 Numerical and Simulation Results

In this section, we first compare our analytical results for the throughput of TCP, without ABR and with ABR with INSTCAP feedback, against simulation results from a hybrid TCP simulator involving actual TCP code, and a model for the network implemented in the loopback driver of a Linux Pentium machine. We show that the performance of TCP improves when ABR is used for end-to-end data transport below TCP. We then study the performance of the EFFCAP scheme and compare it with the INSTCAP scheme.

Figure 8: Analysis and simulation results, INSTCAP feedback: throughput of TCP over ABR. The round trip propagation delay is 40 time units. The bottleneck link buffers are either 10 or 12 packets. Notice that, for $\theta > 80$, the lowest two curves are test-bed results, the uppermost two curves are the optimistic analysis, and the middle two curves are the conservative analysis.

5.1 Instantaneous Rate Feedback Scheme

We recall from the previous section that the bottleneck link is Markov modulated. In our analysis, we have assumed that the modulating chain has two states, which we call the high state and the low state. In the low state, with some link capacity being used by higher priority traffic, the link capacity is a fraction of the link capacity in the high state (where the full link rate is available); in the results presented in this section, this fraction is 1/2. Further, we assume that the mean time in each state is the same, i.e., the Markov chain is symmetric. We denote the mean time in each state by $\tau$, and the mean time in each state normalized to $\Delta$ by $\theta$, i.e., $\theta := \tau/\Delta$. For example, if $\Delta$ is 200 msec, then $\theta = 2$ means that the mean time per state is 400 msec. Note that our analysis applies only to $\theta > 1$. A large value of $\theta$ means that the network changes are slow compared to the round trip propagation delay (rtd), whereas $\theta \ll 1$ means that network transients occur several times per round trip time.

In the Linux kernel implementation of our network simulator, the Markov chain can make transitions at most once every 30 msec. Hence we take this also to be the measurement interval, and the explicit rate feedback interval (i.e., s = 30 ms).

We denote one packet transmission time at the bottleneck link in the high rate state as one time unit.

Thus, in all the results presented here, the packet transmission time in the low rate state is 2 time units. We plot the bottleneck link efficiency vs. the mean time the link spends in each state (i.e., $\theta$). We define efficiency as the throughput as a fraction of the mean capacity of the bottleneck link. We include the TCP/IP headers in the throughput, but account for ATM headers as overhead. We use the words throughput and efficiency interchangeably. With the modulating Markov chain spending the same time in each state, the mean capacity of the link is 0.75 packets per time unit.

Figure 8 shows the throughput of TCP over ABR with the INSTCAP scheme.² Here, we compare an optimistic analysis, a conservative one (see Section 3.3), and the test-bed (i.e., simulation) results for different buffer sizes. In our analysis, the processes are embedded at multiples of one round trip propagation delay, and the feedback from the bottleneck link is sent once every rtd; this feedback reaches the ABR source after one round trip propagation delay. In the simulations, however, feedback is sent to the ABR source every 30 msec, and reaches the ABR source after one round trip propagation delay.

In Figure 8, we can see that, except for very small $\theta$, the analysis and the simulations match to within a few percent. Both analyses underestimate the observed throughputs by about 10-20% for small $\theta$. This can be explained by noting that, in our model, we assume that packets arrive at (and leave) the ABR source back to back. When a rate change occurs at the bottleneck link, as the packets arrive back to back and the source sends at twice the rate of the bottleneck link (in our example), for every two packets arriving at the bottleneck link, one gets queued. In reality, however, the packets need not arrive back to back, and hence the queue buildup is slower. This means that the probability of packet loss at the bottleneck link buffer is actually lower than in our analytical model. This effect becomes more significant as the rate of bottleneck link variation increases; however, we observe from the simulations that it is not significant for most values of $\theta$.

Figure 9 shows the throughput of TCP without ABR. We can see that the simulation results give a throughput of up to 20% less than the analytical ones. This occurs for two reasons.

² Even if $\theta \to \infty$, the throughput of TCP over ABR will not approach 1, because of ATM overheads: for every 53 bytes transmitted, there are 5 bytes of ATM headers. Hence, the asymptotic throughput is approximately 90%.

Figure 9: Analysis and simulation results, throughput of TCP without ABR. The round trip propagation delay is 40 time units. The bottleneck link buffers are either 10 or 12 packets. We observe that TCP is sensitive to changes in the bottleneck link buffer size.

(i) We assumed in our analysis that no loss occurs in the slow-start phase. It has been shown in [11] that if the bottleneck link buffer is less than 1/3 of the bandwidth-delay product (which here corresponds to about 13 packets, or a 6500 byte buffer), loss will occur in the slow-start phase.

(ii) We optimistically compute the throughput of TCP by using an upper bound on the "reward" in the loss cycle.

We see from Figures 8 and 9 that ABR makes TCP throughput insensitive to buffer size variations, whereas with TCP alone there is a significant worsening of throughput as the buffer is reduced. This can be explained by the fact that, once the ABR control loop has converged, the buffer size is immaterial, as no loss takes place when the source and bottleneck link rates are the same. Without ABR, however, TCP loses packets even when no transients occur.

It is useful to observe that, since the times in the above curves are normalized to the packet transmission time (in the high rate state), results for several different ranges of parameters can be read off these curves. To give an example, if the link has a capacity of 155 Mbps in its high rate state, and TCP packets have a size of 500 bytes each, then one time unit is 25.8 μsec, and the round trip propagation delay ($\Delta$) is 40 × 25.8 μsec = 1.032 msec. Then $\theta = 100$ means that changes in link bandwidth occur, on average, once every 103.2 msec.

Consider another example where the link capacity is 2 Mbps during the high rate period, and let the packet size be 1000 bytes. Then the delay corresponding to 40 time units is 160 msec, and $\theta = 100$ corresponds to changes occurring once every 16 seconds. These two examples illustrate the fact that the curves are normalized and can be used to read off numbers for many scenarios (a short sketch reproducing these conversions appears at the end of this subsection).³

From Figures 8 and 9, we can see that the performance of TCP improves by about 20% when ABR is employed for data transport, INSTCAP feedback is used, and the changes in link rate are slow. This improvement with ABR is due to the fact that ABR pushes congestion to the network edge. After the TCP window grows beyond the point where the pipe gets full, i.e., $W_t > B_{max} + \frac{\Delta}{r}$, the packets start queueing in the ABR segmentation buffer, which, being on the end system, is very large. The window size increases until the maximum window size advertised by the receiver is reached. No loss occurs in the network, as the ABR source sends at just the right rate for the bottleneck link. Hence, we end up with the TCP window size fixed at the maximum window size and the pipe always full.

The assumptions in our analysis render it inapplicable for very small $\theta$. Figure 10 compares the simulation results for TCP with ABR (INSTCAP feedback) and without ABR for various buffer sizes. These results start at $\theta = 10$ and go down to $\theta = 0.16$. We note that even though the throughput improvement due to ABR is not great for small $\theta$, the performance of TCP does not significantly worsen with ABR either. In the next section, we will see how a better rate feedback in ABR can result in a distinct improvement of TCP/ABR throughput even in this range of $\theta$.

Figure 10: Simulation results for TCP with and without ABR (INSTCAP feedback) for small values of $\theta$. The rtd is 40 time units.

We see from Figure 10 that when $\theta$ becomes less than 1, the throughput of TCP increases. This can be explained by the fact that the rate mismatch lasts for an interval shorter than one round trip propagation delay; as a result, the buffer size required to handle the overload becomes smaller. As $\theta$ becomes very small, each packet is sent at a different rate, and hence the ABR source effectively sends at the mean capacity. Loss then occurs very rarely, as the buffers can handle almost all rate mismatches, and hence the throughput increases.

³ Note, however, that $\Delta$ is an absolute parameter in these curves, since it governs the round trip "pipe". Thus, although $\tau$ is normalized to $\Delta$, the curves do not yield values for fixed $\tau$ and varying $\Delta$.
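The unit conversions in the two examples above are mechanical; the sketch below (our own helper, with the hypothetical name normalized_params) reproduces them.

```python
def normalized_params(link_bps, pkt_bytes, rtd_units, theta):
    """Convert the normalized curves to absolute quantities: one time unit
    is one packet transmission time at the high rate."""
    unit = pkt_bytes * 8 / link_bps            # seconds per time unit
    delta = rtd_units * unit                   # round trip delay (Delta)
    mean_state_time = theta * delta            # mean time per modulating state
    return unit, delta, mean_state_time

# The two examples from the text:
print(normalized_params(155e6, 500, 40, 100))   # ~25.8 us, ~1.03 ms, ~103 ms
print(normalized_params(2e6, 1000, 40, 100))    # 4 ms, 160 ms, 16 s
```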

24 down to = 0:16. We note that even though the throughput improvement due to ABR is not great for small, the performance of TCP does not signicantly worsen due to ABR. In the next section, we will see how a better rate feedback in ABR can result in a distinct improvement of TCP/ABR throughput even for this range of. We see from Figure 10 that when becomes less than 1, the throughput of TCP increases. This can be explained by the fact that the rate mismatch occurs for an interval of time less than one round trip propagation delay. As a result, the buer size required to handle the overload becomes less. As becomes very small, each packet is sent at a dierent rate and hence, the ABR source eectively sends at the mean capacity. Then loss very rarely occurs as the buers can handle almost all rate mismatches and hence, the throughput increases. 5.2 Comparison of EFFCAP and INSTCAP Performance: Simulation Results Efficiency "Effective Capacity, 8 packets" "Effective Capacity, 10 packets" "Effective Capacity, 12 packets" "Instantaneous rate feedback, 8 packets" "Instantaneous rate feedback, 10 packets" "Instantaneous rate feedback, 12 packets" Mean time per state (rtd) Figure 11: Simulation results; Comparison of the EFFCAP and INSTCAP feedback schemes for TCP over ABR for various bottleneck link buers (8-12 packets). As before, the rtd is 40 time units. Here, N = 49 and M = 7 (see Figure 7). In this gure, we compare their performances for relatively large. In Figure 11, we use results from the test-bed to compare the relative performances of the EFFCAP and INSTCAP feedback schemes for ABR. Recall that the EFFCAP algorithm has two parameters, namely M, the number of samples used for each block average, and N, the number of blocks of M samples over which the minimum is taken. 24

Performance of TCP Congestion Control with Explicit Rate Feedback: Rate Adaptive TCP (RATCP)

Performance of TCP Congestion Control with Explicit Rate Feedback: Rate Adaptive TCP (RATCP) Performance of TCP Congestion Control with Explicit Rate Feedback: Rate Adaptive TCP (RATCP) Aditya Karnik and Anurag Kumar Dept. of Electrical Communication Engg. Indian Institute of Science, Bangalore,

More information

perform well on paths including satellite links. It is important to verify how the two ATM data services perform on satellite links. TCP is the most p

perform well on paths including satellite links. It is important to verify how the two ATM data services perform on satellite links. TCP is the most p Performance of TCP/IP Using ATM ABR and UBR Services over Satellite Networks 1 Shiv Kalyanaraman, Raj Jain, Rohit Goyal, Sonia Fahmy Department of Computer and Information Science The Ohio State University

More information

ECE 333: Introduction to Communication Networks Fall 2001

ECE 333: Introduction to Communication Networks Fall 2001 ECE 333: Introduction to Communication Networks Fall 2001 Lecture 28: Transport Layer III Congestion control (TCP) 1 In the last lecture we introduced the topics of flow control and congestion control.

More information

Transmission Control Protocol. ITS 413 Internet Technologies and Applications

Transmission Control Protocol. ITS 413 Internet Technologies and Applications Transmission Control Protocol ITS 413 Internet Technologies and Applications Contents Overview of TCP (Review) TCP and Congestion Control The Causes of Congestion Approaches to Congestion Control TCP Congestion

More information

different problems from other networks ITU-T specified restricted initial set Limited number of overhead bits ATM forum Traffic Management

different problems from other networks ITU-T specified restricted initial set Limited number of overhead bits ATM forum Traffic Management Traffic and Congestion Management in ATM 3BA33 David Lewis 3BA33 D.Lewis 2007 1 Traffic Control Objectives Optimise usage of network resources Network is a shared resource Over-utilisation -> congestion

More information

ECE 610: Homework 4 Problems are taken from Kurose and Ross.

ECE 610: Homework 4 Problems are taken from Kurose and Ross. ECE 610: Homework 4 Problems are taken from Kurose and Ross. Problem 1: Host A and B are communicating over a TCP connection, and Host B has already received from A all bytes up through byte 248. Suppose

More information

CS321: Computer Networks Congestion Control in TCP

CS321: Computer Networks Congestion Control in TCP CS321: Computer Networks Congestion Control in TCP Dr. Manas Khatua Assistant Professor Dept. of CSE IIT Jodhpur E-mail: manaskhatua@iitj.ac.in Causes and Cost of Congestion Scenario-1: Two Senders, a

More information

Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015

Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015 Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015 I d like to complete our exploration of TCP by taking a close look at the topic of congestion control in TCP. To prepare for

More information

Answers to Sample Questions on Transport Layer

Answers to Sample Questions on Transport Layer Answers to Sample Questions on Transport Layer 1) Which protocol Go-Back-N or Selective-Repeat - makes more efficient use of network bandwidth? Why? Answer: Selective repeat makes more efficient use of

More information

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks Hybrid Control and Switched Systems Lecture #17 Hybrid Systems Modeling of Communication Networks João P. Hespanha University of California at Santa Barbara Motivation Why model network traffic? to validate

More information

Problem 7. Problem 8. Problem 9

Problem 7. Problem 8. Problem 9 Problem 7 To best answer this question, consider why we needed sequence numbers in the first place. We saw that the sender needs sequence numbers so that the receiver can tell if a data packet is a duplicate

More information

Chapter III. congestion situation in Highspeed Networks

Chapter III. congestion situation in Highspeed Networks Chapter III Proposed model for improving the congestion situation in Highspeed Networks TCP has been the most used transport protocol for the Internet for over two decades. The scale of the Internet and

More information

ADVANCED COMPUTER NETWORKS

ADVANCED COMPUTER NETWORKS ADVANCED COMPUTER NETWORKS Congestion Control and Avoidance 1 Lecture-6 Instructor : Mazhar Hussain CONGESTION CONTROL When one part of the subnet (e.g. one or more routers in an area) becomes overloaded,

More information

******************************************************************* *******************************************************************

******************************************************************* ******************************************************************* ATM Forum Document Number: ATM_Forum/96-0518 Title: Performance of TCP over UBR and buffer requirements Abstract: We study TCP throughput and fairness over UBR for several buffer and maximum window sizes.

More information

CS 5520/ECE 5590NA: Network Architecture I Spring Lecture 13: UDP and TCP

CS 5520/ECE 5590NA: Network Architecture I Spring Lecture 13: UDP and TCP CS 5520/ECE 5590NA: Network Architecture I Spring 2008 Lecture 13: UDP and TCP Most recent lectures discussed mechanisms to make better use of the IP address space, Internet control messages, and layering

More information

Design Issues in Traffic Management for the ATM UBR+ Service for TCP over Satellite Networks: Report II

Design Issues in Traffic Management for the ATM UBR+ Service for TCP over Satellite Networks: Report II Design Issues in Traffic Management for the ATM UBR+ Service for TCP over Satellite Networks: Report II Columbus, OH 43210 Jain@CIS.Ohio-State.Edu http://www.cis.ohio-state.edu/~jain/ 1 Overview! Statement

More information

Source 1. Destination 1. Bottleneck Link. Destination 2. Source 2. Destination N. Source N

Source 1. Destination 1. Bottleneck Link. Destination 2. Source 2. Destination N. Source N WORST CASE BUFFER REQUIREMENTS FOR TCP OVER ABR a B. Vandalore, S. Kalyanaraman b, R. Jain, R. Goyal, S. Fahmy Dept. of Computer and Information Science, The Ohio State University, 2015 Neil Ave, Columbus,

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Congestion control in TCP

Congestion control in TCP Congestion control in TCP If the transport entities on many machines send too many packets into the network too quickly, the network will become congested, with performance degraded as packets are delayed

More information

Network Management & Monitoring

Network Management & Monitoring Network Management & Monitoring Network Delay These materials are licensed under the Creative Commons Attribution-Noncommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/) End-to-end

More information

Dynamics of an Explicit Rate Allocation. Algorithm for Available Bit-Rate (ABR) Service in ATM Networks. Lampros Kalampoukas, Anujan Varma.

Dynamics of an Explicit Rate Allocation. Algorithm for Available Bit-Rate (ABR) Service in ATM Networks. Lampros Kalampoukas, Anujan Varma. Dynamics of an Explicit Rate Allocation Algorithm for Available Bit-Rate (ABR) Service in ATM Networks Lampros Kalampoukas, Anujan Varma and K. K. Ramakrishnan y UCSC-CRL-95-54 December 5, 1995 Board of

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

A Delayed Vacation Model of an M/G/1 Queue with Setup. Time and its Application to SVCC-based ATM Networks

A Delayed Vacation Model of an M/G/1 Queue with Setup. Time and its Application to SVCC-based ATM Networks IEICE TRANS. COMMUN., VOL. 0, NO. 0 1996 1 PAPER Special Issue on Telecommunications Network Planning and Design A Delayed Vacation Model of an M/G/1 Queue with Setup Time and its Application to SVCCbased

More information

TCP/IP over ATM using ABR, UBR, and GFR Services

TCP/IP over ATM using ABR, UBR, and GFR Services TCP/IP over ATM using ABR, UBR, and GFR Services Columbus, OH 43210 Jain@CIS.Ohio-State.Edu http://www.cis.ohio-state.edu/~jain/ 1 Overview Why ATM? ABR: Binary and Explicit Feedback ABR Vs UBR TCP/IP

More information

EE122 MIDTERM EXAM: Scott Shenker, Ion Stoica

EE122 MIDTERM EXAM: Scott Shenker, Ion Stoica EE MITERM EXM: 00-0- Scott Shenker, Ion Stoica Last name Student I First name Login: ee- Please circle the last two letters of your login. a b c d e f g h i j k l m n o p q r s t u v w x y z a b c d e

More information

Advanced Computer Networks

Advanced Computer Networks Advanced Computer Networks Congestion control in TCP Contents Principles TCP congestion control states Congestion Fast Recovery TCP friendly applications Prof. Andrzej Duda duda@imag.fr http://duda.imag.fr

More information

RICE UNIVERSITY. Analysis of TCP Performance over ATM. Networks. Mohit Aron. A Thesis Submitted. in Partial Fulfillment of the

RICE UNIVERSITY. Analysis of TCP Performance over ATM. Networks. Mohit Aron. A Thesis Submitted. in Partial Fulfillment of the RICE UNIVERSITY Analysis of TCP Performance over ATM Networks by Mohit Aron A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree Master of Science Approved, Thesis Committee: Dr.

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 1 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Improving TCP Throughput over. Two-Way Asymmetric Links: Analysis and Solutions. Lampros Kalampoukas, Anujan Varma. and.

Improving TCP Throughput over. Two-Way Asymmetric Links: Analysis and Solutions. Lampros Kalampoukas, Anujan Varma. and. Improving TCP Throughput over Two-Way Asymmetric Links: Analysis and Solutions Lampros Kalampoukas, Anujan Varma and K. K. Ramakrishnan y UCSC-CRL-97-2 August 2, 997 Board of Studies in Computer Engineering

More information

******************************************************************* *******************************************************************

******************************************************************* ******************************************************************* ATM Forum Document Number: ATM_Forum/96-0517 Title: Buffer Requirements for TCP over ABR Abstract: In our previous study [2], it was shown that cell loss due to limited buffering may degrade throughput

More information

Transmission Control Protocol (TCP)

Transmission Control Protocol (TCP) TETCOS Transmission Control Protocol (TCP) Comparison of TCP Congestion Control Algorithms using NetSim @2017 Tetcos. This document is protected by copyright, all rights reserved Table of Contents 1. Abstract....

More information

Toward a Time-Scale Based Framework for ABR Trac Management. Tamer Dag Ioannis Stavrakakis. 409 Dana Research Building, 360 Huntington Avenue

Toward a Time-Scale Based Framework for ABR Trac Management. Tamer Dag Ioannis Stavrakakis. 409 Dana Research Building, 360 Huntington Avenue Toward a Time-Scale Based Framework for ABR Trac Management Tamer Dag Ioannis Stavrakakis Electrical and Computer Engineering Department 409 Dana Research Building, 360 Huntington Avenue Northeastern University,

More information

Flow Control. Flow control problem. Other considerations. Where?

Flow Control. Flow control problem. Other considerations. Where? Flow control problem Flow Control An Engineering Approach to Computer Networking Consider file transfer Sender sends a stream of packets representing fragments of a file Sender should try to match rate

More information

TCP. CSU CS557, Spring 2018 Instructor: Lorenzo De Carli (Slides by Christos Papadopoulos, remixed by Lorenzo De Carli)

TCP. CSU CS557, Spring 2018 Instructor: Lorenzo De Carli (Slides by Christos Papadopoulos, remixed by Lorenzo De Carli) TCP CSU CS557, Spring 2018 Instructor: Lorenzo De Carli (Slides by Christos Papadopoulos, remixed by Lorenzo De Carli) 1 Sources Fall and Stevens, TCP/IP Illustrated Vol. 1, 2nd edition Congestion Avoidance

More information

Communication Networks

Communication Networks Communication Networks Spring 2018 Laurent Vanbever nsg.ee.ethz.ch ETH Zürich (D-ITET) April 30 2018 Materials inspired from Scott Shenker & Jennifer Rexford Last week on Communication Networks We started

More information

Congestion. Can t sustain input rate > output rate Issues: - Avoid congestion - Control congestion - Prioritize who gets limited resources

Congestion. Can t sustain input rate > output rate Issues: - Avoid congestion - Control congestion - Prioritize who gets limited resources Congestion Source 1 Source 2 10-Mbps Ethernet 100-Mbps FDDI Router 1.5-Mbps T1 link Destination Can t sustain input rate > output rate Issues: - Avoid congestion - Control congestion - Prioritize who gets

More information

Introduction: Two motivating examples for the analytical approach

Introduction: Two motivating examples for the analytical approach Introduction: Two motivating examples for the analytical approach Hongwei Zhang http://www.cs.wayne.edu/~hzhang Acknowledgement: this lecture is partially based on the slides of Dr. D. Manjunath Outline

More information

CS268: Beyond TCP Congestion Control

CS268: Beyond TCP Congestion Control TCP Problems CS68: Beyond TCP Congestion Control Ion Stoica February 9, 004 When TCP congestion control was originally designed in 1988: - Key applications: FTP, E-mail - Maximum link bandwidth: 10Mb/s

More information

6.033 Spring 2015 Lecture #11: Transport Layer Congestion Control Hari Balakrishnan Scribed by Qian Long

6.033 Spring 2015 Lecture #11: Transport Layer Congestion Control Hari Balakrishnan Scribed by Qian Long 6.033 Spring 2015 Lecture #11: Transport Layer Congestion Control Hari Balakrishnan Scribed by Qian Long Please read Chapter 19 of the 6.02 book for background, especially on acknowledgments (ACKs), timers,

More information

Transport Layer PREPARED BY AHMED ABDEL-RAOUF

Transport Layer PREPARED BY AHMED ABDEL-RAOUF Transport Layer PREPARED BY AHMED ABDEL-RAOUF TCP Flow Control TCP Flow Control 32 bits source port # dest port # head len sequence number acknowledgement number not used U A P R S F checksum Receive window

More information

15-744: Computer Networking TCP

15-744: Computer Networking TCP 15-744: Computer Networking TCP Congestion Control Congestion Control Assigned Reading [Jacobson and Karels] Congestion Avoidance and Control [TFRC] Equation-Based Congestion Control for Unicast Applications

More information

Topics. TCP sliding window protocol TCP PUSH flag TCP slow start Bulk data throughput

Topics. TCP sliding window protocol TCP PUSH flag TCP slow start Bulk data throughput Topics TCP sliding window protocol TCP PUSH flag TCP slow start Bulk data throughput 2 Introduction In this chapter we will discuss TCP s form of flow control called a sliding window protocol It allows

More information

THE INCREASING popularity of wireless networks

THE INCREASING popularity of wireless networks IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 3, NO. 2, MARCH 2004 627 Accurate Analysis of TCP on Channels With Memory and Finite Round-Trip Delay Michele Rossi, Member, IEEE, Raffaella Vicenzi,

More information

Issues in Traffic Management on Satellite ATM Networks

Issues in Traffic Management on Satellite ATM Networks Issues in Traffic Management on Satellite ATM Networks Columbus, OH 43210 Jain@CIS.Ohio-State.Edu http://www.cis.ohio-state.edu/~jain/ 1 Overview Why ATM? ATM Service Categories: ABR and UBR Binary and

More information

TCP Congestion Control

TCP Congestion Control 1 TCP Congestion Control Onwutalobi, Anthony Claret Department of Computer Science University of Helsinki, Helsinki Finland onwutalo@cs.helsinki.fi Abstract This paper is aimed to discuss congestion control

More information

Outline. CS5984 Mobile Computing

Outline. CS5984 Mobile Computing CS5984 Mobile Computing Dr. Ayman Abdel-Hamid Computer Science Department Virginia Tech Outline Review Transmission Control Protocol (TCP) Based on Behrouz Forouzan, Data Communications and Networking,

More information

Chapter 3 outline. 3.5 Connection-oriented transport: TCP. 3.6 Principles of congestion control 3.7 TCP congestion control

Chapter 3 outline. 3.5 Connection-oriented transport: TCP. 3.6 Principles of congestion control 3.7 TCP congestion control Chapter 3 outline 3.1 Transport-layer services 3.2 Multiplexing and demultiplexing 3.3 Connectionless transport: UDP 3.4 Principles of reliable data transfer 3.5 Connection-oriented transport: TCP segment

More information

ATM Quality of Service (QoS)

ATM Quality of Service (QoS) ATM Quality of Service (QoS) Traffic/Service Classes, Call Admission Control Usage Parameter Control, ABR Agenda Introduction Service Classes and Traffic Attributes Traffic Control Flow Control Special

More information

CSE 461. TCP and network congestion

CSE 461. TCP and network congestion CSE 461 TCP and network congestion This Lecture Focus How should senders pace themselves to avoid stressing the network? Topics Application Presentation Session Transport Network congestion collapse Data

More information

TCP Congestion Control

TCP Congestion Control TCP Congestion Control Lecture material taken from Computer Networks A Systems Approach, Third Ed.,Peterson and Davie, Morgan Kaufmann, 2003. Computer Networks: TCP Congestion Control 1 TCP Congestion

More information

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data?

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data? Congestion Control End Hosts CSE 51 Lecture 7, Spring. David Wetherall Today s question How fast should the sender transmit data? Not tooslow Not toofast Just right Should not be faster than the receiver

More information

Traffic Management of Internet Protocols over ATM

Traffic Management of Internet Protocols over ATM Traffic Management of Internet Protocols over ATM Columbus, OH 43210 Jain@CIS.Ohio-State.Edu http://www.cis.ohio-state.edu/~jain/ 1 Overview Why ATM? ATM Service Categories: ABR and UBR Binary and Explicit

More information

Fast Retransmit. Problem: coarsegrain. timeouts lead to idle periods Fast retransmit: use duplicate ACKs to trigger retransmission

Fast Retransmit. Problem: coarsegrain. timeouts lead to idle periods Fast retransmit: use duplicate ACKs to trigger retransmission Fast Retransmit Problem: coarsegrain TCP timeouts lead to idle periods Fast retransmit: use duplicate ACKs to trigger retransmission Packet 1 Packet 2 Packet 3 Packet 4 Packet 5 Packet 6 Sender Receiver

More information

UNIT IV -- TRANSPORT LAYER

UNIT IV -- TRANSPORT LAYER UNIT IV -- TRANSPORT LAYER TABLE OF CONTENTS 4.1. Transport layer. 02 4.2. Reliable delivery service. 03 4.3. Congestion control. 05 4.4. Connection establishment.. 07 4.5. Flow control 09 4.6. Transmission

More information

a) Adding the two bytes gives Taking the one s complement gives

a) Adding the two bytes gives Taking the one s complement gives EE33- solutions for homework #3 (3 problems) Problem 3 Note, wrap around if overflow One's complement = To detect errors, the receiver adds the four words (the three original words and the checksum) If

More information

CSE 123A Computer Networks

CSE 123A Computer Networks CSE 123A Computer Networks Winter 2005 Lecture 14 Congestion Control Some images courtesy David Wetherall Animations by Nick McKeown and Guido Appenzeller The bad news and the good news The bad news: new

More information

Performance Analysis of TCP Variants

Performance Analysis of TCP Variants 102 Performance Analysis of TCP Variants Abhishek Sawarkar Northeastern University, MA 02115 Himanshu Saraswat PES MCOE,Pune-411005 Abstract The widely used TCP protocol was developed to provide reliable

More information

CS4700/CS5700 Fundamentals of Computer Networks

CS4700/CS5700 Fundamentals of Computer Networks CS4700/CS5700 Fundamentals of Computer Networks Lecture 15: Congestion Control Slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica, Hui Zhang Alan Mislove amislove at ccs.neu.edu

More information

Reliable Transport II: TCP and Congestion Control

Reliable Transport II: TCP and Congestion Control Reliable Transport II: TCP and Congestion Control Brad Karp UCL Computer Science CS 3035/GZ01 31 st October 2013 Outline Slow Start AIMD Congestion control Throughput, loss, and RTT equation Connection

More information

Overview. TCP congestion control Computer Networking. TCP modern loss recovery. TCP modeling. TCP Congestion Control AIMD

Overview. TCP congestion control Computer Networking. TCP modern loss recovery. TCP modeling. TCP Congestion Control AIMD Overview 15-441 Computer Networking Lecture 9 More TCP & Congestion Control TCP congestion control TCP modern loss recovery TCP modeling Lecture 9: 09-25-2002 2 TCP Congestion Control Changes to TCP motivated

More information

The Transport Control Protocol (TCP)

The Transport Control Protocol (TCP) TNK092: Network Simulation - Nätverkssimulering Lecture 3: TCP, and random/short sessions Vangelis Angelakis Ph.D. The Transport Control Protocol (TCP) Objectives of TCP and flow control Create a reliable

More information

Exercises TCP/IP Networking With Solutions

Exercises TCP/IP Networking With Solutions Exercises TCP/IP Networking With Solutions Jean-Yves Le Boudec Fall 2009 3 Module 3: Congestion Control Exercise 3.2 1. Assume that a TCP sender, called S, does not implement fast retransmit, but does

More information

Analysis of Link-Layer Backoff Algorithms on Point-to-Point Markov Fading Links: Effect of Round-Trip Delays

Analysis of Link-Layer Backoff Algorithms on Point-to-Point Markov Fading Links: Effect of Round-Trip Delays Analysis of Link-Layer Backoff Algorithms on Point-to-Point Markov Fading Links: Effect of Round-Trip Delays A. Chockalingam and M. Zorzi Department of ECE, Indian Institute of Science, Bangalore 560012,

More information

MDP Routing in ATM Networks. Using the Virtual Path Concept 1. Department of Computer Science Department of Computer Science

MDP Routing in ATM Networks. Using the Virtual Path Concept 1. Department of Computer Science Department of Computer Science MDP Routing in ATM Networks Using the Virtual Path Concept 1 Ren-Hung Hwang, James F. Kurose, and Don Towsley Department of Computer Science Department of Computer Science & Information Engineering University

More information

CS3600 SYSTEMS AND NETWORKS

CS3600 SYSTEMS AND NETWORKS CS3600 SYSTEMS AND NETWORKS NORTHEASTERN UNIVERSITY Lecture 24: Congestion Control Prof. Alan Mislove (amislove@ccs.neu.edu) Slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica,

More information

Bandwidth Allocation & TCP

Bandwidth Allocation & TCP Bandwidth Allocation & TCP The Transport Layer Focus Application Presentation How do we share bandwidth? Session Topics Transport Network Congestion control & fairness Data Link TCP Additive Increase/Multiplicative

More information

PERFORMANCE COMPARISON OF THE DIFFERENT STREAMS IN A TCP BOTTLENECK LINK IN THE PRESENCE OF BACKGROUND TRAFFIC IN A DATA CENTER

PERFORMANCE COMPARISON OF THE DIFFERENT STREAMS IN A TCP BOTTLENECK LINK IN THE PRESENCE OF BACKGROUND TRAFFIC IN A DATA CENTER PERFORMANCE COMPARISON OF THE DIFFERENT STREAMS IN A TCP BOTTLENECK LINK IN THE PRESENCE OF BACKGROUND TRAFFIC IN A DATA CENTER Vilma Tomço, 1 Aleksandër Xhuvani 2 Abstract: The purpose of this work is

More information

TCP Performance. EE 122: Intro to Communication Networks. Fall 2006 (MW 4-5:30 in Donner 155) Vern Paxson TAs: Dilip Antony Joseph and Sukun Kim

TCP Performance. EE 122: Intro to Communication Networks. Fall 2006 (MW 4-5:30 in Donner 155) Vern Paxson TAs: Dilip Antony Joseph and Sukun Kim TCP Performance EE 122: Intro to Communication Networks Fall 2006 (MW 4-5:30 in Donner 155) Vern Paxson TAs: Dilip Antony Joseph and Sukun Kim http://inst.eecs.berkeley.edu/~ee122/ Materials with thanks

More information

Congestion Control. Andreas Pitsillides University of Cyprus. Congestion control problem

Congestion Control. Andreas Pitsillides University of Cyprus. Congestion control problem Congestion Control Andreas Pitsillides 1 Congestion control problem growing demand of computer usage requires: efficient ways of managing network traffic to avoid or limit congestion in cases where increases

More information

Lecture 14: Congestion Control"

Lecture 14: Congestion Control Lecture 14: Congestion Control" CSE 222A: Computer Communication Networks Alex C. Snoeren Thanks: Amin Vahdat, Dina Katabi Lecture 14 Overview" TCP congestion control review XCP Overview 2 Congestion Control

More information

Appendix B. Standards-Track TCP Evaluation

Appendix B. Standards-Track TCP Evaluation 215 Appendix B Standards-Track TCP Evaluation In this appendix, I present the results of a study of standards-track TCP error recovery and queue management mechanisms. I consider standards-track TCP error

More information

R1 Buffer Requirements for TCP over ABR

R1 Buffer Requirements for TCP over ABR 96-0517R1 Buffer Requirements for TCP over ABR, Shiv Kalyanaraman, Rohit Goyal, Sonia Fahmy Saragur M. Srinidhi Sterling Software and NASA Lewis Research Center Contact: Jain@CIS.Ohio-State.Edu http://www.cis.ohio-state.edu/~jain/

More information

TCP: Overview RFCs: 793, 1122, 1323, 2018, 2581

TCP: Overview RFCs: 793, 1122, 1323, 2018, 2581 TCP: Overview RFCs: 793, 1122, 1323, 2018, 2581 ocket door point-to-point: one sender, one receiver reliable, in-order byte steam: no message boundaries pipelined: TCP congestion and flow control set window

More information

Performance Comparison Between AAL1, AAL2 and AAL5

Performance Comparison Between AAL1, AAL2 and AAL5 The University of Kansas Technical Report Performance Comparison Between AAL1, AAL2 and AAL5 Raghushankar R. Vatte and David W. Petr ITTC-FY1998-TR-13110-03 March 1998 Project Sponsor: Sprint Corporation

More information

Introduction to Protocols

Introduction to Protocols Chapter 6 Introduction to Protocols 1 Chapter 6 Introduction to Protocols What is a Network Protocol? A protocol is a set of rules that governs the communications between computers on a network. These

More information

Congestion Control in Communication Networks

Congestion Control in Communication Networks Congestion Control in Communication Networks Introduction Congestion occurs when number of packets transmitted approaches network capacity Objective of congestion control: keep number of packets below

More information

Congestion Control. Principles of Congestion Control. Network-assisted Congestion Control: ATM. Congestion Control. Computer Networks 10/21/2009

Congestion Control. Principles of Congestion Control. Network-assisted Congestion Control: ATM. Congestion Control. Computer Networks 10/21/2009 Congestion Control Kai Shen Principles of Congestion Control Congestion: informally: too many sources sending too much data too fast for the network to handle results of congestion: long delays (e.g. queueing

More information

Wireless TCP Performance Issues

Wireless TCP Performance Issues Wireless TCP Performance Issues Issues, transport layer protocols Set up and maintain end-to-end connections Reliable end-to-end delivery of data Flow control Congestion control Udp? Assume TCP for the

More information

Congestion Control. Daniel Zappala. CS 460 Computer Networking Brigham Young University

Congestion Control. Daniel Zappala. CS 460 Computer Networking Brigham Young University Congestion Control Daniel Zappala CS 460 Computer Networking Brigham Young University 2/25 Congestion Control how do you send as fast as possible, without overwhelming the network? challenges the fastest

More information

Networks. Wu-chang Fengy Dilip D. Kandlurz Debanjan Sahaz Kang G. Shiny. Ann Arbor, MI Yorktown Heights, NY 10598

Networks. Wu-chang Fengy Dilip D. Kandlurz Debanjan Sahaz Kang G. Shiny. Ann Arbor, MI Yorktown Heights, NY 10598 Techniques for Eliminating Packet Loss in Congested TCP/IP Networks Wu-chang Fengy Dilip D. Kandlurz Debanjan Sahaz Kang G. Shiny ydepartment of EECS znetwork Systems Department University of Michigan

More information

CS 421: COMPUTER NETWORKS SPRING FINAL May 16, minutes

CS 421: COMPUTER NETWORKS SPRING FINAL May 16, minutes CS 4: COMPUTER NETWORKS SPRING 03 FINAL May 6, 03 50 minutes Name: Student No: Show all your work very clearly. Partial credits will only be given if you carefully state your answer with a reasonable justification.

More information

A Hybrid Systems Modeling Framework for Fast and Accurate Simulation of Data Communication Networks. Motivation

A Hybrid Systems Modeling Framework for Fast and Accurate Simulation of Data Communication Networks. Motivation A Hybrid Systems Modeling Framework for Fast and Accurate Simulation of Data Communication Networks Stephan Bohacek João P. Hespanha Junsoo Lee Katia Obraczka University of Delaware University of Calif.

More information

Resource allocation in networks. Resource Allocation in Networks. Resource allocation

Resource allocation in networks. Resource Allocation in Networks. Resource allocation Resource allocation in networks Resource Allocation in Networks Very much like a resource allocation problem in operating systems How is it different? Resources and jobs are different Resources are buffers

More information

ENRICHMENT OF SACK TCP PERFORMANCE BY DELAYING FAST RECOVERY Mr. R. D. Mehta 1, Dr. C. H. Vithalani 2, Dr. N. N. Jani 3

ENRICHMENT OF SACK TCP PERFORMANCE BY DELAYING FAST RECOVERY Mr. R. D. Mehta 1, Dr. C. H. Vithalani 2, Dr. N. N. Jani 3 Research Article ENRICHMENT OF SACK TCP PERFORMANCE BY DELAYING FAST RECOVERY Mr. R. D. Mehta 1, Dr. C. H. Vithalani 2, Dr. N. N. Jani 3 Address for Correspondence 1 Asst. Professor, Department of Electronics

More information

Reliable Transport II: TCP and Congestion Control

Reliable Transport II: TCP and Congestion Control Reliable Transport II: TCP and Congestion Control Stefano Vissicchio UCL Computer Science COMP0023 Recap: Last Lecture Transport Concepts Layering context Transport goals Transport mechanisms and design

More information

TCP/IP over ATM over Satellite Links

TCP/IP over ATM over Satellite Links TCP/IP over ATM over Satellite Links Seong-Cheol Kim Samsung Electronics Co. Ltd. http://www.cis.ohio-state.edu/~jain/ 1 Overview TCP over ABR over Satellites TCP over UBR over Satellites Improving TCP

More information

CSCD 330 Network Programming Winter 2015

CSCD 330 Network Programming Winter 2015 CSCD 330 Network Programming Winter 2015 Lecture 11a Transport Layer Reading: Chapter 3 Some Material in these slides from J.F Kurose and K.W. Ross All material copyright 1996-2007 1 Chapter 3 Sections

More information

The Transport Layer Congestion control in TCP

The Transport Layer Congestion control in TCP CPSC 360 Network Programming The Transport Layer Congestion control in TCP Michele Weigle Department of Computer Science Clemson University mweigle@cs.clemson.edu http://www.cs.clemson.edu/~mweigle/courses/cpsc360

More information

Goals of Today s Lecture! Congestion Control! Course So Far.! Congestion Control Overview! It s Not Just The Sender & Receiver! Congestion is Natural!

Goals of Today s Lecture! Congestion Control! Course So Far.! Congestion Control Overview! It s Not Just The Sender & Receiver! Congestion is Natural! Goals of Today s Lecture! Congestion Control! EE 22: Intro to Communication Networks Fall 200 (MW 4-5:30 in 0 Barker) Scott Shenker TAs: Sameer Agarwal, Sara Alspaugh, Igor Ganichev, Prayag Narula http://inst.eecs.berkeley.edu/~ee22/

More information

Congestion Control. Principles of Congestion Control. Network assisted congestion. Asynchronous Transfer Mode. Computer Networks 10/23/2013

Congestion Control. Principles of Congestion Control. Network assisted congestion. Asynchronous Transfer Mode. Computer Networks 10/23/2013 Congestion Control Kai Shen Principles of Congestion Control Congestion: Informally: too many sources sending too much data too fast for the network to handle Results of congestion: long delays (e.g. queueing

More information

Studying Fairness of TCP Variants and UDP Traffic

Studying Fairness of TCP Variants and UDP Traffic Studying Fairness of TCP Variants and UDP Traffic Election Reddy B.Krishna Chaitanya Problem Definition: To study the fairness of TCP variants and UDP, when sharing a common link. To do so we conduct various

More information

Congestion Control 3/16/09

Congestion Control 3/16/09 Congestion Control Outline Resource Allocation Queuing TCP Congestion Control Spring 009 CSE3064 Issues Two sides of the same coin pre-allocate resources so at to avoid congestion control congestion if

More information

Department of EECS - University of California at Berkeley EECS122 - Introduction to Communication Networks - Spring 2005 Final: 5/20/2005

Department of EECS - University of California at Berkeley EECS122 - Introduction to Communication Networks - Spring 2005 Final: 5/20/2005 Name: SID: Department of EECS - University of California at Berkeley EECS122 - Introduction to Communication Networks - Spring 2005 Final: 5/20/2005 There are 10 questions in total. Please write your SID

More information

Performance of UMTS Radio Link Control

Performance of UMTS Radio Link Control Performance of UMTS Radio Link Control Qinqing Zhang, Hsuan-Jung Su Bell Laboratories, Lucent Technologies Holmdel, NJ 77 Abstract- The Radio Link Control (RLC) protocol in Universal Mobile Telecommunication

More information

FB(9,3) Figure 1(a). A 4-by-4 Benes network. Figure 1(b). An FB(4, 2) network. Figure 2. An FB(27, 3) network

FB(9,3) Figure 1(a). A 4-by-4 Benes network. Figure 1(b). An FB(4, 2) network. Figure 2. An FB(27, 3) network Congestion-free Routing of Streaming Multimedia Content in BMIN-based Parallel Systems Harish Sethu Department of Electrical and Computer Engineering Drexel University Philadelphia, PA 19104, USA sethu@ece.drexel.edu

More information

Protocol Overview. TCP/IP Performance. Connection Types in TCP/IP. Resource Management. Router Queues. Control Mechanisms ITL

Protocol Overview. TCP/IP Performance. Connection Types in TCP/IP. Resource Management. Router Queues. Control Mechanisms ITL Protocol Overview TCP/IP Performance E-Mail HTTP (WWW) Remote Login File Transfer TCP UDP ITL IP ICMP ARP RARP (Auxiliary Services) ATM Ethernet, X.25, HDLC etc. 2/13/06 Hans Kruse & Shawn Ostermann, Ohio

More information

TCP/IP Performance ITL

TCP/IP Performance ITL TCP/IP Performance ITL Protocol Overview E-Mail HTTP (WWW) Remote Login File Transfer TCP UDP IP ICMP ARP RARP (Auxiliary Services) Ethernet, X.25, HDLC etc. ATM 4/30/2002 Hans Kruse & Shawn Ostermann,

More information

Congestion in Data Networks. Congestion in Data Networks

Congestion in Data Networks. Congestion in Data Networks Congestion in Data Networks CS420/520 Axel Krings 1 Congestion in Data Networks What is Congestion? Congestion occurs when the number of packets being transmitted through the network approaches the packet

More information

Chapter 24 Congestion Control and Quality of Service 24.1

Chapter 24 Congestion Control and Quality of Service 24.1 Chapter 24 Congestion Control and Quality of Service 24.1 Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display. 24-1 DATA TRAFFIC The main focus of congestion control

More information