NEW TRANSFER METHODS FOR CLIENT-SERVER TRAFFIC IN ATM NETWORKS


David Lecumberri, Kai-Yeung Siu*, Paolo Narvaez
Massachusetts Institute of Technology

Masayoshi Nabeshima, Naoaki Yamanaka
NTT Network Service Systems Laboratories

Global Telecommunications Conference - Globecom '99, IEEE

Abstract

This paper presents new transfer schemes for client-server traffic over ATM networks, with the objective of achieving a bounded transmission time of about one second for as many connection requests as possible. We evaluate via simulations the performance of these new schemes in terms of network utilization, cell loss ratio, and transmission time. The simulation results show that these schemes provide effective means of supporting client-server traffic with quasi-real time requirements.

1 Introduction

Asynchronous Transfer Mode (ATM) is a promising technology for supporting future multimedia services (e.g., voice, video, and data). Multiple service classes for ATM have already been described in the literature and developed specifically to carry traffic with different service requirements. Available bit rate (ABR), unspecified bit rate (UBR), variable bit rate (VBR) and ATM block transfer (ABT) are some examples [1], [2], [3]. The recent growth of the Internet due to the popularity of the World Wide Web has fueled many research studies on how ATM can support upper layer applications. In fact, most Internet traffic is web-related [4]. Traffic generated from web-related applications (simply referred to as client-server traffic in this paper) has some important characteristics; however, there has been little research on data transmission methods that take these general characteristics into account. The motivation of our work is to develop better transfer methods to support such client-server traffic in ATM networks. In a normal web transaction, a user enters an address on the browser, i.e. it requests some data from a server. The server then transmits the requested document to the client.
As opposed to conventional studies that considered each of these two flows (client to server and server to client) independently of each other, we consider them coupled and not to be treated separately. It is also worth noticing that the client typically sends the server a very small amount of information (HTTP requests are usually around 50 bytes long), whereas the data transmitted from the server to the client is much larger (up to several megabytes). This asymmetry in the amount of information sent in each direction is a factor in how congestion affects network performance. Since the data sent from the client to the server is much smaller than what is sent from the server to the client, the latter traffic is what may cause congestion on the switches. Therefore, we consider that client/server traffic is coupled and asymmetric. We will evaluate here several new schemes that take the characteristics of client-server traffic into account. We first describe the general behavior of client, server and switch, and then discuss the differences between the proposed methods.

* Please direct all correspondence to K.-Y. Siu, Rm C, 77 Massachusetts Ave., MIT, Cambridge, MA 02139, USA. siu@sunny.mit.edu

2 New methods of transferring data

We consider that traffic generated from future multimedia applications can be classified into three forms:
- Non-Real time communication (NRT) is store-and-forward traffic, like current Internet traffic.
- Real time communication (RT) is stream traffic, like Internet telephony.
- Quasi-Real time communication (QRT) is a new concept; it refers to transfers of large amounts of data, such as 10 Mbytes, within an amount of time on the order of one second.

Our proposed schemes address QRT communication, and their purpose is to keep the transmission time of each connection within the order of one second. Moreover, our schemes maintain high utilization of the network and a low cell loss probability.
We will first describe the operation of the client. When a client sends data to a server, an additional OAM (operation and maintenance) cell is sent ahead of the data. The OAM cell has a traffic field that indicates whether the data following is sent from client to server (CLIENT) or from server to client (SERVER). It also has a bandwidth field, which is set to the maximum rate at which the client can receive data. This bandwidth field, as we will see below, is updated along the path through the network from the client to the server. We assume the client could be a single computer but

also a cluster of computers. These clients generate HTTP requests according to a given statistical distribution. The traffic pattern for HTTP requests can be decomposed into human-generated requests and subsequent machine-generated requests needed to recover documents included in a given web document, such as inlined images. For the particular Quasi-Real Time (QRT) communication that we consider, we model the generation of HTTP requests considering only the human-generated requests. We employ a Poisson process in which HTTP requests are generated according to an exponential interarrival distribution, parameterized with the mean interarrival time. We claim this model is accurate for our simulations because of the human nature of HTTP requests and also because of the aggregate nature of the traffic when we consider the client to be a cluster of computers. Next, let us describe the behavior of the switch when receiving data from the client to the server. When the OAM cell arrives at an ATM switch, the switch recognizes that the data following is sent from client to server because the traffic field of the OAM cell is set to CLIENT. The switch monitors the value of the residual bandwidth on the reverse link where the coupled flow traverses, and it then compares the monitored value with the value in the bandwidth field of the OAM cell. It then sets the bandwidth field to the minimum of the two values; e.g., if the monitored residual bandwidth is higher than the value in the OAM cell, it will not modify that value. The residual bandwidth is monitored by maintaining, at each switch, a table of the connections that are active at any given time. The three proposed schemes differ in the operation of the switch when receiving an OAM cell:
- Scheme 1: Under Scheme 1, the switch will not take any further action, other than monitoring the residual bandwidth available in the reverse link.
- Scheme 2: Under Scheme 2, the switch, after monitoring the residual bandwidth available in the reverse link, will reserve it, thereby decreasing the residual bandwidth available for other requests. When the data from the server to the client arrives at the switch, it will extract from the OAM cell the rate at which the server is actually transmitting. The switch will then compare this value with the bandwidth previously reserved for that session and release the extra bandwidth reserved, if any. This more conservative approach prevents cell loss, but may keep the link underutilized when a request reserves more bandwidth than it will actually use.
- Scheme 3: We realize that in Scheme 1, once there is a collision and the link load is over 100% (i.e., the incoming traffic is more than the bottleneck link can handle), the queue is likely to overflow. This is aggravated by the fact that most connections are likely to last for about one second, during which the queue fills up very quickly. As soon as a connection releases its bandwidth and the switch advertises this value, it is highly probable that under high load a new connection is going to use it, and again the incoming traffic is going to approach or exceed the outgoing capacity. Therefore, it will be very difficult for the queue length to be reduced, and in the event of another collision, cell loss is likely to occur soon after the new connection is established. The key idea of this scheme is to release the bandwidth in a progressive manner, based on the available queue capacity. If the queue is almost full, then we should try to drain it before actually accepting more traffic that could fill it up again. Scheme 3 is based on Scheme 1 in the sense that we do not reserve the bandwidth. More specifically, the switch, upon arrival of an OAM cell from the client, will monitor the residual bandwidth of the reverse link, R. Then it will compute the value

    P = current queue length / maximum queue length

based on which it will adjust the value of the residual bandwidth, B, that is advertised to the server, according to the following algorithm:

    if (P <= LB)
        B = R
    else if (P >= HB)
        B = 0
    else
        B = R * (HB - P) / (HB - LB)
    endif

where LB is the lower bound below which the switch gives all the residual bandwidth available, and HB is the upper bound above which, even though there might be some residual bandwidth, the switch will not advertise any, in order to let the queue drain. Between these two bounds, the advertised residual bandwidth is proportional to the remaining queue capacity.

Finally, let us describe the behavior of the server. Since the server may know the size of the data to transmit (1), it

(1) This is not always the case, particularly for web transactions. There is an increasing amount of dynamic content on the web, where documents are generated on-the-fly. For these applications, the server does not know in advance the size of the document to be transmitted.

calculates the value D (cells/s), which is the size of the requested data divided by the targeted transfer time, such as one second. The server also obtains the value of the residual bandwidth R (cells/s) available in the reverse path from the bandwidth field of the OAM cell. The server can determine the transmission rate for sending data to the client using the minimum of R and D (or the remaining bandwidth at which the server can transmit). A multiplicative parameter a (set to 0.9) is also applied to the actual bandwidth to be used; in this way, the server will only use a times the bandwidth that the network advertises. It will also determine that the connection cannot be established if the available rate R falls below a lower bound, which we specify as a fraction of the desired transmission rate D. Upon deciding the rate at which it is going to send data to the client, the server will then send an OAM cell ahead of the data. This cell will have its traffic field set to SERVER and, in the bandwidth field, the rate at which the server is actually sending. It will also send an additional OAM cell following the last ATM data cell of every transmission, with the traffic field set to CLOSE. This additional OAM cell imposes a negligible overhead on the server; its purpose is to help the switches compute the active (i.e. currently consumed) and residual bandwidth, and to allow the clients to close a connection and collect statistics about it. Two important parameters to set at the server are the data size distribution and the lower bound on the available bandwidth below which the server will declare a backoff and not send any data at all. For the data size distribution, we have chosen a very heavy-tailed distribution ranging from 200 Kbytes to 10 Mbytes, with a mean data size of 4.92 Mbytes, i.e. 39.36 Mbits.
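The end-to-end negotiation described above (client OAM cell, per-switch clamping of the bandwidth field, server rate decision) can be sketched as follows. This is a minimal illustration under our own assumptions: the function names, the unit-less example numbers, and the omission of the "remaining bandwidth at the server" term are ours, not the paper's.

```python
# Minimal sketch of the OAM-based rate negotiation described above.
# Names and numbers are illustrative, not taken from the paper.

def switch_clamp(cell_bw: float, link_capacity: float,
                 active_rates: list[float]) -> float:
    """Each switch sets the bandwidth field to the minimum of its
    current value and the residual bandwidth on the reverse link,
    computed from the table of active connections."""
    residual = max(link_capacity - sum(active_rates), 0.0)
    return min(cell_bw, residual)

def server_rate(data_size: float, advertised_bw: float,
                target_time: float = 1.0, alpha: float = 0.9,
                backoff_factor: float = 5.0):
    """Server computes D = size / target time, declares a backoff if
    the advertised rate would stretch the transfer past backoff_factor
    times the target, and otherwise sends at alpha * min(R, D)."""
    d = data_size / target_time
    if advertised_bw < d / backoff_factor:
        return None                      # backoff: send nothing
    return alpha * min(advertised_bw, d)

# Client advertises 100.0; two switches clamp it against their
# reverse-link residuals; the server then decides its sending rate.
bw = 100.0
for capacity, rates in [(100.0, [30.0]), (100.0, [10.0, 10.0])]:
    bw = switch_clamp(bw, capacity, rates)
print(bw)                        # → 70.0 (tightest residual on the path)
print(server_rate(90.0, bw))     # → 63.0 (0.9 * min(70, 90))
```

The clamping step is why the server ends up seeing the bottleneck residual of the whole reverse path: each hop can only lower the field, never raise it.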
For the backoff lower bound, we declare a backoff when the transmission time is going to be more than five times the target time for a particular transmission, i.e. if the transmission is going to last more than 5 seconds.

3 Expected performance, preliminary analysis

We employ a very simple configuration, in which three clients share the same bottleneck link in their connections to three servers. Because of its symmetry, this simple configuration will provide information about the fairness of each scheme and easily separate the influence of each parameter on its performance. The bandwidth contention (i.e. load) on the network is determined by four parameters. Three of them are fixed at the network level and the remaining one (client request frequency) is fixed at the simulation level:
- Round-trip times: propagation delays are set to 1 ms for the links connecting either clients or servers to the switches and 5 ms for the link connecting the two switches, giving a round-trip time of 14 ms.
- Link speeds and buffer space: all links in the network model are identical duplex links transmitting at 100 Mbps. This creates a bottleneck on the link connecting the two switches, over which we can test the performance of the three schemes. For this link speed, our choice of buffer size is 5,000 cells per outgoing link at each switch.
- Size of the data to be transmitted: as mentioned in the server description, we use a heavy-tailed distribution of sizes, with a mean value of 4.92 Mbytes of data to send per request.
- Frequency at which clients request data from the servers.

We want to test the behavior of the three schemes and compare their performance under different parameters. We want to put a very heavy load on the network, so we can test how the three schemes differ. For the particular topology that we are using, the load on the network is determined by the frequency at which clients request data from the servers.
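A quick arithmetic check of the load implied by these parameters (mean 4.92 Mbyte transfer, one-second target, 100 Mbps bottleneck):

```python
# Each request wants to move the mean 4.92 Mbytes within the 1 s
# target, so its desired rate is 4.92 * 8 = 39.36 Mbps.
mean_size_mbytes = 4.92
link_mbps = 100.0
rate_mbps = mean_size_mbytes * 8
print(rate_mbps)                  # → 39.36 Mbps, i.e. 39.36% of the bottleneck
print(3 * rate_mbps > link_mbps)  # → True: three simultaneous on-target
                                  #   transfers already overload the link
```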
We have a bottleneck link with a capacity of 100 Mbps, three client/server pairs, and a mean rate desired by the server of 4.92 Mbytes/sec per connection, that is 39.36 Mbps, or 39.36% of the link capacity. Therefore, we need at least one request per client per second to maintain a moderate load. Each of the parameters will have a different impact on the performance of the three schemes. The main purpose of the three schemes is to keep the transmission time close to a target time of one second. Therefore, with all transmissions lasting on the order of one second, the ratio between the link capacity and the mean size of the data determines the number of simultaneous transmissions a particular link can handle. If the data size is large compared with the link capacity, we are allowing just a few transmissions on the link, forcing other transmission requests to back off. This may cause unfairness over a short period of time (on the order of a few seconds), because a connection could eventually obtain and maintain a large amount of bandwidth. Under these circumstances, a higher request frequency at the client translates into a larger number of backoffs, but the transmission time of the connections that do get established is not dramatically affected, especially in Scheme 2. Therefore, the ratio between the client request frequency and the round-trip time is the most important factor determining performance. As intuition may indicate, the difference between the three schemes will depend on how frequently requests are sent from the clients. If the mean interarrival time between two consecutive requests is much larger than a round-trip

time, then the performance perceived by the users will be quite similar for the three schemes. If, however, the round-trip time and interarrival times are comparable (or the interarrival times are smaller than the round-trip time), switches using Schemes 1 and 3 will experience more cell loss, as many servers may attempt to use the same residual bandwidth and thereby overwhelm the switch. On the other hand, switches using Scheme 2 will be much more conservative, as they will reserve capacity on a link just for the first attempted connection. Although this scheme will not cause any cell loss, it will result in a larger number of sessions being rejected and in underutilization of the link bandwidth.

4 Simulation results and discussion

The main purpose of the simulations is to verify the correctness of the analysis and expected performance. To do so, we need to stress the network in such a way that the differences in behavior of the three schemes can be shown; this is also necessary in order to understand how the three proposed schemes work. We define a notion of collision window: the time elapsed from when an OAM cell carrying a connection request from a client passes through a switch until it returns to the switch, followed by the actual data. In general, this window will vary from switch to switch and will be equal to a round-trip time from the switch to the server. In Scheme 2, during this time, the residual bandwidth of the link will be reserved but not used. In Schemes 1 and 3, if another request arrives from a client during this time, it may lead to an overload of the link, since the two connections will believe they have the same residual bandwidth available. We define this event as a collision. In the event that there is no collision, all schemes behave identically, since the state of the switch seen by the connection request will be the same regardless of the scheme used.
4.1 Performance under moderate load

We first prepared a set of simulations in which the 3 clients were performing 4 requests per second. That means 12 requests per second overall and an average offered load of 4.92 Mbytes x 8 bits/byte x 12 requests/sec = 472 Mbps. This rate is over 4 times more bandwidth than the bottleneck link can handle. It corresponds to having a request through the switch every 83 ms on average, when the round-trip time (i.e. the collision window) from the last switch to the server is 2 ms. This implies that having two requests arrive during the same collision window is unlikely. Therefore, the behavior of the three methods under these conditions should be quite similar. From the simulation results, we verify that our intuition about the similar performance of the three schemes is correct. The utilization of the network is the same for the three methods (92.2% on average). The cell loss is zero in all three cases, which is also consistent with the fact that no two connections collided over the duration of the simulation. However, even though the traffic requests exceed the link capacity, the link is still under-utilized. The link remains only partially utilized for some time, even though there is also a large number of connections that get backed off (72%). This happens, again, because the number of requests per second is small: while the link is fully utilized, many connections have to be backed off; however, when a connection finishes, thereby increasing the residual bandwidth, it takes some time until a new connection request for that bandwidth is initiated by a client.

4.2 Performance under high load

We also performed another set of simulations, in this case increasing the request frequency of the clients up to 100 requests per second. This means having a request at the bottleneck link every 3.33 ms.
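Under the Poisson request model, the chance that a second request falls inside the 2 ms collision window can be checked directly for the two loads above (12 requests/sec aggregate at moderate load; 300 requests/sec, one every 3.33 ms, at high load). This is a sketch of the intuition, not a calculation from the paper:

```python
import math

# Probability that a second Poisson arrival lands inside the 2 ms
# collision window, for the two simulated aggregate request rates.
def collision_prob(rate_per_s: float, window_s: float = 0.002) -> float:
    return 1 - math.exp(-rate_per_s * window_s)

print(round(collision_prob(12), 4))    # moderate load → ~0.0237
print(round(collision_prob(300), 4))   # high load     → ~0.4512
```

At moderate load collisions are rare (about 2%), matching the observation that the three schemes behave alike; at high load nearly half the requests find another request inside their window.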
Since the collision window is 2 ms, we expect to see differences in the behavior of the three proposed methods.

Scheme 1

For Scheme 1, we start seeing collisions, i.e. more than one connection gets established with the perception that a given amount of residual bandwidth is available to each of them. Since there is no mechanism to lower the transmission rate of the servers even when the switch is experiencing cell loss, the servers keep sending at the same speed once the connection is established. This causes, as we see in Figure 1, the queue in the bottleneck link to overflow and, therefore, cell loss. Eventually, when the competing connections finished, and if there were no further collisions, the queues would drain.

[Figure 1: Queue sizes for each scheme]
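A toy discrete-time sketch, with our own numbers, of why the Scheme 1 queue in Figure 1 overflows after a collision: once two connections each believe the same residual bandwidth is theirs, inflow exceeds the link rate until the 5,000-cell buffer fills.

```python
# Toy model of the bottleneck queue in Scheme 1 after a collision:
# two colliding servers each send at 60% of link capacity, so inflow
# exceeds capacity and the 5,000-cell buffer eventually overflows.
# The 60% figure is illustrative only.
CAPACITY = 235_849             # 100 Mbps link / 424-bit cells -> cells/s
BUFFER = 5_000                 # buffer size used in the simulations (cells)
arrival = 2 * 0.6 * CAPACITY   # aggregate inflow from the two connections

q, lost = 0.0, 0.0
for _ in range(1000):          # simulate 1 s in 1 ms steps
    q += (arrival - CAPACITY) / 1000
    if q > BUFFER:
        lost += q - BUFFER     # cells dropped once the queue is full
        q = BUFFER
print(int(lost))
```

With inflow 20% above capacity the buffer fills in roughly a tenth of a second and every excess cell after that is lost, mirroring the sustained loss seen in Figure 1 until the colliding connections finish.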

[Figure 2: Cell loss probability distribution]

However, since the request frequency is very high, the queue will not be drained and will keep experiencing cell loss. On the other hand, the link is fully utilized, thus maximizing the transmission over the bottleneck link. The overall cell loss ratio (total cells lost over total cells sent) is 4.53%. We also provide a distribution of cell loss on a per-connection basis, shown in Figure 2. We can see from that figure that the loss ratio is not evenly distributed: most connections either have no cell loss or experience around 8% cell loss. This is explained by the fact that cell loss does not happen continuously, but only during transmissions in which a collision occurred; therefore we should expect this kind of bimodal distribution.

Scheme 2

From the simulation for Scheme 2, we see how this more conservative scheme, in which we guarantee that the advertised residual bandwidth is always available, prevents cell loss. This is not surprising, since the switch guarantees that even when a collision occurs, only the first connection arriving will get the residual bandwidth in the OAM cell. Utilization of the link in Scheme 2 is slightly lower than in Scheme 1, although it is very high (98.3%) due to the very high frequency of client requests. In Figure 1, we show how the queue remains almost empty, supporting the fact that there is no cell loss at all. The backoff probability is similar to Scheme 1, and since the number of requests accepted is so small with respect to the number of requests attempted, we cannot extract any conclusion from the comparison of backoff ratios for Scheme 1 and Scheme 2. Analysis and intuition indicate that backoff in Scheme 2 should be higher than in Scheme 1, but this claim cannot be verified with our simulations.
Concerning the distribution of total transmission time, similarly to Scheme 1, most of the connections get transmitted within the target time. (The reason why the queue in Scheme 2 is not completely empty is that OAM cells use up some bandwidth, which is not accounted for.) However, given that in Scheme 2 there is no cell loss and the link remains highly utilized, we conclude that under extremely high load, Scheme 2 outperforms Scheme 1.

Scheme 3

We performed a new set of simulations under the same high load conditions and observed the expected improvement over Scheme 1. We set LB to 0.2 and HB to 0.5, i.e. when the queue occupancy is below 20% we release all the residual bandwidth available, and when it is above 50% we advertise zero residual bandwidth, even though we could eventually have some. The link load is quite similar to what was obtained for Scheme 1; we still see overloading of the bottleneck link. However, with the progressive release of bandwidth, we allow the queues to drain before over-utilization occurs again. The difference with Scheme 1 can be clearly seen in Figure 1. Cell loss occurs only when there is queue overflow; comparing this with the queue size for Scheme 1, we can see that cell loss occurs much less often in Scheme 3. The absolute cell loss ratio is 2.11%. Moreover, since the queue is never empty, we guarantee that the link is fully utilized. Therefore, there is an improvement in performance, since cell loss is reduced while link utilization remains 100%. We also point out that, in addition to reducing the overall cell loss probability, draining the queues before losses can occur again decreases the cell loss ratio on a per-connection basis. As we can see from Figure 2, the cell loss distribution does not show the same bimodal characteristic as in Scheme 1, since most of the connections experience a cell loss ratio close to zero.
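The progressive-release rule evaluated above, with the LB = 0.2 and HB = 0.5 values used in these simulations, can be written as a short function (a sketch; the names are ours):

```python
def advertised_bandwidth(residual: float, queue_len: int, queue_max: int,
                         lb: float = 0.2, hb: float = 0.5) -> float:
    """Scheme 3: release residual bandwidth progressively as a function
    of queue occupancy P. Below LB everything is advertised, above HB
    nothing, and in between the advertisement scales linearly with the
    remaining queue capacity."""
    p = queue_len / queue_max
    if p <= lb:
        return residual
    if p >= hb:
        return 0.0
    return residual * (hb - p) / (hb - lb)

# A 35%-full queue (1,750 of 5,000 cells) sits halfway between the
# bounds, so half of the residual bandwidth is advertised:
print(round(advertised_bandwidth(100.0, 1750, 5000), 6))  # → 50.0
```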
Finally, the transmission times are similar to those in Schemes 1 and 2. However, we expect Scheme 3 to be slightly better than Scheme 1, because keeping the queues less populated tends to decrease the queuing delay; hence, the transmission time should be slightly lower.

References

[1] The ATM Forum, Traffic Management Specification Version 4.0, April 1996.
[2] ITU-T Recommendation I.371, Traffic control and congestion control in B-ISDN, Geneva, Switzerland, Aug. 1996.
[3] B. Vandalore, S. Kalyanaraman, R. Jain, R. Goyal, and S. Fahmy, "Performance of Bursty World Wide Web (WWW) Sources over ABR," WebNet '97, Toronto, Nov. 1997.
[4] K. Thompson, G. J. Miller, and R. Wilder, "Wide-Area Internet Traffic Patterns and Characteristics," IEEE Network, Nov./Dec. 1997.


More information

Congestion in Data Networks. Congestion in Data Networks

Congestion in Data Networks. Congestion in Data Networks Congestion in Data Networks CS420/520 Axel Krings 1 Congestion in Data Networks What is Congestion? Congestion occurs when the number of packets being transmitted through the network approaches the packet

More information

Congestion Control in Communication Networks

Congestion Control in Communication Networks Congestion Control in Communication Networks Introduction Congestion occurs when number of packets transmitted approaches network capacity Objective of congestion control: keep number of packets below

More information

TCP and BBR. Geoff Huston APNIC

TCP and BBR. Geoff Huston APNIC TCP and BBR Geoff Huston APNIC The IP Architecture At its heart IP is a datagram network architecture Individual IP packets may be lost, re-ordered, re-timed and even fragmented The IP Architecture At

More information

Comparing the bandwidth and priority Commands of a QoS Service Policy

Comparing the bandwidth and priority Commands of a QoS Service Policy Comparing the and priority s of a QoS Service Policy Contents Introduction Prerequisites Requirements Components Used Conventions Summary of Differences Configuring the Configuring the priority Which Traffic

More information

Source 1. Destination 1. Bottleneck Link. Destination 2. Source 2. Destination N. Source N

Source 1. Destination 1. Bottleneck Link. Destination 2. Source 2. Destination N. Source N WORST CASE BUFFER REQUIREMENTS FOR TCP OVER ABR a B. Vandalore, S. Kalyanaraman b, R. Jain, R. Goyal, S. Fahmy Dept. of Computer and Information Science, The Ohio State University, 2015 Neil Ave, Columbus,

More information

CS 556 Advanced Computer Networks Spring Solutions to Midterm Test March 10, YOUR NAME: Abraham MATTA

CS 556 Advanced Computer Networks Spring Solutions to Midterm Test March 10, YOUR NAME: Abraham MATTA CS 556 Advanced Computer Networks Spring 2011 Solutions to Midterm Test March 10, 2011 YOUR NAME: Abraham MATTA This test is closed books. You are only allowed to have one sheet of notes (8.5 11 ). Please

More information

Homework 1. Question 1 - Layering. CSCI 1680 Computer Networks Fonseca

Homework 1. Question 1 - Layering. CSCI 1680 Computer Networks Fonseca CSCI 1680 Computer Networks Fonseca Homework 1 Due: 27 September 2012, 4pm Question 1 - Layering a. Why are networked systems layered? What are the advantages of layering? Are there any disadvantages?

More information

Master Course Computer Networks IN2097

Master Course Computer Networks IN2097 Chair for Network Architectures and Services Prof. Carle Department of Computer Science TU München Master Course Computer Networks IN2097 Prof. Dr.-Ing. Georg Carle Christian Grothoff, Ph.D. Stephan Günther

More information

Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015

Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015 Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015 I d like to complete our exploration of TCP by taking a close look at the topic of congestion control in TCP. To prepare for

More information

Simulation Study for a Broadband Multimedia VSAT Network

Simulation Study for a Broadband Multimedia VSAT Network Simulation Study for a Broadband Multimedia Yi Qian, Rose Hu, and Hosame Abu-Amara Nortel s 2201 Lakeside Blvd., Mail Stop 992-02-E70 Richardson, Texas 75082, USA Phone: 972-685-7264 Fax: 972-685-3463

More information

Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes

Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes Zhili Zhao Dept. of Elec. Engg., 214 Zachry College Station, TX 77843-3128 A. L. Narasimha Reddy

More information

QUALITY of SERVICE. Introduction

QUALITY of SERVICE. Introduction QUALITY of SERVICE Introduction There are applications (and customers) that demand stronger performance guarantees from the network than the best that could be done under the circumstances. Multimedia

More information

Introduction to ATM Traffic Management on the Cisco 7200 Series Routers

Introduction to ATM Traffic Management on the Cisco 7200 Series Routers CHAPTER 1 Introduction to ATM Traffic Management on the Cisco 7200 Series Routers In the latest generation of IP networks, with the growing implementation of Voice over IP (VoIP) and multimedia applications,

More information

Performance Analysis of Cell Switching Management Scheme in Wireless Packet Communications

Performance Analysis of Cell Switching Management Scheme in Wireless Packet Communications Performance Analysis of Cell Switching Management Scheme in Wireless Packet Communications Jongho Bang Sirin Tekinay Nirwan Ansari New Jersey Center for Wireless Telecommunications Department of Electrical

More information

: GFR -- Providing Rate Guarantees with FIFO Buffers to TCP Traffic

: GFR -- Providing Rate Guarantees with FIFO Buffers to TCP Traffic 97-0831: GFR -- Providing Rate Guarantees with FIFO Buffers to TCP Traffic Rohit Goyal,, Sonia Fahmy, Bobby Vandalore, Shivkumar Kalyanaraman Sastri Kota, Lockheed Martin Telecommunications Pradeep Samudra,

More information

Current Issues in ATM Forum Traffic Management Group

Current Issues in ATM Forum Traffic Management Group Current Issues in ATM Forum Traffic Management Group Columbus, OH 43210 Jain@CIS.Ohio-State.Edu http://www.cis.ohio-state.edu/~jain/ 1 Overview Effect of VS/VD GFR Virtual Paths ITU vs ATMF CDV Accumulation

More information

Introduction to Real-Time Communications. Real-Time and Embedded Systems (M) Lecture 15

Introduction to Real-Time Communications. Real-Time and Embedded Systems (M) Lecture 15 Introduction to Real-Time Communications Real-Time and Embedded Systems (M) Lecture 15 Lecture Outline Modelling real-time communications Traffic and network models Properties of networks Throughput, delay

More information

TCP and BBR. Geoff Huston APNIC

TCP and BBR. Geoff Huston APNIC TCP and BBR Geoff Huston APNIC Computer Networking is all about moving data The way in which data movement is controlled is a key characteristic of the network architecture The Internet protocol passed

More information

B. Bellalta Mobile Communication Networks

B. Bellalta Mobile Communication Networks IEEE 802.11e : EDCA B. Bellalta Mobile Communication Networks Scenario STA AP STA Server Server Fixed Network STA Server Upwnlink TCP flows Downlink TCP flows STA AP STA What is the WLAN cell performance

More information

What Is Congestion? Computer Networks. Ideal Network Utilization. Interaction of Queues

What Is Congestion? Computer Networks. Ideal Network Utilization. Interaction of Queues 168 430 Computer Networks Chapter 13 Congestion in Data Networks What Is Congestion? Congestion occurs when the number of packets being transmitted through the network approaches the packet handling capacity

More information

Appendix B. Standards-Track TCP Evaluation

Appendix B. Standards-Track TCP Evaluation 215 Appendix B Standards-Track TCP Evaluation In this appendix, I present the results of a study of standards-track TCP error recovery and queue management mechanisms. I consider standards-track TCP error

More information

Teletraffic theory (for beginners)

Teletraffic theory (for beginners) Teletraffic theory (for beginners) samuli.aalto@hut.fi teletraf.ppt S-38.8 - The Principles of Telecommunications Technology - Fall 000 Contents Purpose of Teletraffic Theory Network level: switching principles

More information

Priority Traffic CSCD 433/533. Advanced Networks Spring Lecture 21 Congestion Control and Queuing Strategies

Priority Traffic CSCD 433/533. Advanced Networks Spring Lecture 21 Congestion Control and Queuing Strategies CSCD 433/533 Priority Traffic Advanced Networks Spring 2016 Lecture 21 Congestion Control and Queuing Strategies 1 Topics Congestion Control and Resource Allocation Flows Types of Mechanisms Evaluation

More information

Congestion Control Open Loop

Congestion Control Open Loop Congestion Control Open Loop Muhammad Jaseemuddin Dept. of Electrical & Computer Engineering Ryerson University Toronto, Canada References 1. A. Leon-Garcia and I. Widjaja, Communication Networks: Fundamental

More information

Lecture 4 Wide Area Networks - Congestion in Data Networks

Lecture 4 Wide Area Networks - Congestion in Data Networks DATA AND COMPUTER COMMUNICATIONS Lecture 4 Wide Area Networks - Congestion in Data Networks Mei Yang Based on Lecture slides by William Stallings 1 WHAT IS CONGESTION? congestion occurs when the number

More information

Quality Control Scheme for ATM Switching Network

Quality Control Scheme for ATM Switching Network UDC 621.395.345: 621.395.74 Quality Control Scheme for ATM Switching Network VMasafumi Katoh VTakeshi Kawasaki VSatoshi Kakuma (Manuscript received June 5,1997) In an ATM network, there are many kinds

More information

Flow Control. Flow control problem. Other considerations. Where?

Flow Control. Flow control problem. Other considerations. Where? Flow control problem Flow Control An Engineering Approach to Computer Networking Consider file transfer Sender sends a stream of packets representing fragments of a file Sender should try to match rate

More information

CS519: Computer Networks

CS519: Computer Networks Lets start at the beginning : Computer Networks Lecture 1: Jan 26, 2004 Intro to Computer Networking What is a for? To allow two or more endpoints to communicate What is a? Nodes connected by links Lets

More information

Designing Efficient Explicit-Rate Switch Algorithm with Max-Min Fairness for ABR Service Class in ATM Networks

Designing Efficient Explicit-Rate Switch Algorithm with Max-Min Fairness for ABR Service Class in ATM Networks Designing Efficient Explicit-Rate Switch Algorithm with Max-Min Fairness for ABR Service Class in ATM Networks Hiroyuki Ohsaki, Masayuki Murata and Hideo Miyahara Department of Informatics and Mathematical

More information

Computer Networking. Queue Management and Quality of Service (QOS)

Computer Networking. Queue Management and Quality of Service (QOS) Computer Networking Queue Management and Quality of Service (QOS) Outline Previously:TCP flow control Congestion sources and collapse Congestion control basics - Routers 2 Internet Pipes? How should you

More information

QoS Guarantees. Motivation. . link-level level scheduling. Certain applications require minimum level of network performance: Ch 6 in Ross/Kurose

QoS Guarantees. Motivation. . link-level level scheduling. Certain applications require minimum level of network performance: Ch 6 in Ross/Kurose QoS Guarantees. introduction. call admission. traffic specification. link-level level scheduling. call setup protocol. reading: Tannenbaum,, 393-395, 395, 458-471 471 Ch 6 in Ross/Kurose Motivation Certain

More information

R1 Buffer Requirements for TCP over ABR

R1 Buffer Requirements for TCP over ABR 96-0517R1 Buffer Requirements for TCP over ABR, Shiv Kalyanaraman, Rohit Goyal, Sonia Fahmy Saragur M. Srinidhi Sterling Software and NASA Lewis Research Center Contact: Jain@CIS.Ohio-State.Edu http://www.cis.ohio-state.edu/~jain/

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Congestion Control. Principles of Congestion Control. Network-assisted Congestion Control: ATM. Congestion Control. Computer Networks 10/21/2009

Congestion Control. Principles of Congestion Control. Network-assisted Congestion Control: ATM. Congestion Control. Computer Networks 10/21/2009 Congestion Control Kai Shen Principles of Congestion Control Congestion: informally: too many sources sending too much data too fast for the network to handle results of congestion: long delays (e.g. queueing

More information

Chapter 6: Congestion Control and Resource Allocation

Chapter 6: Congestion Control and Resource Allocation Chapter 6: Congestion Control and Resource Allocation CS/ECPE 5516: Comm. Network Prof. Abrams Spring 2000 1 Section 6.1: Resource Allocation Issues 2 How to prevent traffic jams Traffic lights on freeway

More information

CS244a: An Introduction to Computer Networks

CS244a: An Introduction to Computer Networks Name: Student ID #: Campus/SITN-Local/SITN-Remote? MC MC Long 18 19 TOTAL /20 /20 CS244a: An Introduction to Computer Networks Final Exam: Thursday February 16th, 2000 You are allowed 2 hours to complete

More information

The War Between Mice and Elephants

The War Between Mice and Elephants The War Between Mice and Elephants Liang Guo and Ibrahim Matta Computer Science Department Boston University 9th IEEE International Conference on Network Protocols (ICNP),, Riverside, CA, November 2001.

More information

CS 421: COMPUTER NETWORKS SPRING FINAL May 16, minutes

CS 421: COMPUTER NETWORKS SPRING FINAL May 16, minutes CS 4: COMPUTER NETWORKS SPRING 03 FINAL May 6, 03 50 minutes Name: Student No: Show all your work very clearly. Partial credits will only be given if you carefully state your answer with a reasonable justification.

More information

Pessimistic Backoff for Mobile Ad hoc Networks

Pessimistic Backoff for Mobile Ad hoc Networks Pessimistic Backoff for Mobile Ad hoc Networks Saher S. Manaseer Department of computing science Glasgow University saher@dcs.gla.ac.uk Muneer Masadeh Department of Computer Science Jordan University of

More information

2. Modelling of telecommunication systems (part 1)

2. Modelling of telecommunication systems (part 1) 2. Modelling of telecommunication systems (part ) lect02.ppt S-38.45 - Introduction to Teletraffic Theory - Fall 999 2. Modelling of telecommunication systems (part ) Contents Telecommunication networks

More information

From ATM to IP and back again: the label switched path to the converged Internet, or another blind alley?

From ATM to IP and back again: the label switched path to the converged Internet, or another blind alley? Networking 2004 Athens 11 May 2004 From ATM to IP and back again: the label switched path to the converged Internet, or another blind alley? Jim Roberts France Telecom R&D The story of QoS: how to get

More information

Chapter 4. Routers with Tiny Buffers: Experiments. 4.1 Testbed experiments Setup

Chapter 4. Routers with Tiny Buffers: Experiments. 4.1 Testbed experiments Setup Chapter 4 Routers with Tiny Buffers: Experiments This chapter describes two sets of experiments with tiny buffers in networks: one in a testbed and the other in a real network over the Internet2 1 backbone.

More information

Regulation of TCP Flow in Heterogeneous Networks Using Packet Discarding Schemes

Regulation of TCP Flow in Heterogeneous Networks Using Packet Discarding Schemes Regulation of TCP Flow in Heterogeneous Networks Using Packet Discarding Schemes by Yu-Shiou Flora Sun B.S. Electrical Engineering and Computer Science (1997) Massachusetts Institute of Technology Submitted

More information

Fair and Efficient TCP Access in the IEEE Infrastructure Basic Service Set

Fair and Efficient TCP Access in the IEEE Infrastructure Basic Service Set Fair and Efficient TCP Access in the IEEE 802.11 Infrastructure Basic Service Set 1 arxiv:0806.1089v1 [cs.ni] 6 Jun 2008 Feyza Keceli, Inanc Inan, and Ender Ayanoglu Center for Pervasive Communications

More information

ADVANCED COMPUTER NETWORKS

ADVANCED COMPUTER NETWORKS ADVANCED COMPUTER NETWORKS Congestion Control and Avoidance 1 Lecture-6 Instructor : Mazhar Hussain CONGESTION CONTROL When one part of the subnet (e.g. one or more routers in an area) becomes overloaded,

More information

There are 10 questions in total. Please write your SID on each page.

There are 10 questions in total. Please write your SID on each page. Name: SID: Department of EECS - University of California at Berkeley EECS122 - Introduction to Communication Networks - Spring 2005 to the Final: 5/20/2005 There are 10 questions in total. Please write

More information

Unit 2 Packet Switching Networks - II

Unit 2 Packet Switching Networks - II Unit 2 Packet Switching Networks - II Dijkstra Algorithm: Finding shortest path Algorithm for finding shortest paths N: set of nodes for which shortest path already found Initialization: (Start with source

More information

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 Question 344 Points 444 Points Score 1 10 10 2 10 10 3 20 20 4 20 10 5 20 20 6 20 10 7-20 Total: 100 100 Instructions: 1. Question

More information

Testing Policing in ATM Networks

Testing Policing in ATM Networks Testing Policing in ATM Networks Policing is one of the key mechanisms used in ATM (Asynchrous Transfer Mode) networks to avoid network congestion. The HP E4223A policing and traffic characterization test

More information

Streaming Video and TCP-Friendly Congestion Control

Streaming Video and TCP-Friendly Congestion Control Streaming Video and TCP-Friendly Congestion Control Sugih Jamin Department of EECS University of Michigan jamin@eecs.umich.edu Joint work with: Zhiheng Wang (UofM), Sujata Banerjee (HP Labs) Video Application

More information

Report on Transport Protocols over Mismatched-rate Layer-1 Circuits with 802.3x Flow Control

Report on Transport Protocols over Mismatched-rate Layer-1 Circuits with 802.3x Flow Control Report on Transport Protocols over Mismatched-rate Layer-1 Circuits with 82.3x Flow Control Helali Bhuiyan, Mark McGinley, Tao Li, Malathi Veeraraghavan University of Virginia Email: {helali, mem5qf, taoli,

More information

SELECTION OF METRICS (CONT) Gaia Maselli

SELECTION OF METRICS (CONT) Gaia Maselli SELECTION OF METRICS (CONT) Gaia Maselli maselli@di.uniroma1.it Computer Network Performance 2 Selecting performance metrics Computer Network Performance 3 Selecting performance metrics speed Individual

More information

Traffic and Congestion Control in ATM Networks Using Neuro-Fuzzy Approach

Traffic and Congestion Control in ATM Networks Using Neuro-Fuzzy Approach Traffic and Congestion Control in ATM Networks Using Neuro-Fuzzy Approach Suriti Gupta College of Technology and Engineering Udaipur-313001 Vinod Kumar College of Technology and Engineering Udaipur-313001

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

TCP START-UP BEHAVIOR UNDER THE PROPORTIONAL FAIR SCHEDULING POLICY

TCP START-UP BEHAVIOR UNDER THE PROPORTIONAL FAIR SCHEDULING POLICY TCP START-UP BEHAVIOR UNDER THE PROPORTIONAL FAIR SCHEDULING POLICY J. H. CHOI,J.G.CHOI, AND C. YOO Department of Computer Science and Engineering Korea University Seoul, Korea E-mail: {jhchoi, hxy}@os.korea.ac.kr

More information

Internet Traffic Characteristics. How to take care of the Bursty IP traffic in Optical Networks

Internet Traffic Characteristics. How to take care of the Bursty IP traffic in Optical Networks Internet Traffic Characteristics Bursty Internet Traffic Statistical aggregation of the bursty data leads to the efficiency of the Internet. Large Variation in Source Bandwidth 10BaseT (10Mb/s), 100BaseT(100Mb/s),

More information

CS 421: COMPUTER NETWORKS SPRING FINAL May 21, minutes

CS 421: COMPUTER NETWORKS SPRING FINAL May 21, minutes CS 421: COMPUTER NETWORKS SPRING 2015 FINAL May 21, 2015 150 minutes Name: Student No: Show all your work very clearly. Partial credits will only be given if you carefully state your answer with a reasonable

More information

Overview. Lecture 22 Queue Management and Quality of Service (QoS) Queuing Disciplines. Typical Internet Queuing. FIFO + Drop tail Problems

Overview. Lecture 22 Queue Management and Quality of Service (QoS) Queuing Disciplines. Typical Internet Queuing. FIFO + Drop tail Problems Lecture 22 Queue Management and Quality of Service (QoS) Overview Queue management & RED Fair queuing Khaled Harras School of Computer Science niversity 15 441 Computer Networks Based on slides from previous

More information

(Refer Slide Time: 2:20)

(Refer Slide Time: 2:20) Data Communications Prof. A. Pal Department of Computer Science & Engineering Indian Institute of Technology, Kharagpur Lecture -23 X.25 and Frame Relay Hello and welcome to today s lecture on X.25 and

More information

Latency on a Switched Ethernet Network

Latency on a Switched Ethernet Network Page 1 of 6 1 Introduction This document serves to explain the sources of latency on a switched Ethernet network and describe how to calculate cumulative latency as well as provide some real world examples.

More information

************************************************************************ Distribution: ATM Forum Technical Working Group Members (AF-TM) *************

************************************************************************ Distribution: ATM Forum Technical Working Group Members (AF-TM) ************* ************************************************************************ ATM Forum Document Number: ATM_Forum/97-0617 ************************************************************************ Title: Worst

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

CS457 Transport Protocols. CS 457 Fall 2014

CS457 Transport Protocols. CS 457 Fall 2014 CS457 Transport Protocols CS 457 Fall 2014 Topics Principles underlying transport-layer services Demultiplexing Detecting corruption Reliable delivery Flow control Transport-layer protocols User Datagram

More information

Design Issues in Traffic Management for the ATM UBR+ Service for TCP over Satellite Networks: Report II

Design Issues in Traffic Management for the ATM UBR+ Service for TCP over Satellite Networks: Report II Design Issues in Traffic Management for the ATM UBR+ Service for TCP over Satellite Networks: Report II Columbus, OH 43210 Jain@CIS.Ohio-State.Edu http://www.cis.ohio-state.edu/~jain/ 1 Overview! Statement

More information

Latency on a Switched Ethernet Network

Latency on a Switched Ethernet Network FAQ 07/2014 Latency on a Switched Ethernet Network RUGGEDCOM Ethernet Switches & Routers http://support.automation.siemens.com/ww/view/en/94772587 This entry is from the Siemens Industry Online Support.

More information

Exercises TCP/IP Networking With Solutions

Exercises TCP/IP Networking With Solutions Exercises TCP/IP Networking With Solutions Jean-Yves Le Boudec Fall 2009 3 Module 3: Congestion Control Exercise 3.2 1. Assume that a TCP sender, called S, does not implement fast retransmit, but does

More information

Lecture 11. Transport Layer (cont d) Transport Layer 1

Lecture 11. Transport Layer (cont d) Transport Layer 1 Lecture 11 Transport Layer (cont d) Transport Layer 1 Agenda The Transport Layer (continue) Connection-oriented Transport (TCP) Flow Control Connection Management Congestion Control Introduction to the

More information

Which Service for TCP/IP Traffic on ATM: ABR or UBR?

Which Service for TCP/IP Traffic on ATM: ABR or UBR? Which Service for TCP/IP Traffic on ATM: ABR or UBR? Standby Guaranteed Joy Riders Confirmed Columbus, OH 43210-1277 Contact: Jain@CIS.Ohio-State.Edu http://www.cis.ohio-state.edu/~jain/ 2 1 Overview Service

More information

Lecture (08, 09) Routing in Switched Networks

Lecture (08, 09) Routing in Switched Networks Agenda Lecture (08, 09) Routing in Switched Networks Dr. Ahmed ElShafee Routing protocols Fixed Flooding Random Adaptive ARPANET Routing Strategies ١ Dr. Ahmed ElShafee, ACU Fall 2011, Networks I ٢ Dr.

More information

RED behavior with different packet sizes

RED behavior with different packet sizes RED behavior with different packet sizes Stefaan De Cnodder, Omar Elloumi *, Kenny Pauwels Traffic and Routing Technologies project Alcatel Corporate Research Center, Francis Wellesplein, 1-18 Antwerp,

More information

Performance Analysis & QoS Guarantee in ATM Networks

Performance Analysis & QoS Guarantee in ATM Networks P a g e 131 Global Journal of Computer Science and Technology Performance Analysis & QoS Guarantee in ATM Networks Parag Jain!, Sandip Vijay!!, S. C. Gupta!!!! Doctoral Candidate, Bhagwant Univ. & Professor,

More information

INTERNATIONAL TELECOMMUNICATION UNION

INTERNATIONAL TELECOMMUNICATION UNION INTERNATIONAL TELECOMMUNICATION UNION TELECOMMUNICATION STANDARDIZATION SECTOR STUDY PERIOD 21-24 English only Questions: 12 and 16/12 Geneva, 27-31 January 23 STUDY GROUP 12 DELAYED CONTRIBUTION 98 Source:

More information

***************************************************************** *****************************************************************

***************************************************************** ***************************************************************** ***************************************************************** ATM Forum Document Number: ATM_Forum/97-0858 ***************************************************************** Title: Factors affecting

More information

Congestion Control. Principles of Congestion Control. Network assisted congestion. Asynchronous Transfer Mode. Computer Networks 10/23/2013

Congestion Control. Principles of Congestion Control. Network assisted congestion. Asynchronous Transfer Mode. Computer Networks 10/23/2013 Congestion Control Kai Shen Principles of Congestion Control Congestion: Informally: too many sources sending too much data too fast for the network to handle Results of congestion: long delays (e.g. queueing

More information