A Survey of Congestion Control Schemes for ABR Services in ATM Networks
Hyun mee Choi
Ronald J. Vetter
Payoff

This article summarizes the advantages and disadvantages of various rate-based congestion control schemes for available bit rate (ABR) services in ATM networks. It is important that network designers understand the limitations of the various approaches so that they can make informed decisions when considering ATM for their enterprise networks.

Introduction

Broadband integrated services digital network (B-ISDN) efforts are driven by the emerging needs for high-speed communications and by enabling technologies that support new integrated services. Among the available technologies, asynchronous transfer mode (ATM) has emerged as the standard for supporting B-ISDN. ATM transmits information in short, fixed-size cells consisting of 48 bytes of payload and 5 bytes of header. The fixed size allows fast processing of cells and reduces the variance of delay, making the network suitable for integrated traffic consisting of voice, video, and data. Providing the desired quality of service (QOS) for these various traffic types is much more complex than what is done in data networks today.

Available Bit Rate (ABR) Traffic Management

The capability of ATM networks to provide large bandwidth and multiple quality of service (QOS) guarantees can be realized only when they are equipped with effective traffic management mechanisms. Traffic management includes congestion control, cell admission control, and virtual path/virtual channel (VP/VC) routing. Proper traffic management helps ensure efficient and fair operation of the network in spite of constantly varying demand. This is particularly important for data traffic, which has very little predictability and therefore cannot reserve resources in advance. Traffic management ensures that users get their desired quality of service.
However, this is difficult during periods of heavy load, especially if the traffic demands cannot be predicted in advance. This is why congestion control is the most essential aspect of traffic management.

Mixing Low-Speed and High-Speed Networks

Very high-speed networks using ATM pose a new set of challenges in congestion control. This results partly from the fact that the huge legacy of low-speed networks will continue to coexist with emerging high-speed links for quite some time. This heterogeneity of networks and the resulting mismatch of link speeds aggravate the congestion problem, which arises whenever the input rate exceeds the available link capacity. Still more challenging issues arise in the congestion control of available bit rate (ABR) traffic, that is, data traffic, because it cannot be predicted in advance. The objective of ABR service is to use the otherwise unused capacity of the network. Therefore, congestion control for ABR service must satisfy certain requirements:
ABR traffic should never compromise the cell loss probability of constant bit rate (CBR) or variable bit rate (VBR) traffic. (CBR traffic is, for example, voice; VBR traffic is, for example, compressed video.)

The network should never refuse a guaranteed-traffic (CBR or VBR) connection because of the ABR traffic it is supporting.

All users of the ABR service should have equal access to the bandwidth that is available.

To satisfy these requirements for ABR traffic, several congestion control schemes have been proposed in the ATM Forum. However, none of these schemes satisfies the requirements completely; each has its own advantages and disadvantages. Therefore, more research is required to develop an efficient congestion control scheme for ATM networks using the ABR service class.

ATM Concepts and Protocol Structure

To understand how congestion control operates within an ATM network, it is necessary to first review basic ATM concepts and protocol structure. ATM was chosen as the switching and multiplexing technique for B-ISDN. ATM is based on a fixed-size, virtual-connection-oriented packet (or cell) switching methodology. A cell consists of 5 bytes of header information and a 48-byte payload; ATM breaks all traffic into these 53-byte cells. The header contains control information such as identification, cell loss priority, routing, and switching information. Exhibit 1 illustrates the ATM cell structure.

Exhibit 1. ATM Cell Structure

Each field in the ATM cell header defines the functionality of the ATM layer. The generic flow control (GFC) field is used at the user-network interface (UNI) to control the amount of traffic entering the network. The virtual path identifier/virtual channel identifier (VPI/VCI) fields are used for channel identification and simplification of the multiplexing process. The payload type indicator (PTI) field distinguishes between user cells and control cells.
The cell loss priority (CLP) field indicates whether a cell may be discarded by a switch during periods of network congestion. In connection-oriented ATM networks, communication from higher layers is adapted to the lower ATM-defined layers, which in turn pass the information on to the physical layer for transmission over a selected physical medium. The protocol reference model is divided into three layers: the ATM adaptation layer (AAL), the ATM layer, and the physical layer. The physical layer defines a transport method for ATM cells between two ATM entities. The ATM layer mainly performs switching and multiplexing functions. The AAL defines a set of service classes to fit the needs of different user requests and converts incoming user requests into ATM cells for transport. Exhibit 2 illustrates the ATM protocol structure, which is based on standards of the International Telecommunication Union.
Quality of Service (QOS) Attributes

Two end systems connected to an ATM network must inform all intermediate switches about their service requirements and traffic parameters before they can communicate across the network. This is similar to telephone networks, where a circuit is set up from the calling party to the called party before any conversation can take place. In ATM networks, such circuits are called virtual circuits or virtual connections (VCs). These connections allow the network to guarantee the QOS by limiting the number of VCs. Congestion control ensures that network resources are divided fairly and efficiently among competing connections. When setting up a connection on an ATM network, a user declares key service requirements and traffic parameters. These parameters, as related to the desired QOS, are explained as follows:

Peak cell rate (PCR) is the maximum instantaneous rate at which a source will transmit cells. PCR is the inverse of the minimum inter-cell interval and is illustrated in Exhibit 3.

Minimum cell rate (MCR) is the minimum rate desired by a source. Most VCs will ask for a low MCR, a value close to zero; otherwise, the connection request may be denied if sufficient bandwidth is not available.

Cell transfer delay (CTD) is the delay experienced by a cell between the network entry and exit points. It includes queueing delays at various intermediate switches, service times at queueing points, and propagation delay.

Sustained cell rate (SCR) is the average rate as measured over a long interval.

Burst tolerance (BT) is the maximum burst size that can be sent at the peak cell rate.

Cell delay variation (CDV) is a measure of the variance of the cell transfer delay (CTD). High variation implies larger buffering for delay-sensitive traffic such as voice and video.

Cell loss ratio (CLR) is the percentage of cells that are lost in the network due to error and congestion.
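For illustration, the rate parameters above can be derived from a cell-arrival trace. The following sketch (not from the article; the trace values are hypothetical) computes PCR as the inverse of the minimum inter-cell interval and SCR as the average rate over the whole interval:

```python
# Sketch (hypothetical trace): deriving PCR and SCR from a list of
# cell-arrival times in seconds, using the definitions above.

def peak_cell_rate(arrival_times):
    """PCR is the inverse of the minimum inter-cell interval (cells/s)."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return 1.0 / min(gaps)

def sustained_cell_rate(arrival_times):
    """SCR is the average rate measured over the whole interval (cells/s)."""
    duration = arrival_times[-1] - arrival_times[0]
    return (len(arrival_times) - 1) / duration

# Hypothetical trace: cells arriving at 0, 1 ms, 2 ms, and 10 ms.
trace = [0.0, 0.001, 0.002, 0.010]
print(peak_cell_rate(trace))       # ~1000 cells/s (minimum gap = 1 ms)
print(sustained_cell_rate(trace))  # ~300 cells/s (3 cells over 10 ms)
```

The gap between PCR and SCR is what the burst tolerance (BT) bounds: how long a source may stay at PCR before falling back toward SCR.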
During congestion, the network first drops cells that have the cell loss priority bit set (CLP = 1) in the ATM header, because the loss of a CLP = 0 cell is more harmful to the operation of the application. The cell loss ratio can be calculated using the following equation:

CLR = Lost Cells / Transmitted Cells

Types of Traffic and Congestion Control Strategies

ATM was designed to support a number of different types of traffic, including voice, video, and data. This traffic has been categorized according to its behavior and falls into two major classifications: guaranteed traffic and best-effort traffic.
Exhibit 3. Peak Cell Rate and Inter-cell Time

Guaranteed and Best-Effort Traffic. Guaranteed traffic includes constant bit rate (CBR) and variable bit rate (VBR) traffic; best-effort traffic includes available bit rate (ABR) traffic. VBR traffic is characterized by compressed video such as Moving Picture Experts Group (MPEG) video. These compression schemes involve sending an initial frame containing all the data and then sending updates to that frame; the updates contain only the information that has changed since the initial frame was sent. CBR traffic is used for emulating circuit switching; an example application that can use CBR is voice. ABR traffic is characterized by applications that are not as sensitive to delay as voice and video. Although the standard does not require the cell transfer delay and cell loss ratio to be guaranteed or minimized, it is desirable for switches to minimize delay and loss as much as possible. For ABR traffic, a source is required to control its rate depending on the congestion state of the network. Most of today's computer applications fall into this category; applications that can use ABR include file transfer and electronic mail.

Open- and Closed-Loop Control. For these various traffic types, two congestion control strategies for ATM networks have been proposed in the ATM Forum: open-loop control and closed-loop control. In open-loop control, each connection's usable bandwidth is limited by the notion of a traffic contract. To assign bandwidth to the connection, each end system has to declare its traffic parameters to the network at connection setup time. Once the connection request is admitted, its QOS is guaranteed throughout the session. Because a lack of network resources may cause a newly requested connection to be rejected, open-loop control is sometimes referred to as preventive congestion control.
This sort of congestion control scheme can be applied to CBR and VBR traffic. Open-loop control, however, is insufficient for data communications, because each connection can never emit cells exceeding its negotiated rate, not even when there is unused bandwidth in the network. Furthermore, the bandwidth requirements for data traffic are not likely to be known at connection setup time. Instead, the cell transmission rate can be adjusted using feedback information that indicates the congestion status of the network. These are the reasons that closed-loop rate control has been chosen for data traffic and is being applied to ABR service in the ATM Forum. Closed-loop control dynamically regulates the cell emission process of each connection by using feedback information from the network, and it is therefore especially suitable for data transfer service. It can also be called reactive congestion control. The various types of traffic, their applications, and the congestion control strategies applied to each traffic type are further illustrated in Exhibit 4.

Exhibit 4. Types of Traffic, Applications, and Congestion Control Strategies

Type of Traffic      Application   Class of Congestion Control Strategy
Best effort (ABR)    Data          Closed-loop control: feedback mechanism; reactive congestion control
Guaranteed (VBR)     Video         Open-loop control: no feedback mechanism; preventive congestion control
Guaranteed (CBR)     Voice         Open-loop control: no feedback mechanism; preventive congestion control

Key: CBR = constant bit rate; VBR = variable bit rate; ABR = available bit rate

Congestion Control Schemes

A congestion control loop between the network and the user is required to support ABR service. In closed-loop control, the network requires a feedback mechanism to tell each source how much data to send. Some of the feedback information is carried in the headers of data cells, and some in separate control cells. The data and control traffic of a typical closed-loop congestion control scheme is illustrated in Exhibit 5.

Exhibit 5. Typical Closed-Loop Congestion Control Scheme

In this example, two virtual connections (VC1 and VC2) share the link between switches 1 and 2. For the implementation of closed-loop control, two kinds of schemes have been debated in the ATM Forum. They are classified as credit-based or rate-based.

Previous Congestion Control Schemes

To design an efficient congestion control scheme, it is helpful to understand previous approaches and their limitations. In this section, congestion control is briefly described by classifying protocols as open-loop or closed-loop. Before discussing the two main approaches seriously considered by the ATM Forum, other methods that were proposed to solve the congestion control problem will be examined.

Fast Resource Management. A fast resource management method was proposed by France Telecom. Before actually sending data cells, a source sends a resource management (RM) cell in order to request the desired bandwidth. When a switch receives an RM cell from the source, it passes the RM cell on to the next switch if it can satisfy the request. A switch simply drops the RM cell if it cannot grant the request; the source then resends the request when it times out. Upon receiving an RM cell, the destination returns the RM cell to the source, which can then
transmit the data cells. Therefore, the data has to wait at the source for at least one round-trip delay, even if the network is idle. To avoid this delay, an immediate transmission (IT) mode was proposed in which a data cell is transmitted immediately following an RM cell. If a switch cannot satisfy the request, it simply drops the RM cell and the data cell and sends an indication to the source. The main problem with this method is excessive delay during normal operation or excessive loss during congestion.

Early Packet Discard. The early packet discard method is based on the observation that a packet consists of several cells. The method uses a bit in the cell header to indicate end of message (EOM). When its queues start getting full, a switch drops all cells of a VC until the EOM marker is seen, because it is better to drop all the cells of one packet than to randomly drop cells belonging to several different packets. This method can be used without any standardization because it does not require any inter-switch or source-switch communication. However, the method may not be fair, in the sense that the cell arriving at a full buffer may not belong to the VC causing the congestion.

Delay-based Rate Control. A delay-based rate control method requires the source to monitor the round-trip delay in order to control congestion. In this method, a source periodically sends an RM cell that contains a timestamp. When the destination receives an RM cell from the source, it returns it to the source. Upon receiving the RM cell from the destination, the source uses the timestamp to measure the round-trip delay and to deduce the level of congestion. This approach has the advantage of requiring no explicit feedback from the network; therefore, it will work even if the path contains non-ATM networks.

Link Window with End-to-End Binary Rate.
The link window with end-to-end binary rate method uses window flow control on every link and explicit forward congestion indication (EFCI)-based binary end-to-end rate control. The method is a merger of a rate-based scheme with a credit-based scheme. It is scalable in terms of the number of VCs because the window control is per-link, not per-VC. It also guarantees zero cell loss, as in the credit-based scheme. However, neither the credit-based nor the rate-based camp found it acceptable, because it contained elements from both camps.

Fair Queueing with Rate and Buffer Feedback. The fair queueing with rate and buffer feedback method requires the switches to compute each VC's fair share of the bandwidth and to monitor each VC's queue length. A source periodically sends an RM cell to determine the bandwidth and buffer usage at its bottleneck. Upon receiving an RM cell from the source, a switch computes the VC's fair share, which is computed as the inverse of the interval between the cell's arrival and its transmission; the minimum of the fair share at this switch and the shares reported by previous switches is carried in the RM cell. The switch also monitors each VC's queue length, and the maximum of the queue lengths along the path is carried in the same way. Each switch implements fair queueing, which consists of maintaining a separate queue for each VC and computing the time at which each cell would finish transmission if the queues were served round-robin, one bit at a time. The cells are scheduled for transmission in this computed time order. The main problem is that
the method requires per-VC (i.e., fair) queueing in the switches, which is considered too expensive with current hardware technology.

Credit-based Flow Control

The credit scheme proposed to the ATM Forum sends information about the available buffer space independently on each link of the network and is thus a link-by-link window flow control approach. There are two phases in flow-controlling a VC: a buffer allocation phase and a credit control phase. In the buffer allocation phase, a certain number of cell buffers is reserved for each VC at the receiving end of each link. In the credit control phase, the sending end of each link maintains a nonnegative credit balance, to ensure no overflow of the allocated buffer at the receiving end of the link, and decrements its current credit balance for the VC by one for every data cell transmitted. Specifically, the sending end of each link needs to receive credits for the VC from the receiving end of the link before forwarding any data cells over the link. At various times, the receiving end sends credits to the sending end indicating the availability of buffer space for receiving data cells of the VC, thus avoiding congestion. After having received credits, the sending end is eligible to forward some number of data cells of the VC to the receiving end, according to the received credit information. Therefore, if a VC runs out of credit on a particular link, it stops transmitting cells. As cells are removed from the buffer at the receiving end, credits for each VC are returned to the sending end. Exhibit 6 illustrates a credit scheme in a network with one congested queue, where VC1 has run out of credit on the inter-switch link. Each VC has a separate buffer at each switch.

Exhibit 6. A Credit Scheme with Per-VC Buffering Switches

The per-VC, link-by-link flow control mechanism can prevent cell loss because a connection cannot send cells unless it has credit.
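The credit bookkeeping at the sending end of one link can be sketched as follows (a simplified illustration, not the ATM Forum proposal itself; all names are hypothetical). The sender may forward a data cell only while its credit balance is positive, and the receiver returns credits as cells drain from its per-VC buffer:

```python
# Sketch (hypothetical names): per-VC credit bookkeeping at the sending
# end of one link, as described above.

class CreditSender:
    def __init__(self, initial_credits):
        # Credits equal the buffer cells reserved for this VC
        # at the receiving end of the link.
        self.credits = initial_credits

    def can_send(self):
        return self.credits > 0

    def send_cell(self):
        if not self.can_send():
            raise RuntimeError("VC out of credit: must wait for credits")
        self.credits -= 1  # one reserved buffer slot is now occupied

    def receive_credits(self, n):
        self.credits += n  # receiver freed n buffer slots

sender = CreditSender(initial_credits=2)
sender.send_cell()
sender.send_cell()
print(sender.can_send())    # False: VC has run out of credit, stops sending
sender.receive_credits(1)   # receiver drained one cell from its buffer
print(sender.can_send())    # True: transmission may resume
```

Because the sender can never hold more cells in flight than the receiver has reserved buffers, overflow (and hence cell loss) cannot occur on the link.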
Among other advantages, such as maximal link utilization and fairness, it also relieves transient congestion effectively using a link-by-link fast feedback mechanism. However, per-VC link-by-link flow control requires per-VC buffering at the switches, which results in considerable hardware complexity. In addition, the requirements imposed on switch architectures by the credit schemes limit flexibility in vendor implementation. These are the main reasons why the ATM Forum chose a rate-based scheme, instead of a credit-based scheme, for the support of ABR service in the ATM standards.

Rate-based Flow Control

A rate-based scheme uses feedback information from the network to specify the maximum rate at which each source can emit cells into the network on every VC. The idea of directly controlling the rate of a traffic source was first introduced into the realm of data networking protocols in the American National Standards Institute frame relay standard, in which two bits in the header of a frame relay packet are optionally used by the network to indicate congestion. To adjust window sizes, the DECnet protocol uses the Ramakrishnan-Jain algorithm, in which a binary bit is used to decide, at fixed intervals, whether to increase the current window size by a fixed amount or to decrease it by an amount proportional to the current window size. This results in a linear increase or an exponential decrease of the window size as a function of time. When a network
is congested, it signals the congestion in the forward direction through a single-bit congestion indicator in each packet. Upon receiving the congestion indicator, the destination copies it into acknowledgment packets and sends them to the source, and the source uses this information to adjust its window size. The Ramakrishnan-Jain algorithm is therefore an end-to-end feedback loop. In a rate-based scheme, there are three kinds of feedback mechanisms: negative polarity feedback, positive polarity feedback, and bipolar feedback. Negative polarity feedback requires sending RM cells on decrease but not on increase; conversely, positive polarity feedback requires sending RM cells on increase but not on decrease. If RM cells are sent for both increase and decrease, the algorithm is called bipolar feedback. The following sections discuss several key developments of the rate-based scheme for the support of ABR service, including their advantages and disadvantages.

Forward Explicit Congestion Notification (FECN). Forward explicit congestion notification (FECN) is an end-to-end congestion control scheme in which most of the computational complexity resides in the end systems. To convey congestion information in the forward direction, the feedback mechanism uses the explicit forward congestion indication (EFCI), which is carried in the payload type indicator field of the ATM cell header. A source sends EFCI = 0 in every data cell transmitted. If the queue length of a switch exceeds a certain threshold, the switch is considered congested. When a switch becomes congested, it sets EFCI = 1 in all cells of each VC in the forward direction to indicate the congestion. Upon receiving a data cell with EFCI = 1, the destination generates and returns an RM cell to inform the source of the congestion status. The source reduces its cell rate when it receives the RM cell from the destination.
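The switch and destination behavior just described can be sketched as follows (a simplified illustration; the threshold value and the cell representation are hypothetical, not from any standard):

```python
# Sketch (hypothetical threshold and names): FECN switch-side EFCI
# marking and destination-side RM feedback, as described above.

CONGESTION_THRESHOLD = 50  # cells queued; illustrative value only

def forward_cell(cell, queue_length):
    """A congested switch sets EFCI = 1 in forward data cells."""
    if queue_length > CONGESTION_THRESHOLD:
        cell["efci"] = 1
    return cell

def destination_feedback(cell):
    """The destination returns an RM cell when it sees EFCI = 1."""
    return {"rm": "congestion"} if cell["efci"] == 1 else None

cell = forward_cell({"efci": 0}, queue_length=80)
print(cell["efci"])                # 1: the switch is congested
print(destination_feedback(cell))  # RM cell to be returned to the source
```

On receiving such an RM cell, the source reduces its rate; in the absence of RM cells, its timer-driven increase toward the peak cell rate is what creates the collapse risk discussed next.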
The data and control traffic of FECN in a congested network with output buffer switches are illustrated in Exhibit 7.

Exhibit 7. FECN with Output Buffer Switches

Switch 2 has a queue experiencing congestion, so it marks the EFCI state of all cells on VC1 to indicate the congestion status. Destination 1 returns an RM cell to notify source 1 of the congestion, and upon receiving the RM cell, source 1 starts to reduce its cell rate. In the absence of an RM cell within a predetermined time period, the source increases its cell rate until it reaches the peak cell rate. A time interval, the RM interval (RMI), is defined at the destination, and only one RM cell is allowed to be sent per RMI. The source is also provided with an interval timer known as the update interval (UI). This scheme uses negative polarity feedback and an interval-based approach. It may result in overall network congestion collapse if the congestion notification cells returning to the source experience extreme congestion in the backward direction, because every source whose timer expires without receiving an RM cell will attempt to reach the peak cell rate and overload the queues.

Backward Explicit Congestion Notification (BECN). Backward explicit congestion notification (BECN) returns the congestion notification directly from the point of congestion to the source. In this scheme, a source sends EFCI = 0 in every data cell transmitted. When a switch becomes congested, it sets EFCI = 1 and returns a BECN RM cell to the source to report the congestion. The source reduces its cell rate on receipt of the RM cell. If no
BECN RM cell is received within a predetermined time period, the cell rate increases until it reaches the peak. This scheme also uses negative polarity feedback and an interval-based approach, as in FECN; therefore, it still has the overall network congestion collapse problem. An advantage of BECN is its faster response to congestion compared with FECN. In addition, BECN is more robust against faulty or noncompliant end systems because the network itself generates the congestion notification. However, BECN requires more hardware in the switches, not only to generate the BECN RM cells but also to filter the congestion information. This filtering is necessary to prevent the generation of excessive RM cells. Exhibit 8 illustrates the data and control traffic of BECN in a congested network, where congestion occurs on VC1 at switch 2.

Exhibit 8. BECN with Output Buffer Switches

Proportional Rate Control Algorithm (PRCA). The proportional rate control algorithm (PRCA) is intended to remedy the problem of network congestion collapse with two major modifications: a positive polarity feedback approach and a counter-based approach (i.e., no interval timers). In PRCA, the feedback mechanism uses the EFCI state of the data cells. A source sets the EFCI bit in all data cells except the first of every N data cells, and it continually decreases its cell rate for every data cell transmitted. The parameter N is predetermined and affects the response time to congestion and the backward link utilization. When a destination receives a data cell with EFCI = 0, it immediately sends an RM cell to the source; the destination takes no action when the EFCI bit has been set by an intermediate switch because of congestion. A source increases its cell rate for a VC only when it receives an RM cell from the destination.
Otherwise, it continually reduces its cell rate, because no source can increase its cell rate unless it has received an RM cell from the destination. The increments and decrements in the cell rate of each VC are proportional to the current cell rate, thus eliminating the need for the timers and timer-value selection of previous rate-based proposals. PRCA allows both FECN-like and BECN-like operation, because a switch experiencing congestion can change the EFCI state from 0 to 1 or remove RM cells in the backward direction. The problem of network congestion collapse associated with the FECN and BECN schemes is solved in PRCA. Exhibit 9 illustrates the data and control traffic of PRCA in a congested network where VC1 is congested at switch 2. Certain problems, however, remain even in PRCA. For example, PRCA requires a considerable amount of buffering when there is a large number of active connections, because the decrease of the aggregated input rate at the switch is too slow.

Exhibit 9. PRCA with Output Buffer Switches

PRCA also suffers from the ACR beat-down problem: a VC that goes through more congested links has its data cells marked more often than those of other VCs going through fewer congested links. Consequently, such a VC ends up with a lower allowed cell rate (ACR) than the others. For example, if p is the probability of the bit being set on one hop, then the probability of it being set for an n-hop VC is 1 - (1 - p)^n. Therefore, the undesirable effect of VC starvation grows with the
number of congested links on which a VC transmits its cells. The ACR beat-down problem causes extreme unfairness among VCs in PRCA.

Explicit Rate Control Algorithm (ERCA). An explicit rate control algorithm (ERCA) was proposed to address the problems of the earlier binary feedback schemes, in which a single bit is used only to tell the source whether it should go up or down. Binary feedback was designed in 1986 for connectionless networks, in which the intermediate nodes had no knowledge of flows or their demands; in connection-oriented ATM, however, the switches know exactly who is using the resources and what the flow paths are. Binary feedback schemes were also designed for window-based controls and are too slow for rate-based controls in high-speed networks. In window-based control, a slight difference between the current window and the optimal window shows up as a slight increase in queue length; in rate-based control, a slight difference between the current rate and the optimal rate shows up as a continuously increasing queue length, so reaction times must be fast. ERCA can ensure that a source gets to the optimal operating point within a few round trips. ERCA uses a positive feedback approach and, like PRCA, a counter-based approach. In ERCA, each source periodically sends an RM cell containing its current cell rate (CCR), desired rate (DR), and a reduced (R) bit. When a switch receives an RM cell from the source, it monitors the VC's rate and computes a fair share using an iterative procedure:

Fair share = (Link Bandwidth - Bandwidth of Underloading VCs) / (Number of VCs - Number of Underloading VCs)

If a VC's DR is more than the fair share, the switch reduces the DR field and sets the reduced bit in the RM cell; a VC's DR is granted if it is less than the fair share. Upon receiving an RM cell from the source, the destination returns the RM cell to the source.
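The fair-share rule above can be sketched in code (a single pass is shown; in practice the switch iterates until the set of underloading VCs stabilizes, and the link and VC numbers here are hypothetical):

```python
# Sketch (hypothetical numbers): the ERCA fair-share computation and
# DR reduction described above. "Underloading" VCs ask for less than
# their fair share; their unused bandwidth is redistributed.

def fair_share(link_bandwidth, underloading_bw, n_vcs, n_underloading):
    return (link_bandwidth - underloading_bw) / (n_vcs - n_underloading)

def process_rm_cell(desired_rate, share):
    """Reduce DR and set the reduced bit if DR exceeds the fair share."""
    if desired_rate > share:
        return share, True      # (new DR, reduced bit set)
    return desired_rate, False  # DR granted as requested

# 155 Mb/s link, 4 VCs, one underloading VC using only 5 Mb/s.
share = fair_share(155.0, 5.0, 4, 1)
print(share)                         # 50.0 Mb/s for each remaining VC
print(process_rm_cell(80.0, share))  # (50.0, True): DR cut, bit set
print(process_rm_cell(30.0, share))  # (30.0, False): DR granted
```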
A source then adjusts its rate to that indicated in the RM cell. If the reduced bit is clear, the source may demand a higher desired rate in the next RM cell; if the bit is set, the source uses the current rate as the desired rate in the next RM cell. The data and control traffic of ERCA in a congested network, where VC1's DR exceeds the fair share at switch 2, are illustrated in Exhibit 10. ERCA has several advantages. Policing is straightforward, because the entry switches can monitor the returning RM cells and use the rate directly in their policing algorithms. The system reaches the optimal operating point quickly because of fast convergence time; the initial rate has little impact. ERCA is also robust against errors or loss of RM cells, because the next correct RM cell brings the system back to the correct operating point. However, ERCA still employs per-VC accounting, which is considered very expensive with current hardware technology.

Enhanced Proportional Rate Control Algorithm (EPRCA). An enhanced proportional rate control algorithm (EPRCA) is intended to solve the ACR beat-down problem and to combine the previously separate rate-based schemes through two enhancements: intelligent marking and explicit rate setting.
This scheme is a merger of PRCA with ERCA. It adopts intelligent marking to achieve better fairness among connections without the need for per-VC queueing or accounting, because per-VC accounting adds control complexity at the switch, even though fairness could be achieved if each connection were maintained separately at the switch. It also adopts explicit rate setting, in which the switch explicitly reduces the rate of individual connections; the switch thus takes responsibility for determining the cell transmission rate of a selected connection. Exhibit 11 illustrates EPRCA with output buffer switches.

In EPRCA, a source sends data cells with EFCI set to zero. After every n data cells, it sends an RM cell that contains its current cell rate (CCR), an explicit rate (ER), and a congestion indication (CI) bit. The source initializes the ER field to its peak cell rate (PCR) and sets the CI bit to zero. When a switch (congested or not) receives an RM cell from the source, it computes a mean allowed cell rate (MACR) and a fair share, which is a fraction of the MACR average:

MACR = (1 - α) * MACR + α * CCR
Fair share = SW_DPF * MACR

Here, α is the exponential averaging factor and SW_DPF is a multiplier (called the switch down-pressure factor) set close to, but below, 1. The suggested values of α and SW_DPF are 1/16 and 7/8, respectively. When a congested switch receives an RM cell from the destination, it reduces the ER field in the RM cell to the fair share. The destination monitors the EFCI bit in the data cells; if the last seen data cell had the EFCI bit set, the destination marks the CI bit in the returning RM cell. In addition to setting the explicit rate, a switch can also set the CI bit in the returning RM cells if its queue length exceeds a certain threshold. A source continuously decreases its cell rate by computing:

ACR = ACR * RDF

where RDF is the rate reduction factor.
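The switch-side EPRCA computations above can be sketched as follows, using the suggested values α = 1/16 and SW_DPF = 7/8 (the CCR value fed in is hypothetical):

```python
# Sketch of the EPRCA switch computations above, with the suggested
# constants alpha = 1/16 and SW_DPF = 7/8.

ALPHA = 1.0 / 16.0   # exponential averaging factor
SW_DPF = 7.0 / 8.0   # switch down-pressure factor, just below 1

def update_macr(macr, ccr):
    """MACR = (1 - alpha) * MACR + alpha * CCR, per RM cell received."""
    return (1.0 - ALPHA) * macr + ALPHA * ccr

def fair_share(macr):
    """The fair share is a fixed fraction of the running MACR average."""
    return SW_DPF * macr

macr = 100.0
macr = update_macr(macr, ccr=60.0)  # an RM cell reports CCR = 60
print(macr)              # 97.5: average pulled slightly toward the CCR
print(fair_share(macr))  # 85.3125: ER is reduced to this if it is larger
```

Because α is small, MACR tracks a long-run average of the reported rates, so a single over- or under-reporting VC cannot swing the fair share much.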
When a source receives an RM cell from the destination, it increases its rate by the additive increase rate (AIR) if the CI bit is clear:

If CI = 0 Then New ACR = Min(ACR + AIR, ER, PCR)

If the CI bit is set, the ACR is left unchanged. EPRCA allows EFCI switches, binary feedback switches, and explicit feedback switches on the same path, and therefore gives network vendors considerable flexibility. The main problem in EPRCA is its switch congestion detection algorithm, which is based on queue length thresholds: if the queue length exceeds a certain threshold, the switch is said to be congested, and if it exceeds another, higher threshold, it is said to be very highly congested. This method of congestion detection has been shown to result in unfairness; in particular, a source that starts late tends to get lower throughput than one that started early.

The Intelligent Congestion Control Algorithm. The intelligent congestion control algorithm was proposed to resolve the ACR beat-down problem. The key idea of this scheme is for each congested switch to estimate the optimal
cell rate on each VC with a small number of computations and without the need for per-VC queueing or accounting. This estimated rate is used to adjust the cell rates of the sources through positive feedback mechanisms. More specifically, each source periodically sends an RM cell containing its current allowed cell rate (ACR) and an explicit rate (ER), which is the maximum allowed cell rate of the source. For every data cell transmitted, a source continually decreases its ACR by the additive decrease rate (ADR) until it receives an RM cell from the destination.

A variable, the modified allowed cell rate (MACR), holds the estimated optimal cell rate for each queue of a switch. When a noncongested switch receives an RM (ACR, ER) cell from the source, it replaces MACR by MACR + α × (ACR - MACR). When a congested switch receives an RM cell from the source, it replaces MACR by MACR + β × (ACR - MACR), but only if ACR is smaller than MACR. Using this first-order filter, an intermediate switch, congested or not, iteratively estimates the optimal cell rate for each VC given the ACR of each VC.

When a destination receives an RM (ACR, ER) cell, it returns the cell to the source. When a congested switch receives a returning RM (ACR, ER) cell, it takes one of two actions depending on its congestion status. One possibility is that the switch replaces ER in the RM cell by min(ER, γ × MACR) if the current queue length is greater than a certain threshold. The other is that the switch replaces ER in the RM cell by min(ER, MACR) if ACR is greater than MACR. Upon receiving an RM (ACR, ER) cell from the destination, a source computes a new ACR according to the rate information in the cell. The data and control traffic of the scheme in a congested network, where VC1 is congested at switch 2, are illustrated in Exhibit 12 (Intelligent Congestion Control with Output Buffer Switches).
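The switch-side estimation and ER marking just described can be sketched as follows. This is an illustrative sketch only: the default α, β, and γ values and the queue threshold are assumptions for the example, not values taken from the proposal.

```python
def update_macr(macr, acr, congested, alpha=1.0 / 16, beta=1.0 / 16):
    """First-order filter: pull the optimal-rate estimate toward the ACRs
    observed in forward RM cells."""
    if not congested:
        return macr + alpha * (acr - macr)
    if acr < macr:
        # Under congestion, only ACRs below the estimate pull it down.
        return macr + beta * (acr - macr)
    return macr


def mark_er(er, acr, macr, queue_len, threshold, gamma=0.875):
    """Congested switch marking a returning RM cell: cap ER at gamma * MACR
    when the queue is long, or at MACR when the source's ACR exceeds the
    estimate; otherwise leave ER unchanged."""
    if queue_len > threshold:
        return min(er, gamma * macr)
    if acr > macr:
        return min(er, macr)
    return er
```

For instance, with MACR = 100 and γ = 0.875, a switch whose queue exceeds the threshold caps ER at 87.5, while a merely overspeeding source (ACR above MACR, short queue) is capped at 100.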
This scheme enhances the knowledge held by the switches and gives network vendors considerable flexibility to select a switch architecture with various cell queueing and traffic management options. It also largely resolves the ACR beat-down problem without the need for per-VC queueing or accounting, thus providing fairness among all VCs. However, the scheme spends considerable time computing MACR at the switches, even when a switch is not congested. The response time to congestion is also slow, because an RM cell must be forwarded to the destination and returned to the congested switch before any action is taken to relieve the congestion.

Conclusion

There is considerable interest surrounding high-speed communications and enabling technologies. This article has summarized recent developments in rate-based congestion control for ABR services in asynchronous transfer mode networks, including their advantages and disadvantages. Several congestion control schemes have been proposed in the ATM Forum to meet the requirements of ABR services. However, these schemes are still not well understood and have problems that require additional research. It is important for network designers and engineers to recognize the limitations of the various approaches in order to make informed decisions when considering ATM for their enterprise networks. Traffic management for ATM networks is still in a state of flux, and the development of standards will remain incomplete for several years to come.
Author Biographies

Hyun mee Choi is a PhD student in the department of computer science at North Dakota State University (NDSU) in Fargo, ND. She received her BS in management information systems from Dong Guk University, South Korea, in 1993 and her MS in computer science from NDSU. She can be reached at hchoi@plains.nodak.edu.

Ronald J. Vetter is an associate professor in the mathematical sciences department at the University of North Carolina at Wilmington. He can be reached at vetter@sol.cms.uncwil.edu.