Pipelined Sections: A New Buffer Management Discipline for Scalable QoS Provision


Shun Y. Cheung and Corneliu S. Pencea
Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia

Abstract: The techniques used to provide quality of service in packet switched networks are buffer management and packet scheduling. The first line of defense against abusive flows that transmit an excessive number of packets is the buffer manager. When the number of packets of a flow exceeds a threshold, new arrivals from that flow will be rejected. This threshold must be set to a level that is sufficiently high to provide specific guarantees to the flow, and it must also be minimal so that the buffer manager will detect abusive behavior at the earliest possible moment. A buffer management technique that provides the same guarantees using a lower threshold will be more discriminating because it recognizes flow violations earlier. We present the new Pipelined Sections buffer management technique, which is highly scalable and can provide rate guarantees to leaky bucket constrained flows using very low buffer reservations.

Keywords: Buffer Management, Queueing, Quality of Service, Rate Guarantee

I. INTRODUCTION

Provision of quality of service guarantees in packet switched networks has become increasingly important due to the rapid commercialization of the Internet. The deployment of quality of service (QoS) provisions in the Internet will expedite the deployment of novel commercial services, such as Voice over IP, which have increasingly diverse requirements, and can alleviate the effect of malicious denial of service attacks. To provide service guarantees to flows passing through a switch, the switch must control the amount of resources (e.g., bandwidth, buffers, etc.) that each flow or set of flows is allowed to use. The mechanisms to control resource consumption consist of (1) buffer management techniques and (2) packet scheduling techniques.
A buffer management method is an input control mechanism that determines whether a packet of a flow is to be admitted based on current buffer occupancy. The basic buffer management method allocates a certain amount of buffer space to each flow, and when the packets of a flow occupy all the space allocated, new packet arrivals of this flow are discarded. Other, more sophisticated management schemes can assign priority to packets and push out lower priority packets in the buffer to make space for higher priority arrivals [1]. We limit our investigation to the category of buffer management schemes that make an acceptance decision only at the time of packet arrival. Buffer management is a crucial mechanism for providing QoS guarantees because the lack of such a mechanism allows abusive flows to capture most if not all of the buffer space, resulting in a denial of service situation. In fact, the effectiveness of any packet scheduling method can be compromised if the buffer manager admits an excessive number of rogue packets from non-conformant flows.

A packet scheduling method is an output control mechanism that determines the service order/rate of the packets currently occupying the output buffer. The service order/rate can have a significant impact on packet delay and jitter (variance in the delay). The scheduling discipline used depends on the goal to be achieved. Common goals are meeting a deadline and sharing the bandwidth fairly among the active flows (a flow is active when there are packets of the flow in the buffer). Canonical packet schedulers for meeting a deadline and for fair sharing are Earliest Deadline First (EDF) [10] and Weighted Fair Queueing (WFQ) [2], respectively.

The overhead in providing service guarantees consists of two main components: (1) storage and processing of the state information associated with the service guarantees and (2) the processing cost of making the per-packet admission and scheduling decisions [5].
Because the latter cost is incurred on a per-packet basis, it has the most significant impact on the scalability of the QoS provision mechanism. Packet admission decisions made by buffer management methods typically require only a constant amount of processing and state information [5]. Packet scheduling decisions can be computationally intense. For example, WFQ-like schedulers [2], [3], [4] maintain an ordered queue and have non-constant time decision operations. Packet scheduling affects the queueing delay. Queueing delay can be significant in lower speed networks; however, it is relatively small on very high-speed links. For example, an OC-48 link at 2.4 Gbit/s can empty a 1 MByte buffer in less
1530 IEEE INFOCOM 2001

than 3.5 msec. Also, the large number of packets that need to be switched per unit of time can make non-constant time schedulers prohibitively expensive in a high-speed switch and can compromise its scalability. The FIFO scheduler is commonly used because of its efficiency: the computational overhead is constant and independent of the number of packets in the buffer. A recent study by Guerin et al. [5] has shown that the FIFO scheduler can effectively provide rate guarantees to flows that are rate constrained.

Fig. 1. Example of a multi-section attraction (the Lion King attraction: the Movie Viewing is Section 1, the Puppet Show is Section 2)

In this paper, we present a novel buffer management method for scalable QoS provision that uses multiple pipelined sections. This scheme, which we call Pipelined Sections (PS), has some similarity with the queue management method at Disney World's theme parks. Some Disney attractions, such as the Lion King show in Figure 1, have multiple pipelined sections with similar seating capacity in each section. Visitors move from one section to the next to complete the entire attraction. They enter the first section and exit through the last section of the pipeline. The PS buffer management method has the same logical structure, but with some modifications to ensure that flows receive the service guarantees they requested: some amount of buffer space is reserved in each section for each flow, and a packet from a flow can be entered into any section provided that the reserved buffers in the section have not been exhausted. Only packets in the last section are transmitted. In PS, arriving packets from a given flow are entered into the last section, provided that reserved space is available. If the reserved space in the last section has been exhausted, then the packets are entered into the section before the last, and so on.
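The drain-time figure quoted above for an OC-48 link follows from a one-line computation; the sketch below (the function name is ours) reproduces the example:

```python
# Drain time of a full output buffer on a high-speed link:
# a 1 MByte buffer drained at OC-48 speed (2.4 Gbit/s).
def drain_time_msec(buffer_bytes: float, link_bps: float) -> float:
    """Time (in msec) to empty `buffer_bytes` at `link_bps` bits/sec."""
    return buffer_bytes * 8 / link_bps * 1000.0

# 1 MByte over a 2.4 Gbit/s link: about 3.33 msec, i.e., under 3.5 msec.
print(drain_time_msec(1e6, 2.4e9))
```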
The reserved space in the last section provides a flow with a certain transmission rate guarantee, while the space in the other sections is primarily used to buffer bursty arrivals. Notice that we can use any packet scheduling discipline to transmit the packets in the last section. The PS buffer management can provide guarantees similar to the method in [5], but it uses a significantly smaller buffer reservation. The lower reservation threshold allows the PS buffer manager to detect oversubscription earlier. As a result, packets from non-conformant flows are rejected sooner and the average waiting time in the output queue is reduced significantly.

The paper is organized as follows. Section 2 presents an overview of the related work. We present the PS method in Section 3 and study its performance in Section 4. We conclude the paper in Section 5.

II. RELATED WORK

This paper focuses primarily on buffer management. Because the PS method has a built-in scheduling component, we provide a more comprehensive background that includes both buffer management and packet scheduling techniques.

Buffer management is the function that assigns buffers to contain packets in transit. There are two levels of buffer management in a shared memory switch. At the switch level, the total amount of memory in the switch is partitioned between the various output ports, while at the port level, the amount of memory allocated to a particular port is divided between the flows that share the output port. The partitioning at either level can be static [7], [8] or dynamic [9]. These studies of buffer management techniques focus primarily on the effect of buffer assignment policies on the performance of the switch. The study in [5] shows that buffer management techniques can provide rate guarantees using a simple FIFO scheduler. Guerin et al.
have demonstrated that by reserving σ + ρB/C amount of buffer space, the buffer manager can provide a lossless transmission guarantee to a (σ, ρ) leaky bucket rate constrained flow, where σ is the burst size, ρ is the token rate, C is the data transmission rate and B is the output buffer size. It is also shown that this reservation is necessary to provide the lossless guarantee. However, as we will see later in this paper, the reservation can be reduced by using multiple pipelined sections.¹

It is essential for the buffer manager to reserve a minimal amount of buffer space that can meet the QoS guarantees. For example, suppose that two buffer management methods X and Y can provide the same QoS guarantees to a flow f, and that X and Y reserve 10 KBytes and 20 KBytes of buffer space, respectively. The flow f may be non-conformant and transmit more data than its subscription profile. X will detect oversubscription by f as soon as packets from f occupy more than 10 KBytes of buffer space, while Y will only detect this fact later. There are several

¹The work [5] also contains an extension that uses multiple FIFO queues and an algorithm to assign flows to the various queues to minimize buffer utilization. This algorithm differs from PS. Furthermore, the flow assignment algorithm is an off-line algorithm that requires the (σ, ρ) values of all flows; PS is an on-line method.

adverse effects when the buffer management method is not sufficiently discriminating and allows an excessive number of rogue packets: there will be less buffer space available for storing packets from conformant flows and, as a result, the ability of the switch to make future QoS guarantees is reduced; the average queue length is increased, resulting in a larger queueing delay and possibly larger delay variances (jitter).

Once a packet is admitted by the buffer management method, it is entered into the output buffer for the destination port awaiting its transmission. The transmission order of the packets that are currently in the buffer is determined by the packet scheduling algorithm. The simplest and most commonly used scheduling method is FIFO, whose operational complexity is constant, independent of the number of packets in the output queue. Novel scheduling algorithms have been developed to better provide service guarantees to flows. Two types of scheduling algorithms have received tremendous interest in recent years: (1) EDF-like and (2) WFQ-like schedulers.

The EDF scheduler [10] orders the packets in the output buffer according to their deadline time value so that the packet with the earliest deadline will be serviced first. This service discipline is important for real time traffic. To avoid the computationally costly sort operation, the Rotating Priority Queue (RPQ) technique [11], [12] uses a set of rotating FIFO queues to sort the packets. The RPQ+ queueing technique can approximate the EDF scheduler arbitrarily closely.

The goal of the WFQ-like scheduling algorithms is to achieve fairness among the active flows. The Fluid-flow Fair Queueing (FFQ) discipline, also known as Generalized Processor Sharing (GPS), is a hypothetical packet service discipline which assumes that flows have infinitely small granularity. Data of the active flows are transmitted one bit at a time in a round robin fashion.
This service discipline is not practical, as packets must be transmitted in their entirety. Packet schedulers that approximate the FFQ discipline are called Packet-by-Packet Fair Queueing (PFQ) [3]. The PFQ scheduler first computes the virtual finish time of the packets currently in the output buffer using the FFQ scheduler as a reference and transmits the packet with the smallest virtual finish time. An unweighted and a weighted version of PFQ, called Fair Queueing (FQ) and Weighted Fair Queueing (WFQ), are presented in [6] and [2], respectively. The WFQ scheduler requires O(V) time to schedule a packet, where V is the number of active flows. More efficient scheduling algorithms that approximate the WFQ scheduler have been developed, for example, Self-Clocked Fair Queueing [3] and Leap Forward Virtual Clock [4].

In the remainder of this paper, we will present the PS buffer management method and show that it can provide rate guarantees to leaky bucket constrained flows. We will also compare the buffer reservation, throughput, delay and jitter performance of PS with the method in [5], which can provide similar service guarantees.

III. PIPELINED SECTIONS BUFFER MANAGEMENT

A. Algorithm Description

The PS buffer management partitions the output buffer into n equal buffer sections and assigns a fixed amount of space in each section to each flow. The sections are numbered from 1 to n, with section n being the last section. Packets can be admitted into any section but only packets in the last section will be transmitted. Sections 1 through n-1 serve as holding areas, and the packets and state information from section i will be transferred to section i+1 when section n becomes empty, for 1 <= i <= n-1. We assume that packets have a fixed size (e.g., ATM cells) and that the amount of space reserved in each section is an integral multiple of the packet size. The PS buffer management method can be used in conjunction with any scheduling discipline.
Figure 2 shows the structure of the PS buffer management method using three pipelined sections.

Fig. 2. Structure of the pipelined sections (an arriving packet and the reserved buffer space in Sections 1, 2 and 3)

Let NReserved[f] denote the amount of space reserved in each section for flow f, 1 <= f <= F, where F is the number of flows that requested guarantees. Each section maintains an available buffer space vector NAvail[1..F]. The variable section[i].NAvail[f] represents the amount of available buffer space for flow f in section i, where 1 <= i <= n and 1 <= f <= F. The value of section[i].NAvail[f] is initially equal to NReserved[f] and is updated when packets enter and leave the switch. Each section has an associated state, which can be open or closed, and the packet transmission procedure depends on the current state of section n. Each section is initially open, and section i, 2 <= i <= n, will become closed when some flow has exhausted all its reserved buffer space in section i and must use section i-1 to store some of its data. In particular, if section n is closed, then some flow is currently transmitting above its guaranteed rate. In response, the PS method will store new arrivals of such flows in the lower sections, which receive a lower priority service. The arrivals of the other flows will continue to be entered into section n as long as they do not exceed their reserved amount, and they will receive speedier service.

The packet transmission procedure is shown in Figure 3. The head of the line packet² p from flow f in section n is selected for transmission when the transmission link becomes idle. When the switch completes the transmission of p, the amount of available reserved space section[n].NAvail[f] for flow f in section n is increased by the packet length p.length if section n is open. However, the variable section[n].NAvail[f] is not increased if section n is closed. (It will, however, be decreased when a packet from f is entered into section n.) Therefore, when section n becomes closed, the PS method will only admit into that section packets from flows that have not yet exhausted their reservation for section n. The number of packets that will be admitted after the closing of section n is bounded, and section n is guaranteed to empty at some later point in time. When section n becomes empty, the data and state information from section i will be copied into section i+1, for 1 <= i <= n-1, and section 1 will be initialized to an empty section. We call this operation a section rotation. Notice that if section n-1 is closed when the sections are rotated, then the state of section n will remain closed after the rotation. This situation can occur when some flow is very aggressive/bursty and some of its packets are buffered in section n-2. We will see later that the implementation of the section rotation operation does not require any packet copy operation; we use the copy description only to illustrate the logical operation of PS.
In Figure 3, the variable s_f, 1 <= f <= F, denotes the index of the section that was used to buffer the last arrival from flow f and is initially equal to n, for every f. These variables are used by the packet admission procedure (discussed next) to determine the section in which an arriving packet will be entered. When the sections are rotated, the index s_f must be increased by one, except when s_f = n. The variables s_f are not used by the packet transmission procedure.

    p := departing packet of flow f;
    if (section[n] is open)
        section[n].NAvail[f] += p.length;
    else  /* Section n is closed */
        if (p is the last packet in section n) {
            /* Rotate the sections */
            for (i = n-1; i >= 1; i--)
                Copy all packets and state information
                    from section i to section i+1;
            /* Initialize empty section 1 */
            for (f = 1; f <= F; f++)
                section[1].NAvail[f] = NReserved[f];
            /* Update the indices s_f used by
               the packet admission procedure */
            for (f = 1; f <= F; f++)
                s_f = min(s_f + 1, n);
        }

Fig. 3. PS Packet Transmission Algorithm

The packet admission algorithm is shown in Figure 4. A new packet arrival p from flow f is stored in its designated arrival section d_f, which is equal to s_f if section s_f has sufficient available buffer space to contain the new packet. Otherwise, an attempt is made to store the packet in the next available section, s_f - 1. The new packet is admitted into the designated arrival section d_f if d_f >= 1, and it is rejected (discarded) if d_f = 0. Furthermore, if d_f differs from s_f (i.e., the current arrival and the previous arrival are stored in different sections), then section s_f will be marked as closed. Notice that section n-1 can be closed while some flow has some data stored in section n-2. In this case, after a section rotation, the state of section n will also be closed.

    p := arriving packet of flow f;
    s_f := index of section used to admit the last packet of flow f;
    if (p.length <= section[s_f].NAvail[f])
        d_f := s_f;
    else
        d_f := s_f - 1;
    if (d_f > 0) {
        Admit p into section d_f;
        section[d_f].NAvail[f] -= p.length;
        if (d_f != s_f) {
            section[s_f].State = CLOSED;
            s_f := d_f;
        }
    } else
        Reject p;

Fig. 4. PS Packet Admission Procedure

²This packet depends on the packet scheduling method used.
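As a concrete illustration of the procedures in Figures 3 and 4, the following is a much simplified model of our own (unit-length packets, Python naming, no explicit scheduler), showing only the admission cascade, the closing of sections and the section rotation:

```python
class PipelinedSections:
    """Simplified sketch of PS buffer management with fixed-size packets.

    Sections are indexed 1..n; section n is the transmitting section.
    reserved[f] is the per-section reservation of flow f, in packets.
    """
    def __init__(self, n, reserved):
        self.n = n
        self.reserved = dict(reserved)
        # navail[i][f]: available reserved space of flow f in section i
        self.navail = [None] + [dict(reserved) for _ in range(n)]
        self.closed = [False] * (n + 1)               # closed[i], section i
        self.queue = [None] + [[] for _ in range(n)]  # packets per section
        self.s = {f: n for f in reserved}             # s_f of Figure 4

    def admit(self, f):
        """Figure 4: try section s_f, then s_f - 1; otherwise reject."""
        d = self.s[f] if self.navail[self.s[f]][f] >= 1 else self.s[f] - 1
        if d < 1:
            return False                              # reject the packet
        self.queue[d].append(f)
        self.navail[d][f] -= 1
        if d != self.s[f]:                            # dropped down a section:
            self.closed[self.s[f]] = True             # close the one above
            self.s[f] = d
        return True

    def transmit(self):
        """Figure 3: send one packet from section n; rotate if it drains."""
        if not self.queue[self.n]:
            return None
        f = self.queue[self.n].pop(0)
        if not self.closed[self.n]:
            self.navail[self.n][f] += 1
        elif not self.queue[self.n]:                  # closed section drained:
            for i in range(self.n, 1, -1):            # rotate i-1 -> i
                self.queue[i] = self.queue[i - 1]
                self.navail[i] = self.navail[i - 1]
                self.closed[i] = self.closed[i - 1]
            self.queue[1], self.closed[1] = [], False
            self.navail[1] = dict(self.reserved)
            self.s = {g: min(s + 1, self.n) for g, s in self.s.items()}
        return f
```

For example, with n = 3 and a one-packet reservation per section, three back-to-back arrivals from a single flow occupy one packet in each section (closing sections 3 and 2), a fourth arrival is rejected, and the three buffered packets then drain one rotation at a time.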

B. Operational Complexity

The PS admission protocol in Figure 4 is clearly O(1). Although in the presentation of the PS transmission protocol the packets and the state information from section i are copied into section i+1, for 1 <= i <= n-1, in the implementation of PS the copy operation can be avoided by organizing the sections as a circular array. An index variable z is used to locate the array elements: the element (z - i) mod n of the array stores the information on section i, 1 <= i <= n. When the sections are rotated, we update the variable z to (z + 1) mod n, after which each element automatically represents the next higher section; the element that previously represented section n (which has just become empty) will represent section 1. The indices s_f do not need to be updated in the case of a circular array.

The section rotation operation initializes the variables section[1].NAvail[f], 1 <= f <= F, to those of an empty section, and hence the run time overhead of the packet transmission procedure is in the worst case O(F). However, this worst case complexity is only incurred when the sections are rotated. Since a rotation is done only when a closed section becomes empty, it is not executed for every packet transmission, and the average run time overhead is much less than O(F). Our simulations show that only a small number (less than 1%) of packet transmissions will initiate a section rotation. Furthermore, if the interval between two consecutive section rotations is sufficiently large, we can use the following programming trick to initialize section[1].NAvail[f], 1 <= f <= F, asynchronously. We use an array of n+1 elements to store the state information of the sections, where the element (z - i) mod (n+1) stores the information on section i. The remaining element, z mod (n+1), is not used and can be initialized asynchronously between two consecutive section rotations. When z is incremented, the initialized array element will then be used to store the state information of section 1, and the re-initialization step can be dropped from the transmission algorithm. The run time overhead of the transmission procedure then becomes O(1).

C.
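The circular-array layout can be illustrated in a few lines. This is our own sketch of the index arithmetic (the mapping and variable names are assumptions), showing that a rotation is a single index update with no copying:

```python
# Circular-array realization of section rotation: element (z - i) mod n
# of `elem` stores the state of logical section i, for i = 1..n.
n = 4
elem = [f"state{k}" for k in range(n)]  # physical array, never moved
z = 0

def section(i):
    """Physical element currently representing logical section i."""
    return (z - i) % n

before = [elem[section(i)] for i in range(1, n + 1)]
z += 1   # one rotation: every element's logical index grows by one
after = [elem[section(i)] for i in range(1, n + 1)]

# The element that represented section n now represents section 1,
# and the element of section i now represents section i+1.
print(before, after)
```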
Rate Guarantee Property

Assume that the total amount of buffer space for an output port is B and the link speed (transmission rate) is C. The amount of buffer space in each section is then equal to B/n. Assume that the amount of buffer space reserved in each section for a flow f is R_f; the total amount of space reserved for f is then n R_f. We will call a flow f saturated if section[n].NAvail[f] is equal to zero and section n is closed. The output data rate of a saturated flow is greater than or equal to ρ if R_f >= ρB/(nC), because when section n is closed, the amount of data that can enter section n is bounded by B/n, of which R_f belongs to flow f; hence f receives at least a fraction R_f/(B/n) of the link capacity C. The following proposition presents a buffer reservation method that will guarantee lossless service to leaky bucket rate constrained flows.

Proposition 1: If flow f is a conformant (σ, ρ) leaky bucket rate constrained flow and n >= 3, then allocating

    R_f = max( ρB/(nC), σ/(n-2) )        (1)

amount of buffer space in each section will guarantee lossless service for flow f.

Proof: Let A_f(t) and Q_f(t) denote the cumulative arrivals and the amount of data from flow f that is buffered in sections 1 to n-1 at time t, respectively. Notice that Q_f(t) does not include data buffered in section n. Define

    P_f(t) = σ - max over s <= t of [ A_f(t) - A_f(s) - ρ(t - s) ]        (2)

which is the burst potential of f defined in [5]. It represents the size of the token pool in the leaky bucket of flow f at time t and captures the potential burstiness at t. We will prove that flow f is lossless by showing:

    Q_f(t) <= (n - 1) R_f        (3)

Case 1: Q_f(t) = 0. Since the maximum burst potential is less than or equal to the burst size σ, and σ <= (n-2) R_f (which follows from (1)), the bound (3) holds.

Case 2: Q_f(t) > 0. Consider the interval [t0, t], where t0 is the first time that Q_f grows above zero without ever dropping back to zero:

    t0 = inf { t' <= t : Q_f(t'') > 0 for all t'' in [t', t] }

From the fact that Q_f(t'') > 0 for t'' in [t0, t], we can conclude that section n is closed during the entire interval [t0, t]. The PS method will rotate the sections continuously, and the number of bits transmitted per rotation is B/n. Also, each time the sections are rotated, Q_f will be decremented by R_f, since the saturated flow f has R_f of its data in section n-1 moved into section n.

Let u denote the time just prior to time t0, so that Q_f(u) = 0. Assume that there are m section rotations in [t0, t]. Therefore:

    Q_f(t) <= A_f(t) - A_f(u) - m R_f

The two sides are in general not equal, as it is possible that a part of the burst of flow f which arrives at time t0 is absorbed in section n. From (2), we have that:

    A_f(t) - A_f(u) <= σ + ρ(t - t0)

and therefore:

    Q_f(t) <= σ + ρ(t - t0) - m R_f

Let T_1, T_2, ..., T_m denote the times of the rotations, i.e., the times when the packets and state information from section i are transferred to section i+1. Section n contains at most B/n bits and drains at rate C, so the time between two consecutive rotations is at most B/(nC); similarly, T_1 - t0 <= B/(nC) and t - T_m <= B/(nC). Therefore:

    t - t0 <= (m + 1) B/(nC)

We can thus bound the LHS of (3) by:

    Q_f(t) <= σ + ρ(m + 1)B/(nC) - m R_f = σ + ρB/(nC) + m(ρB/(nC) - R_f)

From (1), we have that R_f >= ρB/(nC), so that:

    Q_f(t) <= σ + ρB/(nC) <= σ + R_f

It also follows from (1) that σ <= (n-2) R_f, and therefore Q_f(t) <= (n - 1) R_f, which completes the proof.

IV. NUMERICAL EXAMPLES

We have studied the performance of the PS buffer management scheme and compared it to the method in [5]. We call the method in [5] Single Section, or SS for short. Figure 5 shows the amount of buffer space reserved by PS and SS to provide lossless service to a flow when its burst size is varied. The link has an output buffer size of 1 MByte and the link speed is 48 Mbps. The flow is rate constrained by a (σ, ρ) leaky bucket with token rate ρ = 1 Mbps. The buffer reservation needed to guarantee lossless service in SS for this example is σ + ρB/C bytes [5]. The buffer allocation in PS depends on the number of sections n: from Proposition 1, the total PS reservation is n max(ρB/(nC), σ/(n-2)) = max(ρB/C, nσ/(n-2)) bytes. These functions for n equal to 10, 20, 40 and 100 have been plotted in Figure 5. For small values of σ, the constant term ρB/C dominates, and for larger values the buffer allocation in PS grows linearly with σ. The buffer allocation in PS is considerably less than in SS for a wide range of burst sizes, but for extremely bursty flows with very large values of σ, PS will reserve more buffer space than SS.
We can also see that the rate of increase in buffer allocation in PS is less when n is increased. PS will, however, maintain more state information for a larger value of n. Also, the section size B/n is reduced when n is increased, and this can result in more frequent section rotations. Therefore, the value of n should be kept reasonably small to achieve good performance. We recommend using n between 10 and 20.

Figure 6 shows the relationship between the buffer reservation and the output buffer size. One of the techniques to increase the capacity of a switch is to increase its buffer size. However, the buffer requirement for a flow is also increased, resulting in less efficient usage of the added buffer memory. Figure 6 shows how the PS scheme with 20 sections and the SS method use an increased amount of output buffer space. The figure depicts the amount of buffer reserved to guarantee lossless service for a (σ, ρ) leaky bucket conformant flow for ρ = 1 Mbps and three different values of σ: 50 KBytes, 100 KBytes and 150 KBytes. The link speed of the switch is 48 Mbps.

Fig. 5. Buffer reservation as a function of burst size (output buffer size = 1 MByte, token rate = 1 Mbps, link speed = 48 Mbps; curves for SS and for PS with 10, 20, 40 and 100 sections)
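The two reservation formulas compared in Figure 5 can be evaluated directly. The sketch below is our own code; the PS per-section form max(ρB/(nC), σ/(n-2)), rounded up to whole 53-byte cells, is the form we reconstruct in Proposition 1 and should be treated as an assumption. It reproduces the flow 9 reservations quoted in the simulation study (σ = 50 KB, ρ = 2 Mbps, B = 2 MB, C = 48 Mbps, 20 sections):

```python
import math

CELL = 53  # ATM cell size in bytes, as in the simulations

def ss_reservation(sigma, rho, B, C):
    """SS (single-section) threshold from [5]: sigma + rho*B/C bytes.
    sigma and B in bytes; rho and C in bits/sec."""
    return sigma + rho * B / C

def ps_reservation(sigma, rho, B, C, n):
    """PS total reservation: n sections, each holding the per-section
    reservation rounded up to a whole number of fixed-size cells."""
    per_section = max(rho * B / (n * C), sigma / (n - 2))
    per_section = math.ceil(per_section / CELL) * CELL
    return n * per_section

sigma, rho, B, C = 50e3, 2e6, 2e6, 48e6
print(ss_reservation(sigma, rho, B, C))      # ~133.3 KBytes
print(ps_reservation(sigma, rho, B, C, 20))  # ~83.7 KBytes
```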

Fig. 6. Buffer reservation as a function of output buffer size (token rate = 1 Mbps, link speed = 48 Mbps; Single Section and Pipelined Sections curves for burst sizes of 50 KB, 100 KB and 150 KB)

The buffer requirements for SS and PS are σ + ρB/C bytes and max(ρB/C, nσ/(n-2)) bytes, respectively. We see that when the output buffer size is increased, the buffer requirement increases linearly in SS. On the other hand, the buffer reservation in PS remains unchanged as long as ρB/C <= nσ/(n-2), in which case all of the added buffers will be available for use to provide service guarantees to new flows.

Fig. 7. Network used in the simulation study (flows 1 through 9 entering an output buffer served by a 48 Mbps link)

TABLE I
FLOWS USED IN THE SIMULATIONS

                Source rate             Reservation
Flow    Peak        Average         σ           ρ
1,2,3   16 Mbps     2 Mbps          50 KB       2 Mbps
4,5,6   40 Mbps     8 Mbps          100 KB      8 Mbps
7,8     40 Mbps     4 Mbps          50 KB       0.4 Mbps
9       40 Mbps     16 Mbps         50 KB       2 Mbps

We have also studied the loss and delay performance of the PS method using simulations. The scheduling method used in the simulations is FIFO. The network used in the simulation is shown in Figure 7. The transmission rate of the output link is 48 Mbps. There are nine input flows whose properties are shown in Table I. Column 1 in the table lists the flow indices. Columns 2 and 3 show the peak data rate and the average data rate of each flow. The packet arrivals of each flow are generated by a Markovian on/off process. When the Markovian process is in the on state, packets are transmitted at a constant rate equal to the peak rate in Table I. No packets are transmitted in the off state. The durations of the on and off periods for a flow are such that the peak rate multiplied by T_on/(T_on + T_off) equals the average rate, where T_on and T_off are the average lengths of the on and off periods, respectively. The average length of an on period in the simulations is set to 1 msec. Columns 4 and 5 of Table I show the reservation parameters.
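The conformant sources in the simulation can be sketched as an on/off generator followed by a leaky-bucket shaper. The sketch below is a much simplified, slotted version of our own (the slot length, seed and names are assumptions, and the slot on/off draw is memoryless rather than Markov-modulated); it shows only how the (σ, ρ) bucket caps the traffic admitted over any interval:

```python
import random

def on_off_leaky_bucket(peak, avg, sigma, rho, slots, dt=0.001, seed=1):
    """Slotted on/off source shaped by a (sigma, rho) leaky bucket.
    Rates in bits/sec, sigma in bits, dt in seconds. Returns bits passed."""
    rng = random.Random(seed)
    p_on = avg / peak                 # fraction of time in the on state
    tokens, out = sigma, 0.0
    for _ in range(slots):
        tokens = min(sigma, tokens + rho * dt)  # token pool refills at rho
        if rng.random() < p_on:                 # on: emit at the peak rate
            sent = min(peak * dt, tokens)       # the shaper caps the burst
            tokens -= sent
            out += sent
    return out

# Flow 1 of Table I: peak 16 Mbps, average 2 Mbps, sigma = 50 KB, rho = 2 Mbps.
total = on_off_leaky_bucket(16e6, 2e6, 50e3 * 8, 2e6, slots=10_000)
# Conformance: the output over T seconds can never exceed sigma + rho*T.
assert total <= 50e3 * 8 + 2e6 * (10_000 * 0.001)
```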
The switch uses the σ and ρ parameters to compute the amount of buffer reservation for each flow. Flows 1 through 6 are conformant to their profiles, and their Markovian arrival generation process is followed by a leaky bucket rate shaper with the specified burst size and token generation rate. Flows 7, 8 and 9 are non-conformant, and their arrivals are not shaped by a leaky bucket. Notice that the non-conformant flow 9 has a very high data rate, and it is important that the buffer management scheme detects its violation early. Similar flows are also used in a simulation study in [5]. The length of each packet is 53 bytes (ATM cells) and each simulation is run for 4,000 sec of simulation time.

TABLE II
BUFFER ALLOCATION FOR FLOWS IN THE SIMULATION

        0.7 MByte       1 MByte         2 MBytes
Flow    PS      SS      PS      SS      PS      SS
1,2,3
4,5,6
7,8
9                                       83.7    133.3
Total                                   1.45M   1.97M

Table II shows the buffer reservations for the flows in Table I made by a PS method with 20 sections and the SS method for three different output buffer sizes. The unit in this table is KBytes except where indicated otherwise. The last row in Table II lists the total amount of reservation needed to provide a lossless guarantee to all flows, provided that the flows are conformant. Notice in Table II that the buffer reservation in PS is much tighter than in SS. For example, the amount of space reserved to accommodate flow 9 in PS and SS is 83.7 KBytes and 133.3 KBytes, respectively, for a 2 MBytes output buffer. The lower amount of reservation allows the PS method to detect rogue packets earlier, and this reduces packet delay considerably, as we will see in another simulation experiment given in Table III below. Table II also shows that when the output buffer is 0.7 MByte, the PS scheme can ensure lossless

service to all flows if they are conformant. On the other hand, the SS scheme will need a total of 1.08 MBytes, which exceeds the total amount of buffer space. Hence, SS cannot provide lossless service with a 0.7 MByte output buffer even if all flows are conformant. Notice also that when the output buffer space is increased from 0.7 MByte to 1 MByte, the amount of reservation in PS and SS is increased by 149.4 KBytes and 200 KBytes, respectively. Hence, PS allocates 50% of the new buffer space to the existing flows while SS allocates 66.7%. The SS method will need an output buffer size of about 2 MBytes to ensure lossless service to all flows, provided that they are conformant.

Figure 8 shows the percentage of data loss incurred by each flow for output buffer sizes ranging from 50 KBytes to 500 KBytes. Although these buffer sizes are too small to guarantee lossless service, the figure clearly shows the difference between PS and SS in their ability to recognize oversubscription by non-conformant flows. We can see that the packet loss for each flow is similar in both methods for an output buffer size of 50 KBytes. But as soon as the buffer space is increased to 100 KBytes, PS rejects 36.4% of the packets from the high data rate non-conformant flow 9. In contrast, the SS method only drops 4.5% of flow 9's data when the buffer size is 100 KBytes, and the buffer size must be increased to 350 KBytes for SS to reject 36.0% of flow 9's packets.

Fig. 8. Percentage packet loss as a function of output buffer size (curves for PS and SS for flows 1,2,3; flows 4,5,6; flows 7,8; and flow 9)

Table III shows the average packet delay and the variance in delay for the flows. The output buffer size is 2 MBytes, which is sufficient for PS and SS to provide lossless service to all flows, provided that the flows are conformant. In the simulations, some packets from flows 7, 8 and 9 have been rejected by both PS and SS.

TABLE III
PACKET DELAY AND JITTER USING A 2 MBYTES BUFFER

            Delay                   Delay variance (ms)
Flow        PS          SS          PS          SS
1,2,3       1.91 ms     95.35 ms    3           26
4,5,6       1.86 ms     95.36 ms    2           26
7,8                     95.58 ms    280         26
9                       95.16 ms    199         27

We can see from Table III that the average delay experienced by packets from flows 1 through 6 in PS is almost 50 times less than in SS. This significant reduction is a result of several factors. First, the average queue length is significantly reduced in PS. From Table II, we can see that the maximum buffer occupancy in PS and SS is 1.45 MBytes and 1.97 MBytes, respectively. The lower buffer limit in PS results in a lower queueing delay for all flows. (The non-conformant flows experience more packet loss in PS, but the packets that are admitted will experience a better average delay.) PS also has a built-in scheduling component that provides improved delay for less bursty flows: highly bursty flows will saturate section n, and their packet arrivals will then be entered into the lower priority sections.

TABLE IV
NUMBER OF SECTION ROTATIONS PER PACKET SENT

Number of sections          10          20
Rotations per packet        1/569       1/256

Finally, Table IV shows the effect of n on the operational complexity of the PS buffer management method. The output buffer size in this experiment is 1 MByte, and the average number of section rotations is computed using the simulation experiment of Table I. Recall that a packet transmission causes PS to perform a section rotation operation when a closed section becomes empty. The table shows that for n = 10, a section rotation is performed on average once for every 569 packet transmissions, while for n = 20, it is performed on average once for every 256 transmissions. The average interval between two consecutive section rotations will be at least 5.02 msec and 2.25 msec for 10 and 20 sections, respectively.
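The rotation-interval figures quoted above follow from the cell size and the link rate; a quick check (53-byte cells at 48 Mbps, function name ours):

```python
def rotation_interval_msec(pkts_per_rotation, cell_bytes=53, link_bps=48e6):
    """Average time between rotations when one rotation occurs every
    `pkts_per_rotation` transmissions of `cell_bytes`-byte cells."""
    return pkts_per_rotation * cell_bytes * 8 / link_bps * 1000.0

print(rotation_interval_msec(569))  # about 5.0 msec for n = 10
print(rotation_interval_msec(256))  # about 2.3 msec for n = 20
```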
Recall that the section information can be initialized between two consecutive section rotations and if this interval time is large enough, the average run time complexity of PS becomes IEEE INFOCOM 2001
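The section-rotation discipline measured above can be sketched in a few lines of Python. This is an illustrative toy under simplifying assumptions, not the authors' implementation: the class and method names are invented, per-flow reservations are a fixed packet count per section rather than bytes, and a simple scan of the head section stands in for whatever packet scheduler PS is paired with.

```python
from collections import deque

class PipelinedSections:
    """Toy sketch of a Pipelined Sections buffer (hypothetical API).

    sections[0] is the highest-priority ("last") section that packets
    are transmitted from; the remaining sections buffer overflow at
    lower priorities.
    """

    def __init__(self, num_sections, per_flow_reservation):
        # Each section maps flow -> queue of buffered packets.
        self.sections = [dict() for _ in range(num_sections)]
        self.reservation = per_flow_reservation  # packets per flow per section
        self.rotations = 0

    def enqueue(self, flow, packet):
        # Arrivals go to the highest-priority section first and spill
        # into lower-priority sections as reserved space fills up.
        for section in self.sections:
            q = section.setdefault(flow, deque())
            if len(q) < self.reservation:
                q.append(packet)
                return True
        return False  # reservation exhausted everywhere: reject the packet

    def dequeue(self):
        # Transmit only from the highest-priority section.
        head = self.sections[0]
        for flow, q in head.items():
            if q:
                pkt = q.popleft()
                if not any(head.values()):
                    self._rotate()  # head section just became empty
                return pkt
        self._rotate()  # nothing left in the head section
        return None

    def _rotate(self):
        # The empty head section moves to the back and is reinitialized;
        # every lower-priority section gains one priority level.
        empty = self.sections.pop(0)
        empty.clear()
        self.sections.append(empty)
        self.rotations += 1
```

With two sections and a one-packet reservation, a flow's third back-to-back arrival is rejected immediately, which is the low-threshold early detection of non-conformant behavior that the loss measurements above illustrate.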

V. CONCLUSION

We have presented the novel Pipelined Sections buffer management method, which organizes the output buffer space into prioritized sections. Each flow that requires quality of service guarantees is assigned some buffer space in each section. Arrivals are first entered into the last section, which has the highest transmission priority. When the reserved space in the last section is exhausted, new arrivals are stored in the other sections. Only packets in the last section are transmitted; the remaining sections are used to buffer and organize packets. Occasionally, when the switch has transmitted all packets in the last section, the sections are rotated and packets from a lower priority section are moved to a higher priority section. The PS buffer management method can be used with any packet scheduling technique.

The PS method is highly scalable. Although it has a worst case run time complexity of O(N), where N is the number of flows, its average complexity is much less than O(N). Also, results from the simulation experiments indicate that the interval between two consecutive section rotations is sufficiently large to allow the information in a section to be initialized asynchronously, in which case the average run time complexity of PS is O(1). We have shown that the PS method can provide a rate guarantee to a leaky bucket constrained flow using significantly less buffer reservation than the technique in [5]. The reduced buffer thresholds allow PS to detect rate violations earlier, so fewer rogue packets from non-conformant flows are admitted into the switch. The average queue length is reduced, and simulation experiments have shown that packet delay improves significantly. In summary, its many strengths (rate guarantee, simplicity, scalability, low buffer reservation, lower delay and jitter) make PS a well-suited buffer management method for high-speed networks.

ACKNOWLEDGMENTS

The authors would like to thank the anonymous referees for their insightful comments.

REFERENCES

[1] S. Suri, D. Tipper and G. Meempat, "Comparative Evaluation of Space Priority in ATM Networks," Proc. IEEE Infocom 1994, Toronto, Canada, June 1994.
[2] A. K. J. Parekh and R. G. Gallager, "A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single Node Case," IEEE/ACM Transactions on Networking, vol. 1, no. 3.
[3] S. J. Golestani, "A Self-Clocked Fair Queueing Scheme for Broadband Applications," Proc. IEEE Infocom 1994.
[4] S. Suri, G. Varghese and G. Chandranmenon, "Leap Forward Virtual Clock: A New Fair Queueing Scheme with Guaranteed Delays and Throughput Fairness," Proc. IEEE Infocom 1997.
[5] R. Guérin, S. Kamat, V. Peris and R. Rajan, "Scalable QoS Provision Through Buffer Management," Proc. ACM SIGCOMM 1998.
[6] A. Demers and S. Keshav, "Analysis and Simulation of a Fair Queueing Algorithm," Proc. ACM SIGCOMM 1989, pp. 1-12.
[7] M. I. Irland, "Buffer Management in a Packet Switch," IEEE Transactions on Communications, vol. 26.
[8] G. Latouche, "Exponential Servers Sharing a Finite Storage: Comparison of Space Allocation Policies," IEEE Transactions on Communications, vol. 28.
[9] A. K. Choudhury and E. L. Hahne, "Dynamic Queue Length Thresholds in a Shared Memory ATM Switch," Proc. IEEE Infocom 1996.
[10] D. Ferrari and D. C. Verma, "A Scheme for Real-Time Channel Establishment in Wide-Area Networks," IEEE JSAC, vol. 8, no. 3.
[11] J. Liebeherr and D. E. Wrege, "A Versatile Packet Multiplexer for Quality-of-Service Networks," Proc. 4th International Symposium on High Performance Distributed Computing (HPDC-4).
[12] D. E. Wrege and J. Liebeherr, "A Near-Optimal Packet Scheduler for QoS Networks," Proc. IEEE Infocom.


More information

Quality of Service in the Internet

Quality of Service in the Internet Quality of Service in the Internet Problem today: IP is packet switched, therefore no guarantees on a transmission is given (throughput, transmission delay, ): the Internet transmits data Best Effort But:

More information

RSVP 1. Resource Control and Reservation

RSVP 1. Resource Control and Reservation RSVP 1 Resource Control and Reservation RSVP 2 Resource Control and Reservation policing: hold sources to committed resources scheduling: isolate flows, guarantees resource reservation: establish flows

More information

Resource Control and Reservation

Resource Control and Reservation 1 Resource Control and Reservation Resource Control and Reservation policing: hold sources to committed resources scheduling: isolate flows, guarantees resource reservation: establish flows 2 Usage parameter

More information

Quality of Service in the Internet

Quality of Service in the Internet Quality of Service in the Internet Problem today: IP is packet switched, therefore no guarantees on a transmission is given (throughput, transmission delay, ): the Internet transmits data Best Effort But:

More information

QoS for Real Time Applications over Next Generation Data Networks

QoS for Real Time Applications over Next Generation Data Networks QoS for Real Time Applications over Next Generation Data Networks Final Project Presentation December 8, 2000 http://www.engr.udayton.edu/faculty/matiquzz/pres/qos-final.pdf University of Dayton Mohammed

More information

FACULTY OF COMPUTING AND INFORMATICS

FACULTY OF COMPUTING AND INFORMATICS namibia UniVERSITY OF SCIEnCE AnD TECHnOLOGY FACULTY OF COMPUTING AND INFORMATICS DEPARTMENT OF COMPUTER SCIENCE QUALIFICATION: Bachelor of Computer Science {Honours) QUALIFICATION CODE: 08BCSH LEVEL:

More information

NOTE03L07: INTRODUCTION TO MULTIMEDIA COMMUNICATION

NOTE03L07: INTRODUCTION TO MULTIMEDIA COMMUNICATION NOTE03L07: INTRODUCTION TO MULTIMEDIA COMMUNICATION Some Concepts in Networking Circuit Switching This requires an end-to-end connection to be set up before data transmission can begin. Upon setup, the

More information

Application of Network Calculus to the TSN Problem Space

Application of Network Calculus to the TSN Problem Space Application of Network Calculus to the TSN Problem Space Jean Yves Le Boudec 1,2,3 EPFL IEEE 802.1 Interim Meeting 22 27 January 2018 1 https://people.epfl.ch/105633/research 2 http://smartgrid.epfl.ch

More information

Performance and Evaluation of Integrated Video Transmission and Quality of Service for internet and Satellite Communication Traffic of ATM Networks

Performance and Evaluation of Integrated Video Transmission and Quality of Service for internet and Satellite Communication Traffic of ATM Networks Performance and Evaluation of Integrated Video Transmission and Quality of Service for internet and Satellite Communication Traffic of ATM Networks P. Rajan Dr. K.L.Shanmuganathan Research Scholar Prof.

More information

Dynamic Window-Constrained Scheduling for Multimedia Applications

Dynamic Window-Constrained Scheduling for Multimedia Applications Dynamic Window-Constrained Scheduling for Multimedia Applications Richard West and Karsten Schwan College of Computing Georgia Institute of Technology Atlanta, GA 3332 Abstract This paper describes an

More information

A DiffServ IntServ Integrated QoS Provision Approach in BRAHMS Satellite System

A DiffServ IntServ Integrated QoS Provision Approach in BRAHMS Satellite System A DiffServ IntServ Integrated QoS Provision Approach in BRAHMS Satellite System Guido Fraietta 1, Tiziano Inzerilli 2, Valerio Morsella 3, Dario Pompili 4 University of Rome La Sapienza, Dipartimento di

More information

Worst-case Ethernet Network Latency for Shaped Sources

Worst-case Ethernet Network Latency for Shaped Sources Worst-case Ethernet Network Latency for Shaped Sources Max Azarov, SMSC 7th October 2005 Contents For 802.3 ResE study group 1 Worst-case latency theorem 1 1.1 Assumptions.............................

More information

FDDI-M: A SCHEME TO DOUBLE FDDI S ABILITY OF SUPPORTING SYNCHRONOUS TRAFFIC

FDDI-M: A SCHEME TO DOUBLE FDDI S ABILITY OF SUPPORTING SYNCHRONOUS TRAFFIC FDDI-M: A SCHEME TO DOUBLE FDDI S ABILITY OF SUPPORTING SYNCHRONOUS TRAFFIC Kang G. Shin Real-time Computing Laboratory EECS Department The University of Michigan Ann Arbor, Michigan 48109 &in Zheng Mitsubishi

More information

INTEGRATION of data communications services into wireless

INTEGRATION of data communications services into wireless 208 IEEE TRANSACTIONS ON COMMUNICATIONS, VOL 54, NO 2, FEBRUARY 2006 Service Differentiation in Multirate Wireless Networks With Weighted Round-Robin Scheduling and ARQ-Based Error Control Long B Le, Student

More information

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks X. Yuan, R. Melhem and R. Gupta Department of Computer Science University of Pittsburgh Pittsburgh, PA 156 fxyuan,

More information

Congestion Control in Communication Networks

Congestion Control in Communication Networks Congestion Control in Communication Networks Introduction Congestion occurs when number of packets transmitted approaches network capacity Objective of congestion control: keep number of packets below

More information