
Delay Bounds In a Network With Aggregate Scheduling (footnote 1)
Anna Charny (acharny@cisco.com)
Draft Version 4. 2/29/00.

Abstract

A large number of products implementing aggregate buffering and scheduling mechanisms have been developed and deployed, and still more are under development. With the rapid increase in the demand for reliable end-to-end QoS solutions, it becomes increasingly important to understand the implications of aggregate scheduling on the resulting QoS capabilities. This document studies the bounds on the worst case delay in a network implementing aggregate scheduling. A lower bound on the worst case delay is derived. It is shown that in a general network configuration the delays achieved with aggregate scheduling are very sensitive to utilization and can in general be quite high. It is also shown that for a class of network configurations described in the paper it is possible to give an upper bound that provides reasonable worst case delay guarantees for reasonable utilization numbers. These bounds are a function of network utilization, the maximum hop count of any flow, and the shaping parameters at the network ingress. It is also argued that for a general network configuration and utilization numbers which are independent of the maximum hop count, an upper bound on delay, if it exists, must be a function of the number of nodes and/or the number of flows in the network.

1.0 Introduction

1.1 Motivation

In the last decade, the problem of providing end-to-end QoS in Integrated Services networks has received a lot of attention. One of the major challenges in providing hard end-to-end QoS guarantees is the problem of scalability. Traditional approaches to providing hard end-to-end QoS guarantees, which involve per-flow signalling, buffering and scheduling, are difficult, if not impossible, to implement in very high-speed core equipment. As a result, methods based on traffic aggregation have recently become widespread. A notable example of methods based on aggregation has been developed in the Differentiated Services Working Group of the IETF. In particular, [4] defines the Expedited Forwarding Per Hop Behavior (EF PHB), the underlying principle of which is to ensure that at each hop the aggregate of traffic requiring EF PHB treatment receives a service rate exceeding the total bandwidth requirements of all flows in the aggregate at this hop. Recently, a lot of practical implementations of EF PHB have emerged in which all EF traffic is shaped and policed at the backbone ingress, while in the core equipment all EF traffic shares a single priority FIFO or a single high-weight queue in a Class-Based Fair Queueing scheduler. Since these implementations offer a very high degree of scalability at a comparatively low price, they are naturally very attractive.

Sufficient bandwidth must be available on any link in the network to accommodate the bandwidth demands of all individual flows requiring end-to-end QoS. One way of ensuring such bandwidth availability is by generously overprovisioning the network capacity. Such overprovisioning requires adequate methods for measuring and predicting traffic demands. While a lot of work is being done in this area, there remains a substantial concern that the absence of explicit bandwidth reservations may undermine the ability of the network to provide hard QoS guarantees.

Footnote 1: This document is a draft that is substantially revised from its previous version.

As a result, approaches based on explicit aggregate bandwidth reservations using RSVP aggregation are being proposed [1]. The use of RSVP aggregation allows the uncertainty in bandwidth availability to be overcome in a scalable manner. Methods based on RSVP aggregation can therefore provide hard end-to-end bandwidth guarantees to individual flows sharing the traffic aggregate covered by a particular aggregate RSVP reservation. However, regardless of whether bandwidth availability is ensured by overprovisioning or by explicit bandwidth reservations, there remains the question of whether it is possible to provide not only bandwidth guarantees but also end-to-end latency and jitter guarantees for traffic requiring such guarantees, in the case when only aggregate scheduling is implemented in the core.

An important example of traffic requiring very stringent delay guarantees is voice. A popular approach to supporting voice traffic is to use EF PHB and serve it at high priority. The underlying intuition is that as long as the load of voice traffic is substantially smaller (footnote 2) than the service rate of the voice queue, there will be no (or very little) queueing delay and hence very little latency and jitter. However, little work has been done in quantifying just how much voice traffic can actually be supported without violating the stringent end-to-end delay budget that voice traffic requires.

Another example where a traffic aggregate may require a stringent latency and jitter guarantee is the so-called Virtual Leased Line (VLL) [4]. The underlying idea of the VLL is to provide a customer with an emulation of a dedicated line of a given bandwidth over the IP infrastructure. One attractive property of a real dedicated line is that a customer who has traffic with diverse QoS requirements can perform local scheduling based on those requirements and on local bandwidth allocation policies, and the end-to-end QoS experienced by his traffic will be entirely under the customer's control. One way to provide a close emulation of this service over the IP infrastructure could be to allocate a queue per VLL in a WFQ scheduler at each router in the network. However, since there could be a very large number of VLLs in the core, per-VLL queueing and scheduling presents a substantial challenge. As a result, it is very attractive to use aggregate queueing and scheduling for VLL traffic as well. However, aggregate buffering and scheduling introduces substantially more jitter than per-VLL scheduling. This jitter results in substantial deviation from the hard-pipe model of the dedicated link, and may cause substantial degradation of service as seen by delay-sensitive traffic inside a given VLL. Hence, understanding just how severe the degradation of service (as seen by delay-sensitive traffic within a VLL) can be with aggregate scheduling is an important problem that has not been well understood.

Understanding the delay and jitter characteristics of aggregate traffic is also important in the context of providing end-to-end Guaranteed Service [2] across a network with aggregate scheduling, such as a Diffserv cloud implementing EF PHB. The availability of a meaningful mathematical delay bound which can be derived from some topological parameters of the Diffserv cloud and some global characteristics of delay-sensitive traffic makes it possible to view the Diffserv cloud as a single node in the path of a flow, and therefore enables providing Guaranteed Service end-to-end [7].
One of the goals of this document is to quantify the service guarantees that can be achieved with aggregate scheduling. With respect to this goal, this document takes the unpopular approach of analyzing worst case performance. The lack of popularity of worst case analysis is based on the perception that worst case scenarios are corner cases which are very unlikely to occur, and that even when they do occur they affect rare random packets, so that the effect is highly unlikely to be noticeable by the users of the service. This opinion is frequently supported by average case analysis based on classic queueing theory assumptions, or by simulation results.

Footnote 2: A commonly used rule of thumb is that priority traffic utilization should not exceed 50% of the link bandwidth.

Unfortunately this view may be too optimistic. Even in the case of traditional data traffic, classic queueing results based on the assumption of independence and Poisson models have long been under attack. The realization of the limitations of these models for Internet data traffic has led to the development of models based on self-similarity of network traffic, but the applicability of self-similar models to real-time traffic like voice is questionable as well. The problem is that encoded voice traffic is periodic by nature, and voice generated by devices such as VoIP PBXs using hardware coders synchronized to network SONET clocks can additionally be highly synchronized. The consequence of periodicity is that if a bad pattern occurs for one packet of a flow, it is likely to repeat itself periodically for packets of the same flow. Therefore, it is not just a random isolated packet that can be affected by a bad pattern (making it very unlikely that more than a very small number of packets of any given flow would be affected, and hence justifying the assumption that users will never notice such a rare event); rather, it is a random flow or set of flows that may see consistently bad performance for the duration of the call. As the amount of VoIP traffic increases, every now and then a bad case will happen, and even if it does not happen frequently, when it does occur a set of users is quite likely to suffer potentially severe performance degradation. Although it is not difficult to demonstrate performance degradation in software or paper simulation, it is unfortunately unlikely that simulation results (which by necessity use a small number of flows in simple network configurations) will give us insight into just how often one would see these problems in the real-world Internet. Hence, it is unfortunately not clear at all how to adequately evaluate the frequency of these events, given that we appear to have no tools other than simplistic simulation of a relatively small number of flows in relatively simple network configurations to work with. These considerations provide motivation to take a closer look at the worst case behavior.

This paper undertakes the task of studying the worst case delays in a network with aggregate scheduling. It is demonstrated that the lower bound on the worst case delay is quite high. It is also shown that for a class of networks it is possible to derive an upper bound on the worst case delay as a function of the utilization factor, the maximum hop count of any flow, and the parameters of the ingress shapers. It is shown that for the considered class of network configurations the bound is quite tight. Recently, an upper bound on delay in an arbitrary network with aggregate scheduling has been derived in [8]. The upper bound of [8] is derived under the condition that the maximum allowed utilization is bounded by a function which is close (footnote 3) to 1/(h-1), where h is the maximum hop count of any flow. While the bound of [8] currently appears to be the only bound which holds for an arbitrary network configuration, unfortunately it explodes in the vicinity of α = 1/(h-1). This means, unfortunately, that for a network with a reasonable diameter (e.g. 10 hops) the values of utilization which yield a reasonable delay bound are quite small (substantially less than 10%).
In fact, as shown in this paper, for small enough values of utilization the bound of [8] can be improved, but the practical usefulness of this result is limited due to the small values of utilization involved. One contribution of this paper is to argue that for α > 1/(h-1) an upper bound on the delay in an arbitrary network topology, if it exists, can no longer be a function of α and h only, but rather must depend on some other parameters of the network reflecting its size, such as the number of ingress nodes of the network. The existence of such a bound for α > 1/(h-1), and its exact form for a general network configuration, remains at the moment an open question.

Footnote 3: For sufficiently large routers, where the total capacity of all inputs is substantially greater than the speed of an individual link.

1.2 Model, Assumptions and Terminology

The network model considered in this paper is illustrated in Fig. 1. It is assumed that the backbone, depicted by the cloud, is a graph of general topology. Each node in this graph (not shown) will be referred to as a backbone, or core, router. The devices at the edge of the backbone will be referred to as backbone edge routers (BE). Connected to the BE are the access network edge routers (AE). For example, a BE may be an edge device of an Internet Service Provider (ISP), while an AE may be a customer router connecting the customer's network to the ISP. It is assumed that all devices in the backbone implement aggregate class-based scheduling, such as strict Priority Scheduling or Class-Based Weighted Fair Queueing (CBWFQ). It is also assumed that traffic aggregates requiring QoS guarantees inside the network are shaped at the ingress to the backbone [4]. This is frequently referred to as traffic conditioning. However, there is substantial flexibility in exactly what constitutes a traffic aggregate that is being shaped, and exactly where the shaping is performed.

In the context of EF PHB, one frequently assumed model is that an aggregate is all EF traffic of a particular customer, regardless of the destination of that traffic. This is the assumption of the so-called hose model. In this case shaping is performed at the AE, while the BE is typically assumed to perform per-customer policing and then put all EF traffic of all its customers into a priority FIFO (or a queue in a CBWFQ scheduler). In another model, frequently referred to as the pipe model, each customer (AE) shapes its aggregate traffic to each of several potential destinations, while the BE may either police each aggregate individually and place all traffic of a particular class in a class FIFO, or reshape the traffic of each customer (footnote 4). While the location of the device performing the shaping and the granularity of the traffic aggregates on which the shaping is performed differ in all of these cases, all shaping models discussed above fall into the model shown in Fig. 2. Here, individual customer flows (for a suitable definition of flow) are shaped at the customer egress (AE), and can be optionally aggregated and/or reshaped at the backbone edge (BE).

Figure 1. Network model. Core routers inside the backbone cloud are not shown.

Footnote 4: Reshaping requires per-flow queueing and scheduling, which is frequently unavailable at the BE level.

At yet a higher level of abstraction, the network with aggregate class-based scheduling can be modelled as shown in Fig. 3. In this conceptual model, flows belonging to a particular class arrive at the backbone shaped to their rates, while in every core router all flows of a given class share a single class FIFO which is served either by a priority scheduler or by a CBWFQ scheduler. The term "shaped flows" here needs to be clarified. Depending on the context, the term flow is used to refer to some aggregate of traffic sharing the same path through the backbone. A flow can be a particular customer's traffic, an aggregate of some subset of traffic of several customers, all traffic sharing a particular path, etc. The term "shaped flow" is used to mean that the traffic of a flow i entering the backbone conforms to a leaky bucket (r(i), b(i)). Formally, a flow is said to conform to a leaky bucket (r, b) across some boundary if the amount of traffic A(t1, t2) of this flow crossing this boundary in an arbitrary interval of time (t1, t2) satisfies the constraint A(t1, t2) <= (t2 - t1) r + b (the latter is frequently referred to as the leaky-bucket constraint). Given that a flow consists of discrete packets, the burst size b cannot be less than the size of the largest packet, so that the flow can send one packet at link speed without violating the leaky bucket constraint. In particular, a flow with maximum packet size L_max will be said to be ideally shaped at rate r if it satisfies the leaky bucket constraint with b = L_max. The depth b of the leaky bucket will frequently be referred to as the burst, or burstiness, of the flow.

As mentioned in section 1.1, the ability to provide delay and latency guarantees to a subset of traffic assumes the availability of sufficient bandwidth everywhere in the network. This document assumes that there exists a mechanism (based on explicit reservations such as described in [1], or on some reliable provisioning technique) that ensures that there is enough bandwidth on all links in the network to satisfy the bandwidth needs of all delay-sensitive traffic. In fact, it is assumed that the sum of the rates of all leaky buckets of all flows sharing a particular delay-sensitive class queue in any backbone node does not exceed a fixed fraction of the service rate of that queue.

Figure 2. Ingress shaping model.
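To make the leaky-bucket definition above concrete, the following short sketch (illustrative only, not part of the paper; the helper name and trace are invented) checks whether a packet trace conforms to a leaky bucket (r, b). It relies on the standard token-bucket equivalence of the constraint A(t1, t2) <= (t2 - t1) r + b: the trace conforms if and only if a bucket of depth b, refilled at rate r and debited by each packet, never goes negative.

    def conforms_to_leaky_bucket(arrivals, r, b):
        """Check conformance of a packet trace to a leaky bucket (r, b).

        arrivals: list of (time_s, size_bits) tuples sorted by time;
        r: rate in bits per second; b: bucket depth (burst) in bits.
        """
        tokens = b                # bucket starts full
        last_t = None
        for t, size in arrivals:
            if last_t is not None:
                tokens = min(b, tokens + (t - last_t) * r)  # refill at rate r, capped at b
            last_t = t
            tokens -= size        # the packet withdraws its size in bits
            if tokens < -1e-9:
                return False
        return True

    # An ideally shaped 24 Kbps flow of 60-byte (480-bit) packets every 20 ms:
    trace = [(0.020 * k, 480) for k in range(100)]
    print(conforms_to_leaky_bucket(trace, r=24_000, b=480))  # True: b equal to one packet suffices
    print(conforms_to_leaky_bucket(trace, r=24_000, b=400))  # False: b smaller than the packet size

The second call also illustrates the remark above that the burst parameter b cannot be smaller than the largest packet of the flow.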

Figure 3. Conceptual model of aggregate class-based scheduling. All flows shown here belong to the same class.

In particular, if the class is served at the highest priority, then it is assumed that on any link of capacity C at most αC is used by priority traffic. In the case where the class queue is served by a CBWFQ scheduler with a particular weight 0 < w < 1, it is assumed that the combined rate of all traffic sharing the queue does not exceed αS, where S = wC is the guaranteed service rate of that queue. In either case, it will be said that the utilization of traffic in any class is assumed to never exceed some fixed α on any link, where α is a network-wide parameter (that is, utilization is understood with respect to the service rate of the queue rather than the total capacity of the link). Finally, this document makes a standard (albeit simplifying) assumption that all devices in the network are output buffered. Additional delay terms may be needed in the delay bounds to accommodate possible deviations of real-life devices from the output-buffered model.

1.3 Overview

Section 2 investigates various causes of the increase in burstiness and the resulting delay in a network with aggregate scheduling. It is demonstrated in several examples that, despite popular belief, it is possible to achieve very high edge-to-edge delays even if delay-sensitive traffic such as voice is served at strict priority and the utilization of priority traffic is less than 50% of the link. Section 3 derives an upper bound on edge-to-edge delay for a class of network topologies as a function of several characteristic network parameters, the accuracy of ingress shaping and the minimal edge-to-edge aggregate rate.

It is demonstrated that, at least for the considered class of topologies, low utilization is essential to meet the voice delay budget in the worst case. It is also shown that there exists an area in the space of parameters where a reasonable delay budget can be met in the worst case, for the class of network topologies considered, as long as the routes of all flows are limited to a fixed number of hops. It is also demonstrated that the theoretical delay bound obtained in Section 3 is actually very tight for the considered class of network topologies. It is argued that both the achievable delay and the worst case delay bound for the considered class of topologies increase as the inaccuracy of rate shaping increases, and decrease as the edge-to-edge aggregate shaping rate increases. Section 4 considers delay bounds for a general topology network. It is argued that for utilization numbers that are outside the parameter space where the bound of [8] exists, the delay bound, if it exists, must be a function of the size of the network or some other network parameters, rather than just the hop count h and the utilization factor α. Finally, Section 5 gives a summary and a short discussion of the results.

2.0 Reasons for Large Worst-case Delay with Aggregate Scheduling

Using a priority FIFO to support highly delay-sensitive traffic such as voice and video is based on the intuition that as long as traffic requiring strict delay guarantees is served at a rate exceeding its bandwidth requirements, then, regardless of the queueing structure at the routers, the queueing delay will be minimal. Hence, there appears to be no need to invest in per-flow state, queueing and/or complicated scheduling: a single priority FIFO will suffice, as long as the link speed is larger than the input rate. A similar argument also applies to a queue in a CBWFQ server, as long as the weight of the queue is sufficiently large to ensure that the service rate is greater than the total input rate of all traffic sharing the queue. It is easy to see that, from the point of view of potential worst case queueing delay, a queue in a CBWFQ scheduler can be no better than a strict priority FIFO on a link of the corresponding capacity. Hence, if large delay can be demonstrated for the case of a strict priority FIFO, it can also be demonstrated for CBWFQ. With this in mind, the discussion in this section (the goal of which is to demonstrate that large delays can occur even if the service rate of the aggregate queue is substantially larger than the input rate of traffic sharing the queue) will be given in the context of a strict priority FIFO.

Unfortunately, while it is true that ensuring that the service rate exceeds the input rate implies that there will be no priority traffic queued anywhere on average, this does not mean that the instantaneous queues will always be small. As will be shown below, even if all flows are ideally shaped and the priority traffic load is substantially smaller than the link bandwidth, instantaneous queues can be quite large. Such instantaneous queue build-up introduces a certain amount of jitter, which increases the burstiness of otherwise ideally shaped traffic. This burstiness can in turn cause further delay as the traffic, which is no longer ideally shaped, merges downstream, causing yet more delay, jitter and further accumulation of burstiness. This effect may be especially severe if some portion of the priority traffic sharing the FIFO has a substantial burst-to-rate ratio.
However, even in the case of identical flows, such as voice-only traffic sharing the priority FIFO, burst accumulation over several hops may be large enough to cause substantial violation of the tight delay budget needed for highly delay-sensitive traffic such as real-time voice. Fortunately, it turns out that there exists an area in the parameter space where it is possible to provide delay guarantees across a network. However, to provide some intuition of exactly what could cause excessive delays in an under-utilized network, the remaining part of this section discusses various factors contributing to burst accumulation and the resulting increase in end-to-end delay. All examples in this section should be understood in the context of Figures 2 and 3.

More specifically, Examples 1-4 assume that no shaping is performed at the BE, but that each AE individually shapes its traffic requiring priority treatment. In these examples, the smallest-granularity entity to which shaping can be applied is a single microflow (such as a single voice conversation). It is shown that, in general, aggregate buffering and scheduling inside the backbone can result in unacceptably high delay, unless the utilization of delay-sensitive traffic is kept much lower than is widely assumed.

2.1 Increase in Burstiness Due to a Difference in Reserved Rates

The following example demonstrates that even if flows are ideally shaped at the input to a priority FIFO, and even if the total bandwidth of all priority flows is much smaller than the link bandwidth, at the output from the FIFO the burstiness of some flows may drastically increase at a single hop.

Example 1. Consider a 1.5 Mbps link, and a 0.5 Mbps flow (let's color it red) sharing this link with fifty 5 Kbps flows (let's color them blue). This corresponds to 50% link utilization. Assume that all packets are 50 bytes long. Suppose all these flows are ideally shaped to conform to leaky buckets with burst B = 50 bytes (1 packet) and the corresponding rates. That means that at the input, packets of the red flow are ideally spaced 0.8 ms apart. Suppose that the router has 50 inputs and that there is one blue flow per input. Suppose further that at time zero a packet of each blue flow arrives at the output interface. During this time at most one blue packet could have been transmitted on the 1.5 Mbps output link. If a red packet arrives immediately following all the blue packets, it will therefore find a queue of 49 blue packets ahead of it. That means that it will be delayed by 49 packet times at link speed. While the red packet is waiting for transmission, roughly 17 more red packets arrive (since red packets are expected to arrive every three packet times at link speed), so when the blue queue finally drains, the red flow will have an 18 packet back-to-back burst. That means that the leaky bucket burst parameter characterizing the red flow's traffic at the output is roughly 12 packets (footnote 5), which exceeds the input burst parameter by a factor of about 12. This example is illustrated in Fig. 4. It is easy to see that the larger the difference between the rates of the blue and the red flows (and hence the more blue flows there can be without violating the 50% utilization constraint), the higher the increase in the burstiness of the red flow that can occur.

There are two simple observations that follow from this example:

Observation 1: Although the link is undersubscribed and the queue is indeed empty on average, the difference in flow rates can cause the instantaneous queue to be large even if traffic is ideally shaped at the input to a router.

Observation 2: A flow ideally shaped at the input may have potentially large burstiness at the output from the FIFO queue even if the link is underutilized. This increases burstiness at the input to a downstream node.

Footnote 5: At the ideal rate of r = 0.5 Mbps a red packet should be transmitted every third packet transmission time on a 1.5 Mbps link. Therefore only 6 packets should have been transmitted in the transmission time of the 18 packet red burst, which means that there were 12 packets in excess of the ideal rate transmitted in the given interval. Hence, by definition, the leaky bucket to which the red flow conforms at the output must have a burst B of at least 12 packets.
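The arithmetic of Example 1 can be replayed directly; the sketch below (illustrative only, not part of the paper) reproduces the 49-packet queue seen by the first red packet, the resulting back-to-back red burst, and the output burst parameter of roughly 12 packets, following the same simplified counting used in the text.

    PKT_BITS = 50 * 8            # 50-byte packets
    LINK_BPS = 1_500_000         # 1.5 Mbps output link
    RED_BPS  = 500_000           # red flow rate: one packet every 3 packet times
    N_BLUE   = 50                # fifty 5 Kbps blue flows, one per input

    pkt_time          = PKT_BITS / LINK_BPS        # one packet time on the output link
    red_pkts_per_slot = RED_BPS / LINK_BPS         # red packets arriving per packet time (1/3)

    queue_ahead = N_BLUE - 1                       # 49 blue packets ahead of the first red one
    wait_slots  = queue_ahead + 1                  # the first red packet departs ~50 packet times later
    more_reds   = round(wait_slots * red_pkts_per_slot)   # ~17 further red arrivals meanwhile
    burst       = 1 + more_reds                           # ~18 red packets back to back

    # Output leaky-bucket burst: packets in the burst minus what rate r alone
    # would have allowed during the burst's own transmission time.
    excess = burst - burst * red_pkts_per_slot            # ~12 packets

    print(f"delay ~ {queue_ahead} packet times ({queue_ahead * pkt_time * 1e3:.1f} ms), "
          f"red burst ~ {burst} packets, output burst parameter ~ {excess:.0f} packets")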

Figure 4. Example 1: increase in burstiness as a result of different flow rates.

2.2 Increase in Burstiness Due to Arrival Synchronization of a Large Number of Similar-Rate Flows

Example 1 took advantage of the difference among the flow rates to demonstrate how the burstiness of a flow can dramatically increase at a single hop. The next example demonstrates that even if all flows have the same rate and burst at the input to a router, and even if all links are underutilized, a packet can still encounter substantial delay that may be harmful to real-time voice traffic or other highly delay-sensitive traffic. To emphasize the dependence of delay on various parameters, Example 2 is first derived parametrically, and then evaluated numerically.

Example 2. Consider a router with d+1 inputs and some number of outputs. Let 0 < α < 1 denote the maximum priority traffic utilization factor allowed on any link. Consider a single output link with capacity C_out. Consider a single flow with reserved rate r. Let's color this flow red. Let this flow arrive from a (red) input link of the same capacity as the output link. Consider further n other flows (which we color blue) sharing the output link with the red flow. Assume that all of these flows are equally distributed among the remaining d input links (those links will also be colored blue), and that each of the red and blue flows conforms to a leaky bucket (b, r).

Assuming that the rate r of each individual flow is small compared to the combined rate of all flows (footnote 6), the maximum number of blue flows sharing the output link is

    n = (αC_out - r)/r ~ αC_out/r    (1)

Assume that all blue input links have the same capacity C_in. Consider the time t in which a burst of size nb/d can arrive at the speed of a blue input link: t = nb/(d C_in). During this time a total of nb blue bits can arrive at the router, while t C_out = nb C_out/(d C_in) bits may have left on the output link. Hence at time t the maximum blue queue at the output link could have been Q = nb - nb C_out/(d C_in). It takes time D = Q/C_out to drain this queue, so a red packet arriving at the output at time t will find the queue Q and will be delayed by D = nb/C_out - nb/(d C_in). Using (1), this yields

    D = (αb/r)(1 - C_out/(d C_in))    (2)

Note that (2) implies that the absolute speed of the links is irrelevant here, since the delay depends primarily on the ratio of the speed of the output to the combined speed of all blue inputs. This result is contrary to the commonly expressed intuition that large delays can never happen in the backbone because the link speeds are fast enough: it is not the absolute speed of the interfaces, but rather the number of interfaces and the relative difference in the speeds of different interfaces, that plays the central role here. Essentially the same delay would occur with a 622 Mbps output and ten 2.5 Gbps inputs, or with a 155 Mbps output and a similar number of proportionally faster inputs (which could happen in a high-speed backbone), as with a 1.5 Mbps output and four OC3 inputs (which could happen at the network egress).

To see numerically the implication of equation (2), suppose that all flows in Example 2 are voice flows using G.729 encoding. Assume that we allow voice utilization up to α = 50%, a number commonly quoted as a boundary for adequate priority FIFO performance. Suppose further that we are looking at a router in a backbone which has a number of OC48 and OC3 links, and consider the voice flows converging from 10 OC48 links onto an OC3 link. Then, using the notation of Example 2, we have C_out = 155 Mbps, C_in = 2.5 Gbps, d = 10. G.729 generates 20-byte packets every 20 ms. Adding the 40 bytes of the IP/UDP/RTP header, we get 60-byte packets, which translates to a 24 Kbps bit rate. Assume for now that all flows are ideally shaped at the input to the router, so that the maximum per-flow burst b is just the length of a single packet. Substituting b = 60 bytes and r = 24 Kbps into (2), we get a queueing delay of D = 10 ms at one backbone hop.

Footnote 6: This assumption is equivalent to saying that there are many flows sharing the link. It is made merely for algebraic convenience and does not impact any of the conclusions drawn from this example.
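Equation (2) is straightforward to evaluate; the short sketch below (illustrative only, not part of the paper) reproduces the ~10 ms figure for the G.729 parameters just used (α = 0.5, 60-byte packets at 24 Kbps, an OC3 output fed by ten OC48 inputs).

    def example2_delay(alpha, b_bits, r_bps, c_out_bps, c_in_bps, d):
        """Single-hop queueing delay of equation (2): D = (alpha*b/r) * (1 - C_out/(d*C_in))."""
        return (alpha * b_bits / r_bps) * (1.0 - c_out_bps / (d * c_in_bps))

    # G.729 over IP/UDP/RTP: 60-byte packets every 20 ms -> r = 24 Kbps, b = one packet.
    D = example2_delay(alpha=0.5, b_bits=60 * 8, r_bps=24_000,
                       c_out_bps=155_000_000, c_in_bps=2_500_000_000, d=10)
    print(f"per-hop delay ~ {D * 1e3:.1f} ms")   # ~9.9 ms, i.e. about 10 ms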

We now take a closer look at this result. Estimates of acceptable queueing delay in the backbone for voice traffic typically vary in the range of 10-40 ms (see for example [3]). In the remainder of this document the backbone queueing delay budget will be assumed (somewhat arbitrarily) to be 30 ms. As will be clear from the text of the document, this choice is not crucial to the discussion and conclusions presented here. In a 10-hop backbone this implies a 3 ms queueing delay budget per backbone hop. This means that in our example the per-hop delay budget was exceeded by a factor of about 3 by ideally shaped traffic at 50% link utilization. Moreover, it is easy to see that this delay can be recreated at every backbone hop: suppose the red flow in Example 2 passes a chain configuration of N OC3 hops in the backbone, where at each hop the same amount of blue traffic enters from 10 OC48 links, merges with the red flow for one hop, and leaves at the next hop, while another batch of blue traffic joins for the next hop. This is illustrated in Fig. 5, where the red flow traverses 4 hops, while each set of blue flows (only 4 of which are shown at each hop) joins the red flow for exactly one hop before exiting at the next hop. This means that in Example 2, to meet the delay budget over 10 hops, the utilization must not exceed roughly 15%. Note that this result does not take into account the possibility of interference from a data MTU at all.

There are two points to be made here. One is that the scenario described in Example 2, where many fast OC48 links intersect with a lower speed link such as OC3 at a single hop, is not at all as uncommon as it may seem. One place where it can occur is at the backbone edge. However, as DWDM is taking off and large fiber bundles are pulled across the country, a core router with many OC48 interfaces on a single fiber and some lower-speed interfaces corresponding to today's backbone infrastructure appears quite likely. Furthermore, due to load-balancing techniques it is quite likely that traffic sharing an OC3 interface may arrive from several parallel OC48 interfaces.

The second point is that this example is NOT specific to the FIFO implementation: for packets of the same size belonging to identical-rate flows, the scenario described in Example 2 will hold for any scheduler, since under all circumstances, if a single packet of each flow arrives within a short period of time, it is inevitable that some packet will have to wait until every other packet gets transmitted.

Figure 5. Reproducing the same delay for the red flow at several hops.
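The rough 15% figure derived above follows from solving equation (2) for the largest utilization that still fits within the per-hop budget. A small sketch, illustrative only and not part of the paper, using the same Example 2 parameters and the 3 ms per-hop budget:

    def max_utilization(per_hop_budget_s, b_bits, r_bps, c_out_bps, c_in_bps, d):
        """Largest alpha for which equation (2) stays within the per-hop delay budget."""
        return per_hop_budget_s / ((b_bits / r_bps) * (1.0 - c_out_bps / (d * c_in_bps)))

    alpha = max_utilization(0.003, b_bits=480, r_bps=24_000,
                            c_out_bps=155_000_000, c_in_bps=2_500_000_000, d=10)
    print(f"priority utilization must not exceed ~{alpha * 100:.0f}%")   # ~15%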

The difference between a priority FIFO and more sophisticated schedulers becomes essential in the case when flows have different traffic characteristics, i.e. different rates, as in Example 1, or different packet or burst sizes. This implies a side observation:

Observation 3: If all voice flows are individually shaped at the network ingress, 50% utilization is too high to ensure meeting the voice budget over several hops in the worst case, regardless of the scheduling discipline involved. This is true even in the absence of competing lower priority traffic.

Recall now that all the numeric computations so far were done under the assumption that all flows at every hop were ideally shaped identical G.729 encoded voice flows, which implied that a single packet could be forced to wait at most by one 60 byte packet of every other flow. It is easy to see from equation (2) that if we allow traffic with a higher burst-to-rate ratio to share the link with our red voice flow in Example 2, then, given the same utilization factor, the delay experienced by some packets of the voice flow grows linearly with the average burst-to-rate ratio. This means, in particular, that if the average burst-to-rate ratio of blue traffic in Example 2 is increased by a factor of K compared to that of G.729, then in order to obtain the same 10 ms delay bound we would need to further decrease the priority traffic utilization by the factor K.

These considerations lead to another, somewhat counter-intuitive, side observation: unless header compression is implemented in the backbone, reducing the bit-rate of a voice encoder may not necessarily allow increasing the voice utilization factor in the backbone. To see this, consider a lower-rate G.723 encoding, which generates a 48 byte payload every 60 ms, yielding a lower bit rate of 6.4 Kbps. Adding the 40 byte IP/UDP/RTP header, this yields a packet size L = 88 bytes and a bit rate of 11.7 Kbps. It is easy to see that substituting these numbers into equation (2) with the same parameters as for G.729 gives a per-hop delay of approximately 29 ms instead of 10 ms for G.729! Hence, to meet the same delay budget in the same configuration with G.723 flows, one would need to further reduce the allowed utilization factor by almost a factor of 3! This is explained easily by observing that lower-level protocol headers increase the ratio of the packet size (which in this case is the burst parameter of the leaky bucket) to the rate, and hence yield a larger delay by equation (2) (see also the short sketch below).

The above considerations are summarized by the following observation:

Observation 4: If flows with different traffic characteristics are allowed to share the strict priority FIFO, the worst case delay of a given priority flow becomes a function of the packet size and burst characteristics of the other flows sharing the FIFO. For a given utilization factor of priority traffic, the worst case delay of a packet of a priority flow is proportional to the largest burst-to-rate ratio among all other flows sharing the priority FIFO.

Observation 4 appears to lead to the conclusion that bursty traffic should simply not be allowed to share the priority FIFO with any delay-sensitive traffic like voice.
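The sketch referenced above is given here (illustrative only, not part of the paper): equation (2) evaluated with the two codecs' burst-to-rate ratios, which, for one-packet bursts, equal the packetization intervals.

    def per_hop_delay(alpha, payload_bytes, interval_s, c_out_bps, c_in_bps, d, header_bytes=40):
        """Equation (2) for a codec emitting payload_bytes every interval_s, plus the IP/UDP/RTP header."""
        b_bits = (payload_bytes + header_bytes) * 8
        r_bps = b_bits / interval_s                 # with b equal to one packet, b/r = interval_s
        return (alpha * b_bits / r_bps) * (1.0 - c_out_bps / (d * c_in_bps))

    common = dict(alpha=0.5, c_out_bps=155_000_000, c_in_bps=2_500_000_000, d=10)
    print(f"G.729: {per_hop_delay(payload_bytes=20, interval_s=0.020, **common) * 1e3:.1f} ms")  # ~10 ms
    print(f"G.723: {per_hop_delay(payload_bytes=48, interval_s=0.060, **common) * 1e3:.1f} ms")  # ~30 ms

The longer packetization interval of G.723 (60 ms vs 20 ms) roughly triples the burst-to-rate ratio of the shaped flow, which is exactly why the lower-bit-rate codec ends up with the larger worst-case delay.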
There are three difficulties in enforcing such a policy. The first is that a large value of the leaky bucket burst parameter may not necessarily reflect that the flow is bursty in the common understanding of the word.

A flow which is ideally shaped, and therefore sends each packet at its ideal inter-packet interval, may still have a large leaky bucket burst parameter if its packets are sufficiently large. The second difficulty has to do with the fact that those bursty flows may require strict delay guarantees themselves, so purging them from the priority FIFO then requires the implementation of a scheduler capable of providing the required service guarantees to other traffic with potentially widely variable traffic characteristics, in addition to providing the required QoS for voice. Finally, even if the use of the priority FIFO is restricted to identical voice flows ideally shaped at the network ingress, as traffic passes through a number of hops, burstiness can grow from hop to hop. This accumulated burstiness may cause excessive delays, as discussed in the next section.

To see how burstiness can accumulate in principle, consider the case when a voice packet with an ideal inter-packet interval of 20 ms is delayed by 20 ms over some hop, while the next packet is not delayed at all. This causes accumulation of a 2 packet burst. Similarly, if the delay at the first and second hops was only 10 ms, a 2 packet burst might accumulate by the third hop. Equation (2) implies that if, instead of a one packet burst per flow, one can see 2-3 packet bursts per flow at some hop, then the delay at that hop may increase by a factor of 2-3 compared to the initial hop. Considering Example 2 again, if we allow accumulation of 2-3 packets per flow over a few hops (which is quite realistic in practice), then we will potentially get 20-30 ms of queueing delay for G.729, and roughly three times that for G.723, at a single hop downstream. The next section concentrates on the issue of burst accumulation and shows that the end-to-end delay can become quite large as a result of such accumulation.

2.3 Increase in Delay Due To Burst Accumulation

The goal of this section is to construct an example in which a single packet experiences a substantial increase in delay from hop to hop. It will be shown that the delay can grow exponentially with the number of hops in the network, and that it strongly depends on the utilization of the priority traffic. For the purposes of this section it will be assumed that all links in the network are of the same speed, and that all priority flows have the same rates and burst characteristics. More specifically, it will be assumed that all flows are ideally shaped to the same leaky bucket (r, b) at the entry to the network. As in Example 2, the expression for delay will first be derived analytically, and then numerical values will be examined.

The examples below use the notion of the degree of a hop, which is defined as follows:

Definition: A router interface is called an h-degree hop if there is at least one flow for which this hop is exactly the h-th hop in its route, and for all other flows this hop is no more than the h-th hop in their route. A node is said to be of degree h if all of its interfaces are of degree not exceeding h, and at least one of its interfaces is of degree h.

The first example constructs a network in which all links have the same speed C and each node has the same number d of input interfaces. In this network, d nodes of degree j are connected to a single node of degree j+1, for all 1 <= j < h, for some h.

Example 3. Consider a first-degree node with d inputs of speed C. Let n be the maximum number of flows merging on a single output link, also of capacity C. Assume that all flows are identical, conforming to leaky buckets (r, b). Assume further that the utilization of the output link is α, so that n is given by

    n = αC/r    (3)

Assume that these flows come from the d input links, so that each input link carries n/d individually shaped flows. Let's color all flows coming from d-1 of the input interfaces blue, and all flows coming from the d-th interface red. Let

    t1 = nb/(dC)    (4)

be the time needed to deliver a burst consisting of b bits per blue flow on one of the input interfaces. Let such bursts of size

    B_0 = nb/d    (5)

arrive on all blue input links in the interval (0, t1), and let the first bit of the red burst arrive at time t1. The arrival pattern is illustrated in Fig. 6. Then, allowing for the transmission of one low-priority MTU starting at time zero and taking (4) into account, the first bit of the red flow will see the queue

    Q = (d-1)B_0 + MTU - C t1 = (d-2)B_0 + MTU    (6)

Therefore, the first bit of the red flow will suffer a delay of

    D_1 = ((d-2)B_0 + MTU)/C    (7)

Suppose that in the interval (t1, t1 + D_1) a total of D_1 R red bits has arrived (footnote 7), where

    R = nr/d    (9)

is the combined rate of all red flows. Suppose further that no more blue bits arrive after time t1. Then a burst of

    B_1 = B_0 + D_1 R    (8)

red bits would leave the first-degree node, following a burst of size (d-1)nb/d of blue bits.

Footnote 7: This is consistent with the amount of traffic that could arrive in this interval, given that each individual flow is constrained by a leaky bucket (r, b). However, this assumes a small enough packet size, so that discretization effects can be ignored for the purposes of this example.

Now, a second-degree node is constructed by taking d first-degree nodes and connecting them to the second-degree node, as shown in Fig. 7. Each of the input links to the second-degree node will carry a large blue burst followed by B_1 red bits. Let's synchronize the time of arrival of the first bit of red traffic on each of the first d-1 input interfaces to the second-degree node to occur at the same time τ. Choose the length of the links (or the timing of arrival of traffic to the first-degree node corresponding to the d-th input link to the second-degree node) such that the arrival of the first red bit on the d-th input link coincides with the arrival of the last red bit on each of the first d-1 interfaces (that is, the first red bit arrives on the d-th input to the second-degree node at time τ + B_1/C). Now let all the red flows coming from all d inputs of the second-degree node share a single output, while all the blue flows exit the network at this node via some set of output links different from the output link shared by all red flows.

Now, to continue the iterative construction, re-color blue all red traffic coming from the first d-1 input links to the second-degree node (this traffic is shown in a block red pattern in Fig. 7; traffic being recolored is indicated by a large blue arrow on the left), and leave the red traffic coming from the d-th input link red. It is easy to see that the arrival pattern of the red bits and the newly re-colored blue bits (formerly red bits from the first d-1 input interfaces) is identical to that shown in Fig. 6, with the exception that the burst size B_0 = nb/d at the first-degree node now needs to be replaced by the burst size B_1 at the second-degree node. Repeating the derivation of (7) without modification and using (8), we get

    D_2 = ((d-2)B_1 + MTU)/C = ((d-2)(B_0 + D_1 R) + MTU)/C

which can be re-written, using (7), as

    D_2 = (1/C)((d-2)B_0 + MTU)(1 + (d-2)R/C)    (10)

Figure 6. Arrivals and departures at the first-degree node in Example 3.

Assuming again that no more bits arrive on the first d-1 input links to the second-degree node, while allowing D_2 R red bits to catch up with the rest of the red bits on the d-th input link, as was done at the first-degree hop, we get

    B_2 = B_1 + D_2 R = B_0 + D_1 R + D_2 R = B_0 + (R/C)((d-2)B_0 + MTU) + (R/C)((d-2)B_0 + MTU)(1 + (d-2)R/C)

which in turn can be rewritten as

    B_2 = B_0 + (R/C)((d-2)B_0 + MTU)(2 + (d-2)R/C)    (11)

Now this construction is repeated h times, as shown in Fig. 7: the node of degree k is connected to the outputs of d nodes of degree k-1. It will now be shown by induction that for all 1 <= j <= h

    B_j = B_0 + (R/C)((d-2)B_0 + MTU) * sum_{k=1}^{j} (1 + (d-2)R/C)^(k-1)    (12)

and

    D_j = (1/C)((d-2)B_0 + MTU)(1 + (d-2)R/C)^(j-1)    (13)

Figure 7. Construction of the merging network in Example 3 (here d = 3). Note that part of the red traffic entering the hop (denoted by the block pattern) was actually red when it exited the previous hop. It is recolored using the block pattern to distinguish it from the red traffic on the only link that will be used at the next-degree hop. All this block-patterned traffic will leave the network at the next hop. It needs to be recolored blue before it enters the next-degree network only for the purpose of enabling the iterative construction. That is, the traffic seen in the (red) block pattern at the exit from the network of degree k is seen as blue at the entrance to the hop of degree k+1.

The base case for the inductive proof of (12) and (13) is given by equations (10)-(11). To show the inductive step, suppose that (12) and (13) hold for all k <= j.

Then, repeating the construction for the node of degree j+1 and using the inductive hypothesis, we get the delay at this hop of degree j+1 as

    D_{j+1} = (1/C)((d-2)B_j + MTU) = (1/C)[(d-2)(B_0 + (R/C)((d-2)B_0 + MTU) * sum_{k=1}^{j} (1 + (d-2)R/C)^(k-1)) + MTU]

which can be rewritten as

    D_{j+1} = (1/C)((d-2)B_0 + MTU)(1 + (d-2)R/C)^j

which proves the inductive step for (13). Further, by construction, the red burst at the output of the hop of degree j+1 is equal to

    B_{j+1} = B_j + D_{j+1} R = B_0 + (R/C)((d-2)B_0 + MTU) * sum_{k=1}^{j} (1 + (d-2)R/C)^(k-1) + (R/C)((d-2)B_0 + MTU)(1 + (d-2)R/C)^j

which in turn can be rewritten as

    B_{j+1} = B_0 + (R/C)((d-2)B_0 + MTU) * sum_{k=1}^{j+1} (1 + (d-2)R/C)^(k-1)

which completes the inductive step for (12). Finally, substituting (3), (5) and (9) into (13), the delay at the hop of degree j can be rewritten as

    D_j = (α(b/r)(d-2)/d + MTU/C)(1 + α(d-2)/d)^(j-1)    (14)

Now, since each flow traverses a succession of hops with degree increasing from 1 to H, the total queueing delay experienced by the first red packet leaving the hop of degree H in the network of Example 3 is given by

    D = sum_{j=1}^{H} D_j = (α(b/r)(d-2)/d + MTU/C) * ((1 + α(d-2)/d)^H - 1) / (α(d-2)/d)    (15)

The next example uses the network constructed in Example 3 to create a network in which a still higher end-to-end delay can be achieved.

Example 4. Consider a path containing H backbone router hops (footnote 8), and consider a packet traversing this path. Let's color the path and the packet black (see Fig. 8, in which the route of the black flow is shown by a triple black arrow). Let each hop have d-1 input links in addition to the black input link. Consider now (d-1)H merging (H-1)-hop networks constructed as in Example 3, and attach the output of the (H-1)-st hop of each of these merging networks to one of the inputs of one of the black router hops. Let the (recolored) blue traffic of Example 3 coming from the (H-1)-st hop of each of the merging networks (see Fig. 7) immediately depart the network at the first black hop, while the red traffic of Example 3 on that link merges on the output link from the black router with the black flow, as shown in Fig. 8. Assume further that the red traffic goes exactly one black hop and departs at the next one. Hence, by construction, the black flow shares each of its hops with red traffic, the latter traversing its last, H-th, hop. It is easy to see that, just as in Example 3, we can choose the timing of the arrival of the first black packet at each black hop right after the arrival of d-1 red bursts of size B_{H-1} (see Fig. 6 again).

Footnote 8: The routers at which a particular aggregate enters or leaves the network are not included in the backbone hop count.
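Formulas (14) and (15) are easy to evaluate, and the exponential growth of the per-hop delay D_j with the hop degree j can also be cross-checked against the hop-by-hop recursion of Example 3. The sketch below is illustrative only; the parameter values (G.729-like flows, α = 0.2, d = 10, a 155 Mbps link speed, a 1500-byte MTU and H = 8) are assumptions chosen for illustration, not values taken from the paper.

    def example3_delays(alpha, b_bits, r_bps, C_bps, d, mtu_bits, H):
        """Per-hop delays D_j of equation (14) and their total, equation (15)."""
        base = alpha * (b_bits / r_bps) * (d - 2) / d + mtu_bits / C_bps
        growth = 1.0 + alpha * (d - 2) / d
        delays = [base * growth ** (j - 1) for j in range(1, H + 1)]
        return delays, sum(delays)

    def example3_recursion(alpha, b_bits, r_bps, C_bps, d, mtu_bits, H):
        """Same delays obtained by iterating the hop-by-hop construction, equations (3), (5)-(9)."""
        n = alpha * C_bps / r_bps          # flows on the output link, equation (3)
        B = n * b_bits / d                 # initial per-input burst B_0, equation (5)
        R = n * r_bps / d                  # combined red rate, equation (9)
        delays = []
        for _ in range(H):
            D = ((d - 2) * B + mtu_bits) / C_bps   # equation (7) applied at this degree
            B = B + D * R                          # the red burst grows, equation (8)
            delays.append(D)
        return delays, sum(delays)

    args = dict(alpha=0.2, b_bits=480, r_bps=24_000, C_bps=155_000_000,
                d=10, mtu_bits=1500 * 8, H=8)
    closed, total_closed = example3_delays(**args)
    recur, total_recur = example3_recursion(**args)
    print(f"D_1 = {closed[0] * 1e3:.2f} ms, D_8 = {closed[-1] * 1e3:.2f} ms, "
          f"total over 8 hops = {total_closed * 1e3:.1f} ms")
    print("recursion agrees with closed form:", abs(total_closed - total_recur) < 1e-9)

With these illustrative numbers the per-hop delay roughly triples between the first and the eighth hop, consistent with the exponential growth of D_j discussed above.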


Configuring QoS CHAPTER CHAPTER 34 This chapter describes how to use different methods to configure quality of service (QoS) on the Catalyst 3750 Metro switch. With QoS, you can provide preferential treatment to certain types

More information

Configuring QoS. Finding Feature Information. Prerequisites for QoS

Configuring QoS. Finding Feature Information. Prerequisites for QoS Finding Feature Information, page 1 Prerequisites for QoS, page 1 Restrictions for QoS, page 3 Information About QoS, page 4 How to Configure QoS, page 28 Monitoring Standard QoS, page 80 Configuration

More information

Quality of Service (QoS)

Quality of Service (QoS) Quality of Service (QoS) EE 122: Intro to Communication Networks Fall 2007 (WF 4-5:30 in Cory 277) Vern Paxson TAs: Lisa Fowler, Daniel Killebrew & Jorge Ortiz http://inst.eecs.berkeley.edu/~ee122/ Materials

More information

Prioritizing Services

Prioritizing Services CHAPTER 8 Voice, video, and data applications have differing quality of service needs. Voice applications, for example, require a small but guaranteed amount of bandwidth, are less tolerant of packet delay

More information

Multimedia Networking. Network Support for Multimedia Applications

Multimedia Networking. Network Support for Multimedia Applications Multimedia Networking Network Support for Multimedia Applications Protocols for Real Time Interactive Applications Differentiated Services (DiffServ) Per Connection Quality of Services Guarantees (IntServ)

More information

Latency on a Switched Ethernet Network

Latency on a Switched Ethernet Network FAQ 07/2014 Latency on a Switched Ethernet Network RUGGEDCOM Ethernet Switches & Routers http://support.automation.siemens.com/ww/view/en/94772587 This entry is from the Siemens Industry Online Support.

More information

General comments on candidates' performance

General comments on candidates' performance BCS THE CHARTERED INSTITUTE FOR IT BCS Higher Education Qualifications BCS Level 5 Diploma in IT April 2018 Sitting EXAMINERS' REPORT Computer Networks General comments on candidates' performance For the

More information

Announcements. Quality of Service (QoS) Goals of Today s Lecture. Scheduling. Link Scheduling: FIFO. Link Scheduling: Strict Priority

Announcements. Quality of Service (QoS) Goals of Today s Lecture. Scheduling. Link Scheduling: FIFO. Link Scheduling: Strict Priority Announcements Quality of Service (QoS) Next week I will give the same lecture on both Wednesday (usual ) and next Monday Same and room Reminder, no lecture next Friday due to holiday EE : Intro to Communication

More information

Master Course Computer Networks IN2097

Master Course Computer Networks IN2097 Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Master

More information

Before configuring standard QoS, you must have a thorough understanding of these items:

Before configuring standard QoS, you must have a thorough understanding of these items: Finding Feature Information, page 1 Prerequisites for QoS, page 1 QoS Components, page 2 QoS Terminology, page 3 Information About QoS, page 3 Restrictions for QoS on Wired Targets, page 41 Restrictions

More information

Master Course Computer Networks IN2097

Master Course Computer Networks IN2097 Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Master Course Computer Networks IN2097 Prof. Dr.-Ing. Georg Carle Christian Grothoff, Ph.D. Chair for

More information

Configuring QoS CHAPTER

Configuring QoS CHAPTER CHAPTER 37 This chapter describes how to configure quality of service (QoS) by using automatic QoS (auto-qos) commands or by using standard QoS commands on the Catalyst 3750-E or 3560-E switch. With QoS,

More information

Episode 5. Scheduling and Traffic Management

Episode 5. Scheduling and Traffic Management Episode 5. Scheduling and Traffic Management Part 2 Baochun Li Department of Electrical and Computer Engineering University of Toronto Keshav Chapter 9.1, 9.2, 9.3, 9.4, 9.5.1, 13.3.4 ECE 1771: Quality

More information

The effect of per-input shapers on the delay bound in networks with aggregate scheduling

The effect of per-input shapers on the delay bound in networks with aggregate scheduling The effect of per-input shapers on the delay bound in networks with aggregate scheduling Evgueni Ossipov and Gunnar Karlsson Department of Microelectronics and Information Technology KTH, Royal Institute

More information

Sections Describing Standard Software Features

Sections Describing Standard Software Features 30 CHAPTER This chapter describes how to configure quality of service (QoS) by using automatic-qos (auto-qos) commands or by using standard QoS commands. With QoS, you can give preferential treatment to

More information

Scheduling Algorithms to Minimize Session Delays

Scheduling Algorithms to Minimize Session Delays Scheduling Algorithms to Minimize Session Delays Nandita Dukkipati and David Gutierrez A Motivation I INTRODUCTION TCP flows constitute the majority of the traffic volume in the Internet today Most of

More information

Latency on a Switched Ethernet Network

Latency on a Switched Ethernet Network Page 1 of 6 1 Introduction This document serves to explain the sources of latency on a switched Ethernet network and describe how to calculate cumulative latency as well as provide some real world examples.

More information

Configuring QoS CHAPTER

Configuring QoS CHAPTER CHAPTER 36 This chapter describes how to configure quality of service (QoS) by using automatic QoS (auto-qos) commands or by using standard QoS commands on the Catalyst 3750 switch. With QoS, you can provide

More information

MQC Hierarchical Queuing with 3 Level Scheduler

MQC Hierarchical Queuing with 3 Level Scheduler MQC Hierarchical Queuing with 3 Level Scheduler The MQC Hierarchical Queuing with 3 Level Scheduler feature provides a flexible packet scheduling and queuing system in which you can specify how excess

More information

DiffServ Architecture: Impact of scheduling on QoS

DiffServ Architecture: Impact of scheduling on QoS DiffServ Architecture: Impact of scheduling on QoS Introduction: With the rapid growth of the Internet, customers are demanding multimedia applications such as telephony and video on demand, to be available

More information

Implementation of Differentiated Services over ATM

Implementation of Differentiated Services over ATM Implementation of Differentiated s over ATM Torsten Braun, Arik Dasen, and Matthias Scheidegger; Institute of Computer Science and Applied Mathematics, University of Berne, Switzerland Karl Jonas and Heinrich

More information

of-service Support on the Internet

of-service Support on the Internet Quality-of of-service Support on the Internet Dept. of Computer Science, University of Rochester 2008-11-24 CSC 257/457 - Fall 2008 1 Quality of Service Support Some Internet applications (i.e. multimedia)

More information

Traffic Engineering 2: Layer 2 Prioritisation - CoS (Class of Service)

Traffic Engineering 2: Layer 2 Prioritisation - CoS (Class of Service) Published on Jisc community (https://community.jisc.ac.uk) Home > Network and technology service docs > Vscene > Technical details > Products > H.323 > Guide to reliable H.323 campus networks > Traffic

More information

1188 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 13, NO. 5, OCTOBER Wei Sun, Student Member, IEEE, and Kang G. Shin, Fellow, IEEE

1188 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 13, NO. 5, OCTOBER Wei Sun, Student Member, IEEE, and Kang G. Shin, Fellow, IEEE 1188 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 13, NO. 5, OCTOBER 2005 End-to-End Delay Bounds for Traffic Aggregates Under Guaranteed-Rate Scheduling Algorithms Wei Sun, Student Member, IEEE, and Kang

More information

TDDD82 Secure Mobile Systems Lecture 6: Quality of Service

TDDD82 Secure Mobile Systems Lecture 6: Quality of Service TDDD82 Secure Mobile Systems Lecture 6: Quality of Service Mikael Asplund Real-time Systems Laboratory Department of Computer and Information Science Linköping University Based on slides by Simin Nadjm-Tehrani

More information

Kommunikationssysteme [KS]

Kommunikationssysteme [KS] Kommunikationssysteme [KS] Dr.-Ing. Falko Dressler Computer Networks and Communication Systems Department of Computer Sciences University of Erlangen-Nürnberg http://www7.informatik.uni-erlangen.de/~dressler/

More information

PERFORMANCE OF PREMIUM SERVICE IN QOS IP NETWORK

PERFORMANCE OF PREMIUM SERVICE IN QOS IP NETWORK PERFORMANCE OF PREMIUM SERVICE IN QOS IP NETWORK Wojciech Burakowski Monika Fudala Halina Tarasiuk Institute of Telecommunications, Warsaw University of Technology ul. Nowowiejska 15/19, 00-665 Warsaw,

More information

Differentiated Services

Differentiated Services Diff-Serv 1 Differentiated Services QoS Problem Diffserv Architecture Per hop behaviors Diff-Serv 2 Problem: QoS Need a mechanism for QoS in the Internet Issues to be resolved: Indication of desired service

More information

DiffServ over MPLS: Tuning QOS parameters for Converged Traffic using Linux Traffic Control

DiffServ over MPLS: Tuning QOS parameters for Converged Traffic using Linux Traffic Control 1 DiffServ over MPLS: Tuning QOS parameters for Converged Traffic using Linux Traffic Control Sundeep.B.Singh, Girish.P.Saraph, Chetan.P.Bhadricha and Girish.K.Dadhich Indian Institute of Technology Bombay,

More information

Lecture 5: Performance Analysis I

Lecture 5: Performance Analysis I CS 6323 : Modeling and Inference Lecture 5: Performance Analysis I Prof. Gregory Provan Department of Computer Science University College Cork Slides: Based on M. Yin (Performability Analysis) Overview

More information

Implementation of a leaky bucket module for simulations in NS-3

Implementation of a leaky bucket module for simulations in NS-3 Implementation of a leaky bucket module for simulations in NS-3 P. Baltzis 2, C. Bouras 1,2, K. Stamos 1,2,3, G. Zaoudis 1,2 1 Computer Technology Institute and Press Diophantus Patra, Greece 2 Computer

More information

Episode 5. Scheduling and Traffic Management

Episode 5. Scheduling and Traffic Management Episode 5. Scheduling and Traffic Management Part 3 Baochun Li Department of Electrical and Computer Engineering University of Toronto Outline What is scheduling? Why do we need it? Requirements of a scheduling

More information

Lesson 14: QoS in IP Networks: IntServ and DiffServ

Lesson 14: QoS in IP Networks: IntServ and DiffServ Slide supporting material Lesson 14: QoS in IP Networks: IntServ and DiffServ Giovanni Giambene Queuing Theory and Telecommunications: Networks and Applications 2nd edition, Springer All rights reserved

More information

Fundamental Trade-Offs in Aggregate Packet Scheduling

Fundamental Trade-Offs in Aggregate Packet Scheduling Fundamental Trade-Offs in Aggregate Packet Scheduling Zhenhai Duan, Member, IEEE, Zhi-Li Zhang, Member, IEEE, Yiwei Thomas Hou, Senior Member, IEEE 1 Abstract In this paper we investigate the fundamental

More information

Analysis of the interoperation of the Integrated Services and Differentiated Services Architectures

Analysis of the interoperation of the Integrated Services and Differentiated Services Architectures Analysis of the interoperation of the Integrated Services and Differentiated Services Architectures M. Fabiano P.S. and M.A. R. Dantas Departamento da Ciência da Computação, Universidade de Brasília, 70.910-970

More information

Overview Computer Networking What is QoS? Queuing discipline and scheduling. Traffic Enforcement. Integrated services

Overview Computer Networking What is QoS? Queuing discipline and scheduling. Traffic Enforcement. Integrated services Overview 15-441 15-441 Computer Networking 15-641 Lecture 19 Queue Management and Quality of Service Peter Steenkiste Fall 2016 www.cs.cmu.edu/~prs/15-441-f16 What is QoS? Queuing discipline and scheduling

More information

Week 7: Traffic Models and QoS

Week 7: Traffic Models and QoS Week 7: Traffic Models and QoS Acknowledgement: Some slides are adapted from Computer Networking: A Top Down Approach Featuring the Internet, 2 nd edition, J.F Kurose and K.W. Ross All Rights Reserved,

More information

Telematics 2. Chapter 3 Quality of Service in the Internet. (Acknowledgement: These slides have been compiled from Kurose & Ross, and other sources)

Telematics 2. Chapter 3 Quality of Service in the Internet. (Acknowledgement: These slides have been compiled from Kurose & Ross, and other sources) Telematics 2 Chapter 3 Quality of Service in the Internet (Acknowledgement: These slides have been compiled from Kurose & Ross, and other sources) Telematics 2 (WS 14/15): 03 Internet QoS 1 Improving QOS

More information

Lecture 14: Performance Architecture

Lecture 14: Performance Architecture Lecture 14: Performance Architecture Prof. Shervin Shirmohammadi SITE, University of Ottawa Prof. Shervin Shirmohammadi CEG 4185 14-1 Background Performance: levels for capacity, delay, and RMA. Performance

More information

QoS for Real Time Applications over Next Generation Data Networks

QoS for Real Time Applications over Next Generation Data Networks QoS for Real Time Applications over Next Generation Data Networks Final Project Presentation December 8, 2000 http://www.engr.udayton.edu/faculty/matiquzz/pres/qos-final.pdf University of Dayton Mohammed

More information

Sections Describing Standard Software Features

Sections Describing Standard Software Features 27 CHAPTER This chapter describes how to configure quality of service (QoS) by using automatic-qos (auto-qos) commands or by using standard QoS commands. With QoS, you can give preferential treatment to

More information

EE 122: Differentiated Services

EE 122: Differentiated Services What is the Problem? EE 122: Differentiated Services Ion Stoica Nov 18, 2002 Goal: provide support for wide variety of applications: - Interactive TV, IP telephony, on-line gamming (distributed simulations),

More information

Quality of Service in the Internet

Quality of Service in the Internet Quality of Service in the Internet Problem today: IP is packet switched, therefore no guarantees on a transmission is given (throughput, transmission delay, ): the Internet transmits data Best Effort But:

More information

DiffServ over MPLS: Tuning QOS parameters for Converged Traffic using Linux Traffic Control

DiffServ over MPLS: Tuning QOS parameters for Converged Traffic using Linux Traffic Control 1 DiffServ over MPLS: Tuning QOS parameters for Converged Traffic using Linux Traffic Control Sundeep.B.Singh and Girish.P.Saraph Indian Institute of Technology Bombay, Powai, Mumbai-400076, India Abstract

More information

Satellite-Based Cellular Backhaul in the Era of LTE

Satellite-Based Cellular Backhaul in the Era of LTE Satellite-Based Cellular Backhaul in the Era of LTE Introduction 3 Essential Technologies for 3G/LTE Backhauling over Satellite 6 Gilat s Solution SkyEdge II-c Capricorn 7 Why Ultra-fast TDMA is the Only

More information

Understanding SROS Priority Queuing, Class-Based WFQ, and QoS Maps

Understanding SROS Priority Queuing, Class-Based WFQ, and QoS Maps Configuration Guide 5991-2121 May 2006 Understanding SROS Priority Queuing, Class-Based WFQ, and QoS Maps This Configuration Guide explains the concepts behind configuring your Secure Router Operating

More information

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 Question 344 Points 444 Points Score 1 10 10 2 10 10 3 20 20 4 20 10 5 20 20 6 20 10 7-20 Total: 100 100 Instructions: 1. Question

More information

Internet Services & Protocols. Quality of Service Architecture

Internet Services & Protocols. Quality of Service Architecture Department of Computer Science Institute for System Architecture, Chair for Computer Networks Internet Services & Protocols Quality of Service Architecture Dr.-Ing. Stephan Groß Room: INF 3099 E-Mail:

More information

CHAPTER 5 PROPAGATION DELAY

CHAPTER 5 PROPAGATION DELAY 98 CHAPTER 5 PROPAGATION DELAY Underwater wireless sensor networks deployed of sensor nodes with sensing, forwarding and processing abilities that operate in underwater. In this environment brought challenges,

More information

Resource Reservation Protocol

Resource Reservation Protocol 48 CHAPTER Chapter Goals Explain the difference between and routing protocols. Name the three traffic types supported by. Understand s different filter and style types. Explain the purpose of tunneling.

More information

The Encoding Complexity of Network Coding

The Encoding Complexity of Network Coding The Encoding Complexity of Network Coding Michael Langberg Alexander Sprintson Jehoshua Bruck California Institute of Technology Email: mikel,spalex,bruck @caltech.edu Abstract In the multicast network

More information

Quality of Service in the Internet

Quality of Service in the Internet Quality of Service in the Internet Problem today: IP is packet switched, therefore no guarantees on a transmission is given (throughput, transmission delay, ): the Internet transmits data Best Effort But:

More information

Topic 4b: QoS Principles. Chapter 9 Multimedia Networking. Computer Networking: A Top Down Approach

Topic 4b: QoS Principles. Chapter 9 Multimedia Networking. Computer Networking: A Top Down Approach Topic 4b: QoS Principles Chapter 9 Computer Networking: A Top Down Approach 7 th edition Jim Kurose, Keith Ross Pearson/Addison Wesley April 2016 9-1 Providing multiple classes of service thus far: making

More information

H3C S9500 QoS Technology White Paper

H3C S9500 QoS Technology White Paper H3C Key words: QoS, quality of service Abstract: The Ethernet technology is widely applied currently. At present, Ethernet is the leading technology in various independent local area networks (LANs), and

More information

A Pipelined Memory Management Algorithm for Distributed Shared Memory Switches

A Pipelined Memory Management Algorithm for Distributed Shared Memory Switches A Pipelined Memory Management Algorithm for Distributed Shared Memory Switches Xike Li, Student Member, IEEE, Itamar Elhanany, Senior Member, IEEE* Abstract The distributed shared memory (DSM) packet switching

More information

WHITE PAPER. Latency & Jitter WHITE PAPER OVERVIEW

WHITE PAPER. Latency & Jitter WHITE PAPER OVERVIEW Latency & Jitter In Networking Performance Evaluation OVERVIEW Latency and jitter are two key measurement parameters when evaluating and benchmarking the performance of a network, system or device. Different

More information

A Path Decomposition Approach for Computing Blocking Probabilities in Wavelength-Routing Networks

A Path Decomposition Approach for Computing Blocking Probabilities in Wavelength-Routing Networks IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 8, NO. 6, DECEMBER 2000 747 A Path Decomposition Approach for Computing Blocking Probabilities in Wavelength-Routing Networks Yuhong Zhu, George N. Rouskas, Member,

More information

Resource Sharing for QoS in Agile All Photonic Networks

Resource Sharing for QoS in Agile All Photonic Networks Resource Sharing for QoS in Agile All Photonic Networks Anton Vinokurov, Xiao Liu, Lorne G Mason Department of Electrical and Computer Engineering, McGill University, Montreal, Canada, H3A 2A7 E-mail:

More information

Scheduling. Scheduling algorithms. Scheduling. Output buffered architecture. QoS scheduling algorithms. QoS-capable router

Scheduling. Scheduling algorithms. Scheduling. Output buffered architecture. QoS scheduling algorithms. QoS-capable router Scheduling algorithms Scheduling Andrea Bianco Telecommunication Network Group firstname.lastname@polito.it http://www.telematica.polito.it/ Scheduling: choose a packet to transmit over a link among all

More information

Computer Networking. Queue Management and Quality of Service (QOS)

Computer Networking. Queue Management and Quality of Service (QOS) Computer Networking Queue Management and Quality of Service (QOS) Outline Previously:TCP flow control Congestion sources and collapse Congestion control basics - Routers 2 Internet Pipes? How should you

More information

Comparing the bandwidth and priority Commands of a QoS Service Policy

Comparing the bandwidth and priority Commands of a QoS Service Policy Comparing the and priority s of a QoS Service Policy Contents Introduction Prerequisites Requirements Components Used Conventions Summary of Differences Configuring the Configuring the priority Which Traffic

More information

Differentiated Service Router Architecture - Classification, Metering and Policing

Differentiated Service Router Architecture - Classification, Metering and Policing Differentiated Service Router Architecture - Classification, Metering and Policing Presenters: Daniel Lin and Frank Akujobi Carleton University, Department of Systems and Computer Engineering 94.581 Advanced

More information

Module objectives. Integrated services. Support for real-time applications. Real-time flows and the current Internet protocols

Module objectives. Integrated services. Support for real-time applications. Real-time flows and the current Internet protocols Integrated services Reading: S. Keshav, An Engineering Approach to Computer Networking, chapters 6, 9 and 4 Module objectives Learn and understand about: Support for real-time applications: network-layer

More information

Comparison of Shaping and Buffering for Video Transmission

Comparison of Shaping and Buffering for Video Transmission Comparison of Shaping and Buffering for Video Transmission György Dán and Viktória Fodor Royal Institute of Technology, Department of Microelectronics and Information Technology P.O.Box Electrum 229, SE-16440

More information

Request for Comments: K. Poduri Bay Networks June 1999

Request for Comments: K. Poduri Bay Networks June 1999 Network Working Group Request for Comments: 2598 Category: Standards Track V. Jacobson K. Nichols Cisco Systems K. Poduri Bay Networks June 1999 An Expedited Forwarding PHB Status of this Memo This document

More information

Internet Engineering Task Force (IETF) December 2014

Internet Engineering Task Force (IETF) December 2014 Internet Engineering Task Force (IETF) Request for Comments: 7417 Category: Experimental ISSN: 2070-1721 G. Karagiannis Huawei Technologies A. Bhargava Cisco Systems, Inc. December 2014 Extensions to Generic

More information

IP Network Emulation

IP Network Emulation Developing and Testing IP Products Under www.packetstorm.com 2017 PacketStorm Communications, Inc. PacketStorm is a trademark of PacketStorm Communications. Other brand and product names mentioned in this

More information

CS519: Computer Networks. Lecture 5, Part 5: Mar 31, 2004 Queuing and QoS

CS519: Computer Networks. Lecture 5, Part 5: Mar 31, 2004 Queuing and QoS : Computer Networks Lecture 5, Part 5: Mar 31, 2004 Queuing and QoS Ways to deal with congestion Host-centric versus router-centric Reservation-based versus feedback-based Window-based versus rate-based

More information

QoS MIB Implementation

QoS MIB Implementation APPENDIXB This appendix provides information about QoS-based features that are implemented on the Cisco Carrier Routing System line cards and what tables and objects in the QoS MIB support these QoS features.

More information

Quality of Service (QoS)

Quality of Service (QoS) Quality of Service (QoS) The Internet was originally designed for best-effort service without guarantee of predictable performance. Best-effort service is often sufficient for a traffic that is not sensitive

More information

P D1.1 RPR OPNET Model User Guide

P D1.1 RPR OPNET Model User Guide P802.17 D1.1 RPR OPNET Model User Guide Revision Nov7 Yan F. Robichaud Mark Joseph Francisco Changcheng Huang Optical Networks Laboratory Carleton University 7 November 2002 Table Of Contents 0 Overview...1

More information