Toward End-to-End Fairness: A Framework for the Allocation of Multiple Prioritized Resources in Switches and Routers


Yunkai Zhou, Microsoft Corporation, Redmond, WA, USA

Harish Sethu, Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA 19104, USA

Abstract

As flows of traffic traverse a network, they share with other flows a variety of resources such as links, buffers and router CPUs in their path. Fairness is an intuitively desirable property in the allocation of resources in a network shared among flows of traffic from different users. While fairness in bandwidth allocation over a shared link has been extensively studied, overall end-to-end fairness in the use of all the resources in the network is ultimately the desired goal. End-to-end fairness becomes especially critical when fair allocation algorithms are used as a component of the mechanisms used to provide end-to-end quality-of-service guarantees. This paper seeks to answer the question of what is fair when a set of traffic flows share multiple resources in the network with a shared order of preference for the opportunity to use these resources. We present the Principle of Fair Prioritized Resource Allocation, or the FPRA principle, a powerful extension of any of the classic notions of fairness such as max-min fairness, proportional fairness and utility max-min fairness defined over a single resource. We illustrate this principle by applying it to a system model with a buffer and an output link shared among competing flows of traffic. To complete our illustration of the applicability of the FPRA principle, we propose a measure of fairness and evaluate representative buffer allocation algorithms based on this measure. Besides buffer allocation, the FPRA principle may also be used in other contexts in data communication networks and operating system design.

Key words: QoS, resource allocation, fair scheduling, buffer management, max-min

Preprint submitted to Elsevier Science, 29 October 2003

1 Introduction

Fairness is an intuitively desirable property in the allocation of resources in a network shared among multiple flows of traffic from different users. Strict fairness in traffic management can improve the isolation between users sharing a network, offer a more predictable performance, and also improve performance by eliminating some bottlenecks. In addition, fair scheduling policies can also be used to guarantee certain quality-of-service requirements such as delay bounds and minimum bandwidths. Several formal notions of fairness have been proposed to address the question of what is fair in the allocation of a single shared resource among multiple requesting entities. These include max-min fairness [1-3], proportional fairness [4] and utility max-min fairness [5]. During the last several years, a variety of algorithms that seek to approximate these formal notions have been proposed and implemented to achieve fair allocation of bandwidth on a shared link [1,2,4-7].

Bandwidth on a link, however, is only one among several kinds of resources shared by multiple flows in a typical network. As flows of traffic traverse a network, they share with other flows a variety of resources such as links, buffers and router CPUs in their path. The allocation policies with respect to each of these resources can have a significant impact on the overall performance and quality of service achieved by flows. Even though fair scheduling of bandwidth over a link has received the most attention, overall end-to-end fairness in the use of all the resources in the network is ultimately the desired goal. For example, in the Internet, the end-to-end congestion control mechanisms, implemented in various versions of the TCP protocol, depend not only on the bandwidth allocation in the network but also on the packet loss rate as the indication of congestion. Therefore, an unfair management of buffers can cause biased packet loss rates for different flows, and thus lead to a failure in providing end-to-end fairness. This failure will result in spite of fair bandwidth allocation policies applied at each transmission link in the entire network. Some researchers, for example, have already noticed that buffer allocation policies have an effect on the overall end-to-end fairness in bandwidth allocation [8-10].

Quality-of-service is often an end-to-end issue, of which end-to-end fairness is a critically important piece. Fair bandwidth allocation algorithms such as Weighted Fair Queueing [1,2] have frequently been used as a component of an overall mechanism that ensures end-to-end delay guarantees. Similarly, mechanisms to achieve end-to-end fairness in the use of all the resources in the network will play a significant role in achieving true end-to-end quality-of-service guarantees.

If the shared set of resources are all of the same type and if the flows do not have a preference toward using one resource over another, formal notions of fairness developed for a single shared resource may be readily extended to the set of multiple resources. For example, as discussed in [11], when N identical links, each of capacity R, are shared among a set of flows, the ideally fair allocation is identical to the ideally fair allocation of a single link of capacity NR. Frequently, however, the shared set of resources are of different types (such as links and buffers), and in addition, the competing flows have a preference toward using one of these resources over another. For example, at a switch or a router, flows prefer to be allocated the output link resource, and only when the output link is not available do they choose to use the buffer resources. Even in the allocation of multiple resources of the same type, a certain resource may be preferred over another, such as when, under certain conditions, terrestrial links are preferred over satellite links. In these instances, the notions of fairness developed for sharing a single resource are not as readily extended. In this paper, therefore, we consider a shared set of resources of the same or different type, but with the resources ordered by preference, i.e., prioritized resources. We assume that all competing flows have the same order of preference in the use of the shared set of resources.

The primary contribution of this paper is a general theoretical framework for defining and measuring fairness when a set of traffic flows share multiple resources in the network with a shared order of preference for the opportunity to use these resources. The framework developed in this paper provides a powerful generalization of any notion of fairness defined for the allocation of a single shared resource. We introduce two concepts: the Cumulative Resource Dividend (CRDIV) of a flow under a certain resource allocation policy represents the benefit accrued to the flow due to the portion of the shared set of resources allocated to it under the policy; the Cumulative Resource Demand (CRDEM) of a flow is the benefit accrued to the flow when all of the shared set of resources is exclusively allocated to the flow. These two concepts are generic in the sense that we make no assumptions on what the shared set of resources is or how one may compute the desired benefit to a flow. One may now use one's favorite notion of fairness in the distribution of a single shared resource, and state that a fair allocation policy among a set of competing flows is one that achieves a fair distribution of the cumulative resource dividends with respect to the cumulative resource demands of the flows. However, just as the notions of fairness in the allocation of a single shared resource can be applied only over certain specific intervals of time (intervals during which the number of actively competing flows stays constant) [12], this generalized principle also applies only over certain specific intervals of time depending on the properties of the traffic flows. A significant contribution of this paper is the formal definition of these intervals of time in the context of multiple resources shared by competing flows.

* This work was supported in part by NSF CAREER Award CCR and US Air Force Contract F. A preliminary version of this paper appeared in Proc. IEEE Int'l Performance, Computing, and Communications Conference (IPCCC), April 2003, Phoenix, AZ, USA. Corresponding author: Tel: ; Fax: . Email addresses: yunkaiz@microsoft.com (Yunkai Zhou), sethu@ece.drexel.edu (Harish Sethu).
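The dividend and demand concepts described above can be previewed with a toy numeric sketch. All numbers, and the helper names, are hypothetical and are not from the paper; the only properties used are the ones stated above (both quantities are differences of cumulative utilities, and the dividend never exceeds the demand):

```python
def crdiv(utility_under_P, utility_under_none):
    """Cumulative Resource Dividend: benefit accrued to a flow due to
    the share of resources allocated to it under policy P."""
    return utility_under_P - utility_under_none

def crdem(utility_under_all, utility_under_none):
    """Cumulative Resource Demand: benefit the flow would accrue if all
    shared resources were allocated exclusively to it (policy All(i))."""
    return utility_under_all - utility_under_none

# Hypothetical flow over a 1-second interval: with no shared resources it
# transmits 0 Mb; under policy P it transmits 4 Mb; with the whole link
# and buffer to itself it could transmit 7 Mb.
dividend = crdiv(4.0, 0.0)   # 4.0 Mb of benefit under P
demand = crdem(7.0, 0.0)     # 7.0 Mb of benefit if allocated everything
assert dividend <= demand    # the demand bounds the dividend
```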

We introduce and illustrate our Principle of Fair Prioritized Resource Allocation (the FPRA principle) by considering in detail its application in a system with a link and a buffer shared among a set of flows. Over the last couple of decades, researchers have proposed and analyzed a variety of buffer management and related policies [13-17]. Most of these have attempted to maximize performance or achieve congestion avoidance, although several of them have also tried to be fair by one measure or another. A precise and formal notion of fairness in buffer allocation, however, has not yet been developed. Thus, there is currently no theoretical framework around which one can design practical and fair buffer allocation algorithms, and also, there are no formal means of evaluating the fairness of various buffer allocation policies already proposed. This paper illustrates in detail how one may use the FPRA principle to provide such a framework to define as well as measure fairness in buffer allocation.

We also propose a method of system decomposition to extend the FPRA principle to systems with multiple output links. In a system with multiple output links, flows headed to different output links do not have a common set of shared resources, since the output links are not shared among these flows. Using the method of system decomposition proposed in this paper, the overall system can be decomposed into two types of subsystems: one unshared link subsystem and several shared link subsystems. All competing entities in each subsystem share a common set of resources, and therefore, the FPRA principle can be applied to define fairness in each subsystem, and the fairness in the overall system can be defined based on the fairness in each subsystem.

The contributions of this paper represent a first step toward achieving provable end-to-end fairness. The scope of this paper is limited to addressing the problem of fairness in the allocation of multiple prioritized resources. The problem of rigorously defining and measuring fairness in the allocation of multiple non-prioritized resources is also a challenging problem, addressed in some other works [18].

This paper is organized as follows. In Section 2, we introduce the concepts of cumulative resource dividends and demands. Based on these concepts, we define the concept of stationary intervals of time over which one can apply notions of fairness in a system with multiple resources. We conclude the section with the statement of the Principle of Fair Prioritized Resource Allocation, and a simple example illustrating the principle. In Section 3, we further illustrate an application of the FPRA principle in a system with a buffer and a link shared among a set of flows, and present a formal notion of fairness in buffer allocation. We also provide a measure of fairness in such a system, and use simulation to evaluate the fairness of a few well-known buffer allocation strategies. An extension of this work to systems with multiple output links is presented in Section 4, and finally, Section 5 concludes the paper.

2 The Principle of Fair Prioritized Resource Allocation

Several different notions of fairness have been proposed in the research literature for the allocation of a single shared resource among a set of requesting entities. We begin our discussion by introducing a notation that allows a representation of any of these notions of fairness. Without loss of generality, we assume that the entities demanding a portion of the resource are traffic flows. Consider N traffic flows, labeled 1, 2, ..., N, with a weight w_i associated with flow i. Let R be the size of the resource shared among these N flows, and d_i be the demand corresponding to flow i. For the sake of convenience, throughout this paper we use vectors to indicate values corresponding to a set of flows. We denote a vector by the indexed value in a pair of square brackets; for instance, we denote the demand vector as [d_i]. Therefore, given the demand vector [d_i], the weight vector [w_i], and the total available resource R, any given notion of fairness may be represented as

[a_i] = F(R, [d_i], [w_i])    (1)

where a_i is the allocation for flow i based on the notion of fairness defined by the function F. The function F is different for different notions of fairness such as max-min fairness, proportional fairness or utility max-min fairness. Given a notion of fairness defined by a function F, an ideal scheduling policy, denoted by G_F(S), is one that exactly achieves this notion of fairness in system S. For example, if F represents the function corresponding to the max-min fair share policy with respect to the bandwidth [3], and L represents a work-conserving system with a single shared link, G_F(L) will denote the Generalized Processor Sharing (GPS) policy [2], the ideally fair scheduling policy for max-min fairness.

2.1 Resource Dividends and Demands

Consider a set of flows using a shared set of resources in a certain system S. We generically refer to the desired goal of the flows as the utility sought by the flow.
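As one concrete instance of the function F in (1), the following is a minimal sketch of weighted max-min fair allocation by iterative "water-filling". The function name and structure are our own illustration, not an algorithm from the paper:

```python
def maxmin_fair(R, demands, weights):
    """One instance of F in (1): weighted max-min fair allocation.
    Repeatedly splits the remaining capacity among unsatisfied flows in
    proportion to their weights; a flow whose demand is met by its share
    is capped at its demand, and its surplus is redistributed."""
    n = len(demands)
    alloc = [0.0] * n
    remaining = R
    unsat = set(range(n))          # flows whose demand is not yet met
    while unsat and remaining > 1e-12:
        total_w = sum(weights[i] for i in unsat)
        share = {i: remaining * weights[i] / total_w for i in unsat}
        done = {i for i in unsat if demands[i] - alloc[i] <= share[i]}
        if not done:
            # every unsatisfied flow absorbs its full weighted share
            for i in unsat:
                alloc[i] += share[i]
            break
        for i in done:             # satisfy these flows exactly
            remaining -= demands[i] - alloc[i]
            alloc[i] = demands[i]
        unsat -= done
    return alloc

# Three equally weighted flows sharing capacity 10: the small demand is
# fully met, and the remainder is split evenly among the other two.
print(maxmin_fair(10.0, [2.0, 8.0, 8.0], [1.0, 1.0, 1.0]))  # [2.0, 4.0, 4.0]
```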
For example, in the scheduling of bandwidth over a single shared link, the utility may be defined as either the bandwidth or some function of the bandwidth achieved by a flow. The objective of fair scheduling algorithms such as GPS [2], WF²Q [6] and DRR [7] is to fairly distribute this utility among the competing flows, with the utility achieved by a flow defined as the bandwidth achieved by the flow. In the following, we use the concept of cumulative utility over a given interval of time, which is merely the utility considered over an interval of time. It is important to note that, in this paper, we do not impose any particular notion of how cumulative utility over an interval should be defined. Our only assumption in this regard is that the cumulative utility over any interval achieved by a flow is always non-negative and does not decrease with an increase in the amount of any resource allocated to it.

The definition of cumulative utility may be very different in different contexts. For example, in scheduling traffic flows over a shared link as accomplished by fair scheduling algorithms such as WF²Q and DRR, the cumulative utility achieved by a flow over an interval of time would be defined as the amount of its data transmitted through the shared link during the interval. For real-time applications with guaranteed delay requirements, one may define the cumulative utility over an interval as the fraction of packets that are successfully delivered within the specified guaranteed delay over this interval of time.

Over a given interval of time (t1, t2), denote by U_i^{S,P}(t1, t2) the cumulative utility achieved by flow i under allocation policy P in system S. Consider a policy P for the allocation of the shared set of resources, and consider an allocation policy, None(i), which grants none of the shared resources to flow i. By our notation, U_i^{S,None(i)}(t1, t2) is the cumulative utility achieved by flow i during the time interval (t1, t2) with the allocation policy None(i). The difference in the cumulative utilities achieved by a flow with and without the use of the allocated portion of the shared set of resources, i.e., the difference between U_i^{S,P}(t1, t2) and U_i^{S,None(i)}(t1, t2), represents the benefit accrued to the flow due to this shared set of resources. The following formally defines this concept.

Definition 1 The Cumulative Resource Dividend (denoted by CRDIV_i^{S,P}(t1, t2)) of flow i in system S under the allocation policy P over an interval of time (t1, t2) is defined as

CRDIV_i^{S,P}(t1, t2) = U_i^{S,P}(t1, t2) − U_i^{S,None(i)}(t1, t2)    (2)

Now, a notion of fairness in the allocation of the shared resources should specify a distribution of these cumulative resource dividends among the flows. However, such a notion of fairness cannot be developed without also defining a notion of the demands placed on the shared set of resources by the flows. For example, it is only sensible that flows which have no need for the shared set of resources, i.e., with no demand for them, should not unnecessarily be allocated any of these resources. This principle is a trivial generalization of already existing notions of fairness in the allocation of a single resource. The demand of a flow for the shared set of resources can be expressed in terms of the benefit, or the cumulative resource dividend, that the flow desires from an allocation of the shared set of resources. Any flow would like a biased allocation policy that grants all of the shared set of resources exclusively to it. Therefore, the demand of a flow is really the benefit accrued to the flow, i.e., the cumulative resource dividend of the flow, when all of the shared set of resources is allocated exclusively to the flow. Let All(i) be an allocation policy that allocates all of the shared resources, in entirety and exclusively, to flow i. The notion of the demand of a flow can now be formally defined as follows.

Definition 2 The Cumulative Resource Demand (denoted by CRDEM_i^S(t1, t2)) of flow i in system S over an interval of time (t1, t2) is defined as the cumulative resource dividend achieved by the flow in the same system under policy All(i) over the same time interval. In other words,

CRDEM_i^S(t1, t2) = U_i^{S,All(i)}(t1, t2) − U_i^{S,None(i)}(t1, t2)    (3)

Note that the cumulative resource demand is independent of the allocation policy P. Note also that the cumulative resource demand of a flow is no less than the cumulative resource dividend of the flow under any allocation policy. In the scheduling of bandwidth over a single shared link, a flow gets no throughput at all with policy None(i), since the link is the only resource that contributes to the utility. Thus, over any time interval, all of the bandwidth allocated to a flow represents the benefit accrued to the flow from the shared resource. In this case, the cumulative resource dividend of a flow over a given interval of time with a scheduling policy is the same as the total amount of data from the flow scheduled for transmission by the policy during this interval. Similarly, the cumulative resource demand of a flow over a certain interval of time is just the total amount of data that the flow could transmit during the interval if it did not have to compete with any other flow.

2.2 The FPRA Principle

Assume the shared set of resources in the system under consideration consists of K distinct resources, labeled 1 through K. We assume that these resources are ordered by preference, i.e., resource 1 is the most preferred resource and resource K is the least preferred resource. We denote by S the entire system consisting of the K resources, and by S_k, 1 ≤ k ≤ K, the system consisting of resources 1 through k. Note that in S_1, which only contains resource 1, fairness of allocation can be directly defined using a notion of fairness F defined for a single shared resource.

Based on the definition of the cumulative resource demand and the cumulative resource dividend, over any interval of time, the shared resources can now be allocated according to any notion of fairness F applied to the cumulative resource dividends with respect to the cumulative resource demands. This would ensure that each flow receives, as per the notion of fairness F, a fair share of the dividend from the shared set of resources. However, one cannot apply such a notion of fairness over any interval of time, and this significantly hinders a simple extension of the notion of fairness from the single-resource case to that with multiple resources. For example, a notion of fairness such as the max-min strategy cannot be applied to any arbitrary interval of time in the allocation of bandwidth on a link among competing flows. In this case, a flow is considered active at any given instant of time if and only if it is backlogged [7], and active over a given interval of time if and only if it is active at each instant of time during this interval. The max-min principle may

only be applied over intervals of time during which no flow changes its state from being active to not being active, or vice versa. In our study, we refer to such an interval of time, over which one can apply a notion of fairness, as a stationary interval. In extending a notion of fairness to multiple resources, we will have to extend the concept of the state of a flow and the concept of a stationary interval. We now proceed to recursively define these concepts, leading to a formal notion of fairness in a system with K shared resources.

Assume that, as per a given notion of fairness F, the fairness of allocation in system S_{k−1} is already known. Our theoretical framework is based on a simple common-sense approach whereby we allocate the most preferred resource fairly, then allocate the second-most preferred resource fairly among the flows with unsatisfied demands, and so on. Thus, in a system with multiple resources, one may define fairness recursively based on the fairness in a system without the least preferred resource. Now, in system S_k, the shared set of resources may be divided into two distinct groups: one consisting of resources 1 through k−1, as in system S_{k−1}, and the other consisting of just resource k. Note that a flow would prefer to use the resources in the first group over resource k. This implies that only when the demand of a flow for the resources in system S_{k−1} cannot be satisfied does this flow compete with other flows for resource k. In other words, a flow should be considered to be in competition for resource k only if, in the absence of resource k, the flow is not satisfied with a fair allocation in system S_{k−1}. Therefore, an active flow with respect to resource k should be one whose demand, in system S_{k−1}, is not satisfied under the ideally fair allocation policy, G_F(S_{k−1}). The following definitions formalize this thought.

Definition 3 With respect to resource k, a flow i is active during an interval of time (t1, t2), as per the notion of fairness F, if and only if, over each subinterval of time (τ1, τ2) such that t1 ≤ τ1 ≤ τ2 ≤ t2, the cumulative resource demand of flow i in system S_k is greater than the cumulative resource dividend it would achieve in system S_{k−1} under the ideally fair allocation policy, G_F(S_{k−1}). In other words, flow i is active with respect to resource k over (t1, t2) if and only if,

CRDEM_i^{S_k}(τ1, τ2) > CRDIV_i^{S_{k−1}, G_F(S_{k−1})}(τ1, τ2)

for all time intervals (τ1, τ2) such that t1 ≤ τ1 ≤ τ2 ≤ t2.

Definition 4 With respect to resource k, a flow i is inactive during an interval of time (t1, t2), as per the notion of fairness F, if and only if, over each subinterval of time (τ1, τ2) such that t1 ≤ τ1 ≤ τ2 ≤ t2, the cumulative resource demand of flow i in system S_k is equal to the cumulative resource dividend it would achieve in system S_{k−1} under the ideally fair allocation policy, G_F(S_{k−1}). That is,

CRDEM_i^{S_k}(τ1, τ2) = CRDIV_i^{S_{k−1}, G_F(S_{k−1})}(τ1, τ2)

Note that it is possible that a flow is neither active nor inactive with respect to

resource k over a certain interval of time, since the above definitions are based on conditions that are required to be satisfied in each subinterval of time within the given interval. For example, consider two contiguous intervals of time. In the first interval, assume that a certain flow is active with respect to a resource, while in the second interval the flow is inactive with respect to the same resource. Then, in the combined interval of time consisting of both the above two intervals, the flow is neither active nor inactive with respect to the resource. Thus, during any given interval, a flow may be said to be in one of three states with respect to a resource: active, inactive or neither.

In our case of multiple resources, if a flow does not need the least preferred resource, then it implies that the flow is satisfied and is not in active competition with other flows. Generalizing the concept used in the allocation of a single resource, one may define fairness with respect to a resource over an interval only when the number of flows competing for the resource stays constant during the interval. We are now prepared to present the concept of a stationary interval and the Principle of Fair Prioritized Resource Allocation.

Definition 5 In a system S_k, a certain interval of time is called a stationary interval if and only if each flow is either active or inactive (but not neither) with respect to resource k (the least preferred resource in system S_k) over this interval.

The Principle of Fair Prioritized Resource Allocation (FPRA Principle): Consider a system S_k and an allocation policy P. P is fair as per a notion of fairness F if and only if, over all stationary intervals of time with respect to system S_k, the cumulative resource dividends achieved by the flows are distributed exactly as per the notion of fairness F with respect to the cumulative resource demands requested by the flows.

Note that, if a flow i is neither active nor inactive over a certain time interval (t1, t2), this time interval can be divided into a contiguous sequence of subintervals, during each of which flow i is either active or inactive. Thus, even though the FPRA principle defines fairness only over stationary intervals, any given interval of time may be broken down into a sequence of contiguous stationary intervals, and the FPRA principle may therefore be used to define a fair allocation over any given interval.

3 Application of the FPRA Principle in Buffer Allocation

In this section, we illustrate the FPRA principle over multiple resources by considering a system with a buffer and a link shared by a set of competing flows, with the link being the preferred or prioritized resource.
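The three-state classification of Definitions 3-5 can be illustrated with a minimal discrete-time sketch. It assumes the interval is split into slots over which demands and dividends accumulate additively, so that checking each slot is equivalent to checking every contiguous subinterval; the function names and the slot model are our own illustration:

```python
def flow_state(demand_slots, dividend_slots, eps=1e-9):
    """Classify a flow over an interval divided into time slots.
    demand_slots[j]: CRDEM of the flow in slot j (system S_k).
    dividend_slots[j]: CRDIV it would achieve in slot j in S_{k-1}
    under the ideally fair policy. Assumes demand >= dividend per slot.
    Active: demand strictly exceeds the fair dividend in every slot.
    Inactive: demand equals the fair dividend in every slot.
    Otherwise: neither."""
    strict = [dm > dv + eps for dm, dv in zip(demand_slots, dividend_slots)]
    if all(strict):
        return "active"
    if not any(strict):
        return "inactive"
    return "neither"

def is_stationary(per_flow_states):
    """Definition 5: every flow is active or inactive, none 'neither'."""
    return all(s in ("active", "inactive") for s in per_flow_states)

print(flow_state([3, 3], [1, 2]))  # active: unsatisfied in both slots
print(flow_state([2, 2], [2, 2]))  # inactive: exactly satisfied throughout
print(flow_state([3, 2], [1, 2]))  # neither: active then satisfied
```

Since demand and dividend are non-negative and additive over slots, a subinterval satisfies the strict inequality of Definition 3 exactly when at least one of its slots does, which is why the per-slot check suffices in this discretized setting.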

[Fig. 1. The system model: N flows with arrival rates I_1(t), ..., I_N(t) pass through an entry scheduler, with admission rates A_1(t), ..., A_N(t), into a shared buffer of capacity C(t) holding per-flow occupancies B_i(t); an exit scheduler dequeues them at departure rates D_1(t), ..., D_N(t) onto an output link of rate R(t).]

3.1 System Model

In our system model, a shared buffer is fed by N flows, labeled 1, 2, ..., N, destined to the same output link. Let R(t) be the maximum link speed at time instant t, and C(t) be the capacity of the shared buffer at time instant t. Both values are defined to be functions of time in order to accommodate general situations, such as a higher-level allocation scheme that may change the available capacity of the link or the buffer. We assume that all flows belong to the same service priority class, and that w_i is the weight associated with flow i.

An entry scheduler regulates the entry of traffic from the N flows into the shared buffer. The entry scheduler determines which data from which flows are permitted into the buffer and which are not. The entry scheduler is also responsible for pushout, i.e., the discarding of data from the shared buffer in order to accommodate newly arriving traffic from another flow. An exit scheduler dequeues traffic from the shared buffer and transmits it onto the output link. The exit scheduler, as in scheduling algorithms for the allocation of bandwidth on a link, determines the sequence in which traffic from various flows will exit through the output link. A buffer allocation strategy is completely defined by the actions of the entry and the exit schedulers. Fig. 1 illustrates our system model and some of the notation used in this paper.

Let S denote the system under consideration. Let I_i(t) be the rate at which data arrives in flow i at time instant t seeking entry into the shared buffer; this is the only input into the system S. Consider a buffer allocation policy P, a combination of the entry and the exit schedulers' policies. Define the admission rate A_i^{S,P}(t), at time instant t, as the rate at which data from flow i gets accepted into the shared buffer of system S under the allocation policy P. Traffic that is not admitted into the shared buffer is dropped. Note that A_i^{S,P}(t) can be negative, such as when the net rate of acceptance into the buffer is negative due to pushouts, and that A_i^{S,P}(t) ≤ I_i(t) holds for all i and t. Define the departure rate, D_i^{S,P}(t), as the actual rate at which traffic belonging to flow i departs the shared buffer through the output link of system S under the allocation policy P. At time instant t, let B_i^{S,P}(t) be the queue length, or the buffer occupancy, of flow i in the shared buffer in system S under the allocation policy P. At any given time instant t ≥ t_0,

B_i^{S,P}(t) = B_i^{S,P}(t_0) + ∫_{t_0}^{t} ( A_i^{S,P}(τ) − D_i^{S,P}(τ) ) dτ    (4)
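Equation (4) can be sketched in discrete time, replacing the integral with a per-slot sum. The function and the rate values below are illustrative assumptions, not part of the paper's model:

```python
def buffer_occupancy(b0, admissions, departures, dt=1.0):
    """Discrete-time sketch of Eq. (4): queue length = initial occupancy
    plus the accumulated difference of admission and departure rates.
    `admissions` may contain negative rates, modeling pushout."""
    b = b0
    trace = []
    for a, d in zip(admissions, departures):
        b += (a - d) * dt
        trace.append(b)
    return trace

# Hypothetical flow: admits 5 units/slot while draining 3 units/slot,
# then suffers a pushout (net admission rate -2) in the third slot.
print(buffer_occupancy(0.0, [5, 5, -2], [3, 3, 1]))  # [2.0, 4.0, 1.0]
```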

Throughout this paper, the sum of a quantity over all flows is denoted by dropping the subscript for the flow in the notation. For example, I(t) is the sum of the input rates of all of the N flows, i.e., I(t) = Σ_{i=1}^{N} I_i(t); A^{S,P}(t), B^{S,P}(t) and D^{S,P}(t) are defined similarly. Of course, D^{S,P}(t) ≤ R(t), and B^{S,P}(t) ≤ C(t). Note that, as mentioned before, the buffer allocation is completely determined by the actions of the entry and the exit schedulers, which together determine A_i^{S,P}(t) and D_i^{S,P}(t). Also note that the queue length of a flow in the shared buffer is completely determined by the admission rate, the departure rate and the initial queue length, as given by (4). Defining what is fair in buffer allocation in system S over a certain interval of time (t1, t2), therefore, is the same as defining the conditions on A_i^{S,P}(t) and D_i^{S,P}(t), for all t in (t1, t2), such that P is fair.

3.2 What is Fair in Buffer Allocation?

A primary purpose of the shared buffer is to improve the throughput over the link by avoiding packet losses. Therefore, we use the total amount of data from a flow transmitted over the link during any given interval of time as the cumulative utility achieved by the flow over this interval. Thus, the cumulative utility of a flow i in system S over any interval of time is given, for any allocation policy P, by

U_i^{S,P}(t1, t2) = ∫_{t1}^{t2} D_i^{S,P}(τ) dτ    (5)

Consider the allocation policy None(i). In the absence of this set of shared resources (both the link and the buffer), the cumulative utility is obviously 0. With an allocation policy P, therefore, the cumulative resource dividend over an interval for each flow is exactly the cumulative utility achieved by the flow over the interval. The cumulative resource demand of flow i is the cumulative utility it gets using the allocation policy All(i), which allocates the entire buffer and the output link exclusively to this flow. Thus, applying (5) in (2) and (3), we have, for any flow i and any allocation policy P,

CRDIV_i^{S,P}(t1, t2) = ∫_{t1}^{t2} D_i^{S,P}(τ) dτ    (6)

CRDEM_i^S(t1, t2) = ∫_{t1}^{t2} D_i^{S,All(i)}(τ) dτ    (7)

Recall that, to define the state of a flow as active, inactive or neither with respect to the buffer resource (the least preferred resource), we need to consider the system without the buffer resource, as per Definitions 3 and 4. Let S′ be the same system

as S, but without the shared buffer, i.e., a system with just a single shared output link. Recall that the ideally fair allocation policy in this system is G_F(S′). Now, a flow i is said to be active with respect to the buffer resource, or simply active, over an interval of time (t1, t2) if and only if, over each subinterval (τ1, τ2) such that t1 ≤ τ1 ≤ τ2 ≤ t2,

∫_{τ1}^{τ2} D_i^{S,All(i)}(τ) dτ > ∫_{τ1}^{τ2} D_i^{S′,G_F(S′)}(τ) dτ    (8)

Similarly, a flow i is said to be inactive with respect to the buffer resource, or simply inactive, over an interval of time (t1, t2) if and only if, during each subinterval (τ1, τ2),

∫_{τ1}^{τ2} D_i^{S,All(i)}(τ) dτ = ∫_{τ1}^{τ2} D_i^{S′,G_F(S′)}(τ) dτ    (9)

A stationary interval is one during which each flow is either active or inactive, and an allocation policy P is fair if and only if, over all stationary intervals (t1, t2),

[ CRDIV_i^{S,P}(t1, t2) ] = F( CRDIV^{S,P}(t1, t2), [ CRDEM_i^S(t1, t2) ], [w_i] )    (10)

3.3 Measure of Fairness in Buffer Allocation

In the scheduling of bandwidth over a link, the GPS scheduler is considered an ideal one when the notion of fairness is based on the max-min policy [3]. The fairness of a scheduling discipline for bandwidth on a link, therefore, is frequently measured by how closely the discipline approximates the ideal scheduler, GPS. The Absolute Fairness Bound (AFB) in the context of scheduling bandwidth over a link is defined as

AFB = max_{i, (t1, t2)} | P_i(t1, t2)/w_i − G_i(t1, t2)/w_i |    (11)

where G_i(t1, t2) is the service flow i gets during time interval (t1, t2) under the GPS scheduler, and P_i(t1, t2) is the service it gets under the policy P [3]. In measuring the fairness in buffer allocation, we extend the basic premise of the AFB as the maximum difference in the service received by a flow under the policy being measured and that under the ideally fair policy. To accomplish this, we need to first identify the ideally fair policy.

Note that a fair buffer allocation is not necessarily the one that delivers the best performance. For example, an unfair allocation may lead to a higher performance than a fair allocation would. Thus, one could have many fair allocation algorithms, each at a different performance level. Therefore, in measuring the fairness of an allocation policy P, we need to compare it to an ideally fair policy at the same performance level as P. In our study, we use the sum of the cumulative utilities of all the flows to indicate the performance achieved by an allocation policy. Note that, in the system model under consideration, the cumulative utility achieved by a flow during an interval is the same as the cumulative resource dividend of the flow over that interval. Let G_F(S, P) be an ideally fair buffer allocation policy for system S as per the notion of fairness F, such that its total cumulative utility is identical to that of P, i.e.,

∫_{t1}^{t2} D^{S,P}(τ) dτ = ∫_{t1}^{t2} D^{S,G_F(S,P)}(τ) dτ

Since G_F(S, P) is exactly fair, note that (10) holds for G_F(S, P) over all stationary intervals.

A common but frequently unstated assumption made in the definition of the AFB in (11) is that the scheduling policy being measured is a work-conserving one. This ensures that the total performance, in terms of the sum of the cumulative resource dividends (the same as the cumulative utilities) achieved by the flows under the policy being measured, is identical to that under the GPS scheduler. Therefore, a normalizing quantity based on performance is not needed in the definition in (11) in order for it to serve as the measure of fairness in a comparative analysis of various work-conserving algorithms. In our study of fairness in buffer allocation, however, we have made no assumptions about the performance, or about whether or not the allocation policy being measured is work-conserving with respect to the shared set of resources. A normalizing quantity based on the performance, therefore, is necessary in extending the notion of the fairness measure in (11) to our case of buffer allocation algorithms. This normalization should allow us to use our fairness measure in a valid comparison between various buffer allocation strategies. We now define the normalized Absolute Fairness Measure over an interval of time as follows.

Definition 6 In a system S with a shared buffer, a shared output link and a given input traffic arrival pattern, the normalized Absolute Fairness Measure, denoted by nAFM^{S,P}(t1, t2), of an allocation policy P over an interval of time (t1, t2) is defined as

nAFM^{S,P}(t1, t2) = max_i | ∫_{t1}^{t2} D_i^{S,P}(τ) dτ / w_i − ∫_{t1}^{t2} D_i^{S,G_F(S,P)}
i w i D S,P (τ)dτ (τ)dτ (12) Note that the fairness measured as above will approach 10 with any real algorithm when the size of the time interval, t 2, is extremely small At the same time, for most real buffer allocation strategies, the fairness measured as above will approach 0 when the size of the time interval considered is very large Thus, a valid comparison between various allocation algorithms can be made using the above measure only if the sizes of the time intervals being considered are identical Therefore, we now define a bound on the above measure of fairness as a function of the size of the time interval, as follows 13

Definition 7 Define the normalized Absolute Fairness Bound, nAFB^{S,P}(\tau), of an allocation policy P in system S for time intervals of size \tau as the upper bound on the normalized absolute fairness measure over an interval of size \tau with any possible input traffic arrival pattern. In other words, nAFB^{S,P}(\tau) is the smallest value of \Theta(\tau) such that, for any input traffic,

\Theta(\tau) \ge \max_t \left\{ nAFM^{S,P}(t, t + \tau) \right\}

We now briefly discuss how one might compute the fairness measure as defined in (12), based only on the determination of the results of the ideal policy G_F(S, P) and without actually simulating the ideal policy. Let the time interval (t_1, t_2) be a stationary interval. Note that, in (12), one only needs to know the service received by flow i under the ideal allocation policy G_F(S, P) over the time interval (t_1, t_2). Using (7), one can compute, during the time interval (t_1, t_2), the per-flow cumulative resource demands. For the policy P being measured, using (6), we can compute the per-flow cumulative resource dividends and, therefore, also the sum of the cumulative resource dividends under the policy. Since G_F(S, P) has the same total cumulative resource dividend as the policy P, we can now use (10) to determine the per-flow cumulative resource dividends under policy G_F(S, P). Having determined the dividends under both the ideal policy and the policy being measured, the fairness measure can now be readily computed.

3.4 Simulation Results and Analysis

Using the measure developed above, we now evaluate the fairness of some representative buffer allocation policies. An analytical determination of nAFB^{S,P}(\tau) for an allocation strategy P is not trivial. This bound, in addition, cannot be determined through simulation, since a simulation-based analysis cannot easily guarantee that it will generate the conditions necessary to reach the bound. However, through simulation, one can observe the fairness characteristics of an allocation policy by noting the maximum value of the normalized absolute fairness measure for intervals of several different sizes. For example, the observed maximum of nAFM^{S,P}(t, t + \tau) over all t represents a lower bound on the value of nAFB^{S,P}(\tau). The tightness of this lower bound depends on the input traffic and also on the number of cycles for which data is observed and recorded. In this section, we use this maximum as a practical measure of fairness that can be obtained through a simulation study. In our study, we have found this measure to provide several valuable insights into the fairness of buffer allocation algorithms, through providing a close approximation to the bound nAFB^{S,P}(\tau).

Our simulation model consists of a shared buffer fed by traffic sources through 8 input buffers, each of infinite capacity. Traffic from these 8 flows is headed to the same shared output link via the shared buffer. Two sets of traffic sources have been used in our study. In the first set of simulation experiments, real traffic traces recorded at Internet gateways are used [19]^1. In our second set, we use real video traffic traces coded using MPEG-4 with high quality [20]^2. For each input, one distinct video stream is used, and the starting point within the video stream is randomly selected.

^1 The traces are obtained from the Passive Measurement and Analysis project at the National Laboratory for Applied Network Research (NLANR).
^2 The traces are collected from the Telecommunication Networks Group at Technical University of Berlin, Germany. The 8 video streams selected are Jurassic Park I, Silence of the Lambs, Star Wars IV, Mr Bean, Star Trek, Die Hard III, Aladdin and The Firm. The categories covered are diverse, including drama, action and animation.

We implement four different representative policies for the entry scheduler: (i) Drop From Longest Queue (DFLQ), which pushes out packets belonging to the flow with the longest queue occupancy whenever the shared buffer is full, and accepts all packets otherwise; (ii) Static Threshold (ST), which assigns an equal fixed buffer occupancy threshold to each flow, such that no flow is allowed to occupy more than this threshold; (iii) Random Early Detection (RED) [16], which drops arriving packets with a probability that is a dynamic function of the average buffer occupancy; and (iv) Fair Buffering Random Early Detection (FB-RED) [17], which is a variant of RED that uses the bandwidth-delay product of a flow to determine the probability with which a packet from the flow is dropped. In our simulation studies, all parameters of the RED algorithm follow the recommendations of [21]. These four entry schedulers can be categorized into two groups: one including DFLQ and ST, and the other including RED and FB-RED. This is because both RED and FB-RED are intended to be congestion avoidance algorithms, and are therefore assumed to work in situations where the shared buffer is never full (packets are dropped before the buffer gets full). In our study, a buffer capacity of 100 KB is used when DFLQ and ST are implemented, and a buffer capacity of 10 MB is used with RED and FB-RED.

Three exit scheduling policies are also implemented: (i) First-Come First-Serve (FCFS), which dequeues packets in the order of their arrival; (ii) Longest Queue First (LQF), which schedules packets from the flow with the longest queue in the shared buffer; and (iii) Deficit Round-Robin (DRR) [7], which is a simple and popular fair round-robin scheduler. In our implementation, the DRR quantum is set to the maximum packet size. In the scheduling of bandwidth over a link, both FCFS and LQF have an absolute fairness bound of infinity, i.e., both are unfair given the max-min notion of fairness. DRR, on the other hand, is the representative fair algorithm used here.

Figs. 2 and 3 plot the observed maximum value of nAFM^{S,P}(t, t + \tau) against \tau for different pairs of entry and exit scheduling policies. Specifically, Fig. 2 plots the observed maximum value of nAFM^{S,P}(t, t + \tau) against \tau using the real gateway traces as the input traffic, while Fig. 3 plots the same for the video traces. In Figs. 2(a) and 3(a), DFLQ and ST are used as the entry scheduler, while in Figs. 2(b) and 3(b), RED and FB-RED are used.

[Fig. 2. Observed maximum (over all t) of nAFM^{S,P}(t, t + \tau) vs. \tau, when the exit scheduling policy is FCFS, LQF, or DRR and the entry scheduling policy is: (a) DFLQ or ST, (b) RED or FB-RED. In this study, the real gateway traces from [19] are used.]

Note that, for any fair allocation policy, the numerator in (12), as in Definition 6, should be a bounded value as the time interval (t_1, t_2) varies. On the other hand, as the interval length t_2 - t_1 increases, the denominator in (12), \int_{t_1}^{t_2} D^{S,P}(\tau) \, d\tau, increases. Therefore, for any fair allocation policy, the observed maximum over all t of nAFM^{S,P}(t, t + \tau) should converge to 0 as \tau increases. In other words, if for a certain allocation policy P the observed maximum of nAFM^{S,P}(t, t + \tau) does not converge to 0 as \tau increases, one can conclude that this policy P is not a fair allocation policy.

From Figs. 2 and 3, it is observed that, of the three exit scheduling policies examined, LQF is the worst in terms of fairness, since the combinations with LQF always produce the highest nAFM^{S,P}(t, t + \tau) values. On the other hand, the combinations with DRR as the exit scheduler are better than those without. The fact that DRR is already fair with respect to bandwidth allocation on a link helps in improving the overall fairness with DRR as the exit policy.
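To make the measurement procedure concrete, the following sketch (hypothetical code, not the simulator used in this study) computes the ideal per-flow dividends by applying a weighted max-min allocation in the role of the notion F in (10), and then evaluates the measure of (12). The function names, and the choice of max-min as F, are our illustrative assumptions; inputs are per-flow cumulative dividends and demands already integrated over a stationary interval.

```python
# Illustrative sketch, assuming weighted max-min as the notion of fairness F.

def weighted_max_min(total, demands, weights):
    """Distribute `total` among flows so that no flow exceeds its demand and
    unsatisfied flows share the remainder in proportion to their weights
    (weighted max-min water-filling)."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(total)
    while active and remaining > 1e-12:
        wsum = sum(weights[i] for i in active)
        share = {i: remaining * weights[i] / wsum for i in active}
        capped = {i for i in active if alloc[i] + share[i] >= demands[i]}
        if not capped:
            for i in active:
                alloc[i] += share[i]
            break
        for i in capped:  # satisfy capped flows exactly, then redistribute
            remaining -= demands[i] - alloc[i]
            alloc[i] = demands[i]
        active -= capped
    return alloc

def nafm(dividends_P, demands, weights):
    """Eq. (12): largest weighted gap between the measured policy P and the
    ideal policy, normalized by the total dividend achieved under P."""
    total = sum(dividends_P)
    if total <= 0:
        return 0.0
    ideal = weighted_max_min(total, demands, weights)
    gap = max(abs(p - g) / w for p, g, w in zip(dividends_P, ideal, weights))
    return gap / total
```

Here the demand-capped water-filling loop plays the role of G_F(S, P): it redistributes exactly the total dividend achieved by P, so, as discussed above, no separate simulation of the ideal policy is needed.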

[Fig. 3. Observed maximum (over all t) of nAFM^{S,P}(t, t + \tau) vs. \tau, when the exit scheduling policy is FCFS, LQF, or DRR and the entry scheduling policy is: (a) DFLQ or ST, (b) RED or FB-RED. In this study, the video traces from [20] are used.]

Even with DRR as the exit policy, however, the entry policy still plays a critical role in terms of fairness. This is observed in combinations of DRR with RED or FB-RED, as in Figs. 2(b) and 3(b). When the entry scheduler is such that each active flow does not necessarily have packets waiting in the shared buffer (as with RED and FB-RED), using DRR as the exit scheduler is insufficient to guarantee good fairness. It may also be observed that there is little difference between RED and FB-RED in terms of fairness. In fact, none of the combinations with either RED or FB-RED provides good fairness. In addition, it is observed that FCFS can deliver a sufficiently fair distribution of the shared resources when the entry policy is able to bound the entry burst of each flow (as with DFLQ and ST). In these cases, note that over longer periods of time, FCFS is almost as fair as DRR. This observation, along with the failure of the combinations of RED or FB-RED with DRR to achieve fairness, suggests that the fairness of the entry policy may be more critical than that of the exit policy for overall fairness in the joint allocation of buffer and bandwidth resources.
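As a concrete illustration of the entry/exit split studied above, the sketch below (illustrative code, not the simulator used in the study) shows a DFLQ admission decision and one DRR service round; the data structures and function names are our assumptions.

```python
# Illustrative packet-level sketch of one entry policy (DFLQ push-out) and
# one exit policy (Deficit Round-Robin) from the study. Queues are per-flow
# deques of packet lengths; `occupancy` tracks per-flow buffer usage.

from collections import deque

def dflq_admit(queues, occupancy, capacity, flow, pkt_len):
    """Drop-From-Longest-Queue entry policy: accept the arriving packet,
    pushing out packets of the flow with the largest occupancy whenever
    the shared buffer would overflow."""
    while sum(occupancy.values()) + pkt_len > capacity:
        backlogged = {f: o for f, o in occupancy.items() if queues.get(f)}
        if not backlogged:
            return False  # nothing can be pushed out; reject the arrival
        longest = max(backlogged, key=backlogged.get)
        victim = queues[longest].pop()  # push out from the tail of the longest queue
        occupancy[longest] -= victim
    queues.setdefault(flow, deque()).append(pkt_len)
    occupancy[flow] = occupancy.get(flow, 0) + pkt_len
    return True

def drr_round(queues, deficit, quantum):
    """One DRR round: each backlogged flow's deficit grows by the quantum,
    and the flow sends head packets while the deficit covers them."""
    sent = []
    for flow, q in queues.items():
        if not q:
            continue
        deficit[flow] = deficit.get(flow, 0) + quantum
        while q and q[0] <= deficit[flow]:
            pkt = q.popleft()
            deficit[flow] -= pkt
            sent.append((flow, pkt))
        if not q:
            deficit[flow] = 0  # idle flows carry no deficit
    return sent
```

Pairing these two routines reproduces, in miniature, one of the better-behaved combinations above: DFLQ bounds each flow's occupancy of the shared buffer, and DRR serves the backlogged flows in proportion to the quantum.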

4 A Discussion on Extension to Systems with Multiple Output Links

The previous section focuses on a system model with a single output link and a buffer shared among multiple flows of traffic. The buffer management within many routers, where a separate statically allocated buffer is used for each output link, may be captured using this system model. However, some switches and routers do use a single shared buffer for packets headed to all of their output links. In this case, we have a single shared buffer but multiple output links. In other words, multiple flows of traffic share the same buffer, but different subsets of the flows share the available bandwidth on different output links.

There are a couple of important features that differentiate a multiple output link system from a single output link system. Firstly, in the presence of multiple output links, there is no common shared set of resources among the flows. Flows headed to two different output links share only the buffer, while flows to the same output link share both the buffer and the output link. This presents a difficulty in applying the concept of stationary intervals and the Generalized Principle of Fairness directly to the entire system, since different sets of flows have different sets of shared resources. Secondly, in multiple output link systems, the resource dividend of a flow might not be transferable to another flow. In single output link systems, or in the case of scheduling bandwidth on a shared link, when one flow temporarily has a small demand, other flows with larger demands can take advantage and achieve higher throughputs. This, however, is not true in multiple output link systems. When flows to one output link, say l, are temporarily inactive, other flows destined to a link other than l may not be able to take advantage of the available bandwidth resource on link l and increase their throughputs. Thus, the total dividend available for distribution among flows is itself a variable quantity, which poses another challenge to the task of defining fairness in such systems.

In spite of these difficulties, it turns out that the results obtained in the previous section may be successfully employed to define fairness in the multiple link case and to derive practical strategies for switches and routers that conform to this system model. A simple observation leads to the strategy: one may decompose the multiple output link system into two classes of subsystems, an unshared link subsystem and several shared link subsystems, and define fairness within each subsystem by applying the FPRA principle. Next, we first present the system model used in this study and then describe the method of system decomposition in detail.

4.1 System Model

Our multiple output link system model consists of a shared buffer, a set of output links l, 1 \le l \le L, and a set of flows i, 1 \le i \le N. When L = 1, this system model

reduces to that considered in Section 3.1. Fig. 4(a) illustrates the multiple output link system model. Let w_i be the weight associated with flow i. Traffic from each flow is destined to one of the L output links, and several flows may share the same output link. Thus, the number of links, L, may be smaller than the number of flows, N. The set of flows headed to the same output link are said to belong to the same session. Note that each session corresponds to exactly one link, and vice-versa. The session corresponding to the output link l is denoted by F_l. Fig. 4(b) shows one session with flows 1 and 2 headed to the same output link L.

[Fig. 4. The multiple output link system model. (a) The entire system; (b) an example of one session.]

Similar to the system model with a single link, let C(t) be the total capacity of the shared buffer at time instant t. Let R_l(t) be the maximum possible transmission rate on link l at time instant t. Denote by S a multiple output link system as shown in Fig. 4(a). At time instant t, for each flow i, the input rate I_i(t), the admission rate A_i^{S,P}(t), the departure rate D_i^{S,P}(t) and the buffer occupancy B_i^{S,P}(t) are similarly defined as in the single link system. In addition, denote the sum of a per-flow quantity over flows belonging to the same session by using the session label as the subscript. For example, I_{F_l}(t) denotes the aggregate input rate, at time instant t, of all flows that belong to session F_l; the quantities A_{F_l}^{S,P}(t), B_{F_l}^{S,P}(t) and D_{F_l}^{S,P}(t) are similarly defined. For the same reason described in single link systems, a buffer allocation policy in system S during a time interval is completely determined by the actions of the entry scheduler and the exit scheduler during this time interval, i.e., the admission and the departure rates of all flows at all instants during this interval.

4.2 System Decomposition

By definition, all flows which belong to the session F_l are headed to the same output link l. Now, consider this set of flows as one aggregate flow headed to output link l. This session, i.e., the aggregate flow, has an input rate of I_{F_l}(t) = \sum_{i \in F_l} I_i(t)
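The grouping step just described can be sketched as follows (a hypothetical helper with illustrative names): flows are grouped into sessions by destination link, and per-session aggregates such as I_{F_l}(t) are obtained by summation.

```python
# Illustrative sketch of forming sessions and per-session aggregates.

from collections import defaultdict

def sessions(flow_to_link):
    """Group flows into sessions: F_l is the set of flows destined to link l."""
    F = defaultdict(set)
    for flow, link in flow_to_link.items():
        F[link].add(flow)
    return dict(F)

def session_rate(session_flows, per_flow_rate):
    """Aggregate a per-flow quantity over a session, e.g. I_{F_l}(t)."""
    return sum(per_flow_rate[i] for i in session_flows)
```

Under the decomposition described above, each session F_l then acts as one aggregate flow in the unshared link subsystem (sharing only the buffer), while the flows within a session form a shared link subsystem of the kind analyzed in Section 3.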


More information

TELE Switching Systems and Architecture. Assignment Week 10 Lecture Summary - Traffic Management (including scheduling)

TELE Switching Systems and Architecture. Assignment Week 10 Lecture Summary - Traffic Management (including scheduling) TELE9751 - Switching Systems and Architecture Assignment Week 10 Lecture Summary - Traffic Management (including scheduling) Student Name and zid: Akshada Umesh Lalaye - z5140576 Lecturer: Dr. Tim Moors

More information

On Generalized Processor Sharing with Regulated Traffic for MPLS Traffic Engineering

On Generalized Processor Sharing with Regulated Traffic for MPLS Traffic Engineering On Generalized Processor Sharing with Regulated Traffic for MPLS Traffic Engineering Shivendra S. Panwar New York State Center for Advanced Technology in Telecommunications (CATT) Department of Electrical

More information

Lecture 21. Reminders: Homework 6 due today, Programming Project 4 due on Thursday Questions? Current event: BGP router glitch on Nov.

Lecture 21. Reminders: Homework 6 due today, Programming Project 4 due on Thursday Questions? Current event: BGP router glitch on Nov. Lecture 21 Reminders: Homework 6 due today, Programming Project 4 due on Thursday Questions? Current event: BGP router glitch on Nov. 7 http://money.cnn.com/2011/11/07/technology/juniper_internet_outage/

More information

Maximizing the Number of Users in an Interactive Video-on-Demand System

Maximizing the Number of Users in an Interactive Video-on-Demand System IEEE TRANSACTIONS ON BROADCASTING, VOL. 48, NO. 4, DECEMBER 2002 281 Maximizing the Number of Users in an Interactive Video-on-Demand System Spiridon Bakiras, Member, IEEE and Victor O. K. Li, Fellow,

More information

Network Layer Enhancements

Network Layer Enhancements Network Layer Enhancements EECS 122: Lecture 14 Department of Electrical Engineering and Computer Sciences University of California Berkeley Today We have studied the network layer mechanisms that enable

More information

Introduction: Two motivating examples for the analytical approach

Introduction: Two motivating examples for the analytical approach Introduction: Two motivating examples for the analytical approach Hongwei Zhang http://www.cs.wayne.edu/~hzhang Acknowledgement: this lecture is partially based on the slides of Dr. D. Manjunath Outline

More information

Episode 5. Scheduling and Traffic Management

Episode 5. Scheduling and Traffic Management Episode 5. Scheduling and Traffic Management Part 2 Baochun Li Department of Electrical and Computer Engineering University of Toronto Outline What is scheduling? Why do we need it? Requirements of a scheduling

More information

Journal of Electronics and Communication Engineering & Technology (JECET)

Journal of Electronics and Communication Engineering & Technology (JECET) Journal of Electronics and Communication Engineering & Technology (JECET) JECET I A E M E Journal of Electronics and Communication Engineering & Technology (JECET)ISSN ISSN 2347-4181 (Print) ISSN 2347-419X

More information

Episode 5. Scheduling and Traffic Management

Episode 5. Scheduling and Traffic Management Episode 5. Scheduling and Traffic Management Part 3 Baochun Li Department of Electrical and Computer Engineering University of Toronto Outline What is scheduling? Why do we need it? Requirements of a scheduling

More information

EP2210 Scheduling. Lecture material:

EP2210 Scheduling. Lecture material: EP2210 Scheduling Lecture material: Bertsekas, Gallager, 6.1.2. MIT OpenCourseWare, 6.829 A. Parekh, R. Gallager, A generalized Processor Sharing Approach to Flow Control - The Single Node Case, IEEE Infocom

More information

Link-sharing and Resource Management Models for Packet Networks

Link-sharing and Resource Management Models for Packet Networks Link-sharing and Resource Management Models for Packet Networks Sally Floyd Lawrence Berkeley Laboratory floyd@ee.lbl.gov February 7, 1995 (Joint work with Van Jacobson. Credits to other members of the

More information

Improving QOS in IP Networks. Principles for QOS Guarantees

Improving QOS in IP Networks. Principles for QOS Guarantees Improving QOS in IP Networks Thus far: making the best of best effort Future: next generation Internet with QoS guarantees RSVP: signaling for resource reservations Differentiated Services: differential

More information

Modular Quality of Service Overview on Cisco IOS XR Software

Modular Quality of Service Overview on Cisco IOS XR Software Modular Quality of Service Overview on Cisco IOS XR Software Quality of Service (QoS) is the technique of prioritizing traffic flows and providing preferential forwarding for higher-priority packets. The

More information

Network Support for Multimedia

Network Support for Multimedia Network Support for Multimedia Daniel Zappala CS 460 Computer Networking Brigham Young University Network Support for Multimedia 2/33 make the best of best effort use application-level techniques use CDNs

More information

Common network/protocol functions

Common network/protocol functions Common network/protocol functions Goals: Identify, study common architectural components, protocol mechanisms Synthesis: big picture Depth: important topics not covered in introductory courses Overview:

More information

Dynamic Wavelength Assignment for WDM All-Optical Tree Networks

Dynamic Wavelength Assignment for WDM All-Optical Tree Networks Dynamic Wavelength Assignment for WDM All-Optical Tree Networks Poompat Saengudomlert, Eytan H. Modiano, and Robert G. Gallager Laboratory for Information and Decision Systems Massachusetts Institute of

More information

Simulation-Based Performance Comparison of Queueing Disciplines for Differentiated Services Using OPNET

Simulation-Based Performance Comparison of Queueing Disciplines for Differentiated Services Using OPNET Simulation-Based Performance Comparison of Queueing Disciplines for Differentiated Services Using OPNET Hafiz M. Asif and El-Sayed M. El-Alfy College of Computer Science and Engineering King Fahd University

More information

Multiplexing. Common network/protocol functions. Multiplexing: Sharing resource(s) among users of the resource.

Multiplexing. Common network/protocol functions. Multiplexing: Sharing resource(s) among users of the resource. Common network/protocol functions Goals: Identify, study common architectural components, protocol mechanisms Synthesis: big picture Depth: Important topics not covered in introductory courses Overview:

More information

Chapter 6 Queuing Disciplines. Networking CS 3470, Section 1

Chapter 6 Queuing Disciplines. Networking CS 3470, Section 1 Chapter 6 Queuing Disciplines Networking CS 3470, Section 1 Flow control vs Congestion control Flow control involves preventing senders from overrunning the capacity of the receivers Congestion control

More information

Quality of Service (QoS)

Quality of Service (QoS) Quality of Service (QoS) The Internet was originally designed for best-effort service without guarantee of predictable performance. Best-effort service is often sufficient for a traffic that is not sensitive

More information

Frame Relay. Frame Relay: characteristics

Frame Relay. Frame Relay: characteristics Frame Relay Andrea Bianco Telecommunication Network Group firstname.lastname@polito.it http://www.telematica.polito.it/ Network management and QoS provisioning - 1 Frame Relay: characteristics Packet switching

More information

Congestion control in TCP

Congestion control in TCP Congestion control in TCP If the transport entities on many machines send too many packets into the network too quickly, the network will become congested, with performance degraded as packets are delayed

More information

2.993: Principles of Internet Computing Quiz 1. Network

2.993: Principles of Internet Computing Quiz 1. Network 2.993: Principles of Internet Computing Quiz 1 2 3:30 pm, March 18 Spring 1999 Host A Host B Network 1. TCP Flow Control Hosts A, at MIT, and B, at Stanford are communicating to each other via links connected

More information

CHAPTER 3 EFFECTIVE ADMISSION CONTROL MECHANISM IN WIRELESS MESH NETWORKS

CHAPTER 3 EFFECTIVE ADMISSION CONTROL MECHANISM IN WIRELESS MESH NETWORKS 28 CHAPTER 3 EFFECTIVE ADMISSION CONTROL MECHANISM IN WIRELESS MESH NETWORKS Introduction Measurement-based scheme, that constantly monitors the network, will incorporate the current network state in the

More information

Modelling a Video-on-Demand Service over an Interconnected LAN and ATM Networks

Modelling a Video-on-Demand Service over an Interconnected LAN and ATM Networks Modelling a Video-on-Demand Service over an Interconnected LAN and ATM Networks Kok Soon Thia and Chen Khong Tham Dept of Electrical Engineering National University of Singapore Tel: (65) 874-5095 Fax:

More information

QoS Configuration. Overview. Introduction to QoS. QoS Policy. Class. Traffic behavior

QoS Configuration. Overview. Introduction to QoS. QoS Policy. Class. Traffic behavior Table of Contents QoS Configuration 1 Overview 1 Introduction to QoS 1 QoS Policy 1 Traffic Policing 2 Congestion Management 3 Line Rate 9 Configuring a QoS Policy 9 Configuration Task List 9 Configuring

More information

The Encoding Complexity of Network Coding

The Encoding Complexity of Network Coding The Encoding Complexity of Network Coding Michael Langberg Alexander Sprintson Jehoshua Bruck California Institute of Technology Email: mikel,spalex,bruck @caltech.edu Abstract In the multicast network

More information

A Pipelined Memory Management Algorithm for Distributed Shared Memory Switches

A Pipelined Memory Management Algorithm for Distributed Shared Memory Switches A Pipelined Memory Management Algorithm for Distributed Shared Memory Switches Xike Li, Student Member, IEEE, Itamar Elhanany, Senior Member, IEEE* Abstract The distributed shared memory (DSM) packet switching

More information

Scheduling (Chapter 9) Outline

Scheduling (Chapter 9) Outline Scheduling (Chapter 9) An Engineering Approach to Computer Networking S. Keshav (Based on slides of S. Keshav http://www.cs.cornell.edu/home/skeshav/book/slides/index.html and material of J. Liebeherr,

More information

different problems from other networks ITU-T specified restricted initial set Limited number of overhead bits ATM forum Traffic Management

different problems from other networks ITU-T specified restricted initial set Limited number of overhead bits ATM forum Traffic Management Traffic and Congestion Management in ATM 3BA33 David Lewis 3BA33 D.Lewis 2007 1 Traffic Control Objectives Optimise usage of network resources Network is a shared resource Over-utilisation -> congestion

More information

CS551 Router Queue Management

CS551 Router Queue Management CS551 Router Queue Management Bill Cheng http://merlot.usc.edu/cs551-f12 1 Congestion Control vs. Resource Allocation Network s key role is to allocate its transmission resources to users or applications

More information

High Performance Fair Bandwidth Allocation for Resilient Packet Rings

High Performance Fair Bandwidth Allocation for Resilient Packet Rings In Proceedings of the 5th ITC Specialist Seminar on Traffic Engineering and Traffic Management High Performance Fair Bandwidth Allocation for Resilient Packet Rings V. Gambiroza, Y. Liu, P. Yuan, and E.

More information

Adaptive RTP Rate Control Method

Adaptive RTP Rate Control Method 2011 35th IEEE Annual Computer Software and Applications Conference Workshops Adaptive RTP Rate Control Method Uras Tos Department of Computer Engineering Izmir Institute of Technology Izmir, Turkey urastos@iyte.edu.tr

More information

Markov Chains and Multiaccess Protocols: An. Introduction

Markov Chains and Multiaccess Protocols: An. Introduction Markov Chains and Multiaccess Protocols: An Introduction Laila Daniel and Krishnan Narayanan April 8, 2012 Outline of the talk Introduction to Markov Chain applications in Communication and Computer Science

More information

Traffic contract. a maximum fraction of packets that can be lost. Quality of Service in IP networks 1. Paolo Giacomazzi. 4. Scheduling Pag.

Traffic contract. a maximum fraction of packets that can be lost. Quality of Service in IP networks 1. Paolo Giacomazzi. 4. Scheduling Pag. 4. Scheduling Pag. 1 Traffic contract A traffic contract between a provider and a customer includer a TCA and a SLA The TCA specifies the traffic profile The SLA specifies the QoS requirements a delay

More information

Mohammad Hossein Manshaei 1393

Mohammad Hossein Manshaei 1393 Mohammad Hossein Manshaei manshaei@gmail.com 1393 Voice and Video over IP Slides derived from those available on the Web site of the book Computer Networking, by Kurose and Ross, PEARSON 2 Multimedia networking:

More information

Configuring QoS. Understanding QoS CHAPTER

Configuring QoS. Understanding QoS CHAPTER 29 CHAPTER This chapter describes how to configure quality of service (QoS) by using automatic QoS (auto-qos) commands or by using standard QoS commands on the Catalyst 3750 switch. With QoS, you can provide

More information

Back pressure based multicast scheduling for fair bandwidth allocation

Back pressure based multicast scheduling for fair bandwidth allocation ack pressure based multicast scheduling for fair bandwidth allocation Saswati Sarkar and Leandros Tassiulas 2 Department of Electrical Engineering University of Pennsylvania 2 Department of Electrical

More information

CS 556 Advanced Computer Networks Spring Solutions to Midterm Test March 10, YOUR NAME: Abraham MATTA

CS 556 Advanced Computer Networks Spring Solutions to Midterm Test March 10, YOUR NAME: Abraham MATTA CS 556 Advanced Computer Networks Spring 2011 Solutions to Midterm Test March 10, 2011 YOUR NAME: Abraham MATTA This test is closed books. You are only allowed to have one sheet of notes (8.5 11 ). Please

More information

Chapter III. congestion situation in Highspeed Networks

Chapter III. congestion situation in Highspeed Networks Chapter III Proposed model for improving the congestion situation in Highspeed Networks TCP has been the most used transport protocol for the Internet for over two decades. The scale of the Internet and

More information

Topic 4b: QoS Principles. Chapter 9 Multimedia Networking. Computer Networking: A Top Down Approach

Topic 4b: QoS Principles. Chapter 9 Multimedia Networking. Computer Networking: A Top Down Approach Topic 4b: QoS Principles Chapter 9 Computer Networking: A Top Down Approach 7 th edition Jim Kurose, Keith Ross Pearson/Addison Wesley April 2016 9-1 Providing multiple classes of service thus far: making

More information

Optimal Routing and Scheduling in Multihop Wireless Renewable Energy Networks

Optimal Routing and Scheduling in Multihop Wireless Renewable Energy Networks Optimal Routing and Scheduling in Multihop Wireless Renewable Energy Networks ITA 11, San Diego CA, February 2011 MHR. Khouzani, Saswati Sarkar, Koushik Kar UPenn, UPenn, RPI March 23, 2011 Khouzani, Sarkar,

More information

Packet Scheduling with Buffer Management for Fair Bandwidth Sharing and Delay Differentiation

Packet Scheduling with Buffer Management for Fair Bandwidth Sharing and Delay Differentiation Packet Scheduling with Buffer Management for Fair Bandwidth Sharing and Delay Differentiation Dennis Ippoliti and Xiaobo Zhou Department of Computer Science University of Colorado at Colorado Springs Colorado

More information

Router s Queue Management

Router s Queue Management Router s Queue Management Manages sharing of (i) buffer space (ii) bandwidth Q1: Which packet to drop when queue is full? Q2: Which packet to send next? FIFO + Drop Tail Keep a single queue Answer to Q1:

More information

INTERNATIONAL TELECOMMUNICATION UNION

INTERNATIONAL TELECOMMUNICATION UNION INTERNATIONAL TELECOMMUNICATION UNION TELECOMMUNICATION STANDARDIZATION SECTOR STUDY PERIOD 21-24 English only Questions: 12 and 16/12 Geneva, 27-31 January 23 STUDY GROUP 12 DELAYED CONTRIBUTION 98 Source:

More information

Introduction to Operating Systems Prof. Chester Rebeiro Department of Computer Science and Engineering Indian Institute of Technology, Madras

Introduction to Operating Systems Prof. Chester Rebeiro Department of Computer Science and Engineering Indian Institute of Technology, Madras Introduction to Operating Systems Prof. Chester Rebeiro Department of Computer Science and Engineering Indian Institute of Technology, Madras Week 05 Lecture 18 CPU Scheduling Hello. In this lecture, we

More information

Performance Analysis of Cell Switching Management Scheme in Wireless Packet Communications

Performance Analysis of Cell Switching Management Scheme in Wireless Packet Communications Performance Analysis of Cell Switching Management Scheme in Wireless Packet Communications Jongho Bang Sirin Tekinay Nirwan Ansari New Jersey Center for Wireless Telecommunications Department of Electrical

More information

SIMULATION FRAMEWORK MODELING

SIMULATION FRAMEWORK MODELING CHAPTER 5 SIMULATION FRAMEWORK MODELING 5.1 INTRODUCTION This chapter starts with the design and development of the universal mobile communication system network and implementation of the TCP congestion

More information

Promoting the Use of End-to-End Congestion Control in the Internet

Promoting the Use of End-to-End Congestion Control in the Internet Promoting the Use of End-to-End Congestion Control in the Internet IEEE/ACM Transactions on ing, May 3 1999 Sally Floyd, Kevin Fall Presenter: Yixin Hua 1 About Winner of the Communications Society William

More information

Network Management & Monitoring

Network Management & Monitoring Network Management & Monitoring Network Delay These materials are licensed under the Creative Commons Attribution-Noncommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/) End-to-end

More information

Lecture Outline. Bag of Tricks

Lecture Outline. Bag of Tricks Lecture Outline TELE302 Network Design Lecture 3 - Quality of Service Design 1 Jeremiah Deng Information Science / Telecommunications Programme University of Otago July 15, 2013 2 Jeremiah Deng (Information

More information

ECE 610: Homework 4 Problems are taken from Kurose and Ross.

ECE 610: Homework 4 Problems are taken from Kurose and Ross. ECE 610: Homework 4 Problems are taken from Kurose and Ross. Problem 1: Host A and B are communicating over a TCP connection, and Host B has already received from A all bytes up through byte 248. Suppose

More information

Quality of Service (QoS)

Quality of Service (QoS) Quality of Service (QoS) A note on the use of these ppt slides: We re making these slides freely available to all (faculty, students, readers). They re in PowerPoint form so you can add, modify, and delete

More information

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Ali Al-Dhaher, Tricha Anjali Department of Electrical and Computer Engineering Illinois Institute of Technology Chicago, Illinois

More information

QoS for Real Time Applications over Next Generation Data Networks

QoS for Real Time Applications over Next Generation Data Networks QoS for Real Time Applications over Next Generation Data Networks Final Project Presentation December 8, 2000 http://www.engr.udayton.edu/faculty/matiquzz/pres/qos-final.pdf University of Dayton Mohammed

More information