
Call Admission Control in IP networks with QoS support

Susana Sargento, Rui Valadas and Edward Knightly
Instituto de Telecomunicações, Universidade de Aveiro, P-3810 Aveiro, Portugal
ECE Department, Rice University, Houston TX 77005, USA

Abstract

This paper addresses the problem of admission control for IP networks with QoS support. Several admission control architectures and algorithms are presented and compared. Special attention is given to the probing algorithm, and the stealing problem associated with this mechanism is studied through simulation.

I. INTRODUCTION

Telecommunications networks are growing and evolving very fast. While a few years ago there were separate network infrastructures for voice and data services, the current trend is to integrate new applications and services into a single packet-switching network with IP (Internet Protocol) as the unifying protocol. Traditional IP networks support only best-effort service: different services with different QoS (Quality of Service) requirements are treated equally by the network, and it is not possible to differentiate among them or to assure specific QoS targets for a given service. One of the main elements required to provide QoS in a network is the Call Admission Control (CAC) mechanism. If the network has no control over the number of flows that are active at the same time, the overall traffic demand may exceed what the network can support, and flows may be degraded (e.g. the transmission delay and the percentage of lost packets may be higher than required). With CAC support, when a new flow requests permission to use the network, the admission control algorithm calculates the available bandwidth on each link and decides whether there are sufficient resources to admit the new traffic flow while providing the requested QoS. Admission control is the central element for providing QoS in the Integrated Services (IntServ) [1] architecture defined by the IETF.
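A parameter-based admission test of this kind can be sketched as a toy model: admit a flow only if its requested rate fits in the residual capacity of every link on its path. The link names, capacities and the simple additive test below are illustrative assumptions, not an algorithm from this paper.

```python
# Toy link-level admission test: a flow is admitted only if its
# requested bandwidth fits on every link of its path.

def admit(path_links, capacity, reserved, request):
    """Return True if `request` b/s fits on every link of the path."""
    return all(reserved[link] + request <= capacity[link]
               for link in path_links)

# Hypothetical two-hop path with 10 Mb/s links.
capacity = {"A-B": 10e6, "B-C": 10e6}
reserved = {"A-B": 8e6, "B-C": 4e6}

print(admit(["A-B", "B-C"], capacity, reserved, 1e6))  # True: admitted
print(admit(["A-B", "B-C"], capacity, reserved, 3e6))  # False: A-B would overflow
```

The mechanisms surveyed below differ mainly in where this test runs (every router, a central broker, the edge, or the end-hosts) and in whether the reserved amounts come from explicit signaling or from measurements.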
IntServ includes a signaling protocol, RSVP (Resource ReSerVation Protocol) [2], to carry reservation requests to all the routers along the path. IntServ has scalability problems, since all routers need to keep track of flow reservations and maintain state information for all flows. To solve these scalability problems another architecture was proposed: Differentiated Services (DiffServ) [3]. In this architecture there are no resource reservations; flows are aggregated into classes according to specific characteristics. Services are treated differently according to their class, but there is no admission control mechanism to limit the number of flows in the network. Even when the network is congested a new flow can become active and violate the traffic parameters of previously established flows. Therefore there is no strict QoS guarantee. DiffServ can be implemented without scalability problems, but IntServ has a stronger service model. Instead of having distributed admission control with all core routers processing RSVP messages as in IntServ, a Bandwidth Broker (BB) was proposed [4] to concentrate the admission control functions in a single element. However, it is difficult for a single element to manage all the reservations in a network and store the information about all paths, elements and flows. Trying to profit from the best of the two IETF architectures while reducing the scalability problems, several novel architectures and algorithms have been proposed: DPS (Dynamic Packet State) [5], where the state information is inserted in the packet header; aggregation [6], which performs admission control only for a group of flows, decreasing the number of signaling messages; and egress admission control [7], which is based on passive monitoring of the network.
Although these mechanisms provide an efficient service model and scalability, they require specific functionalities in the core and edge routers, such as insertion of packet state in the headers, a special scheduler implemented in each router, and rate monitoring. To avoid the use of a signaling protocol and special packet processing in core nodes, a call admission control mechanism based on probing [9] was proposed, where a test flow is inserted into the network to measure its congestion level. This paper is organized as follows. In section II we describe several call admission control mechanisms. In section III we discuss the probing mechanism in more detail. In section IV we present a set of simulation experiments to assess the performance of the probing mechanism, and in section V we conclude the paper.

II. CALL ADMISSION CONTROL MECHANISMS

In this section we describe several call admission control mechanisms that assure end-to-end QoS differentiation to a service or a class of service.

A. RSVP Signaling

RSVP (Resource ReSerVation Protocol) [2] is the protocol that establishes and maintains resource reservations in an IntServ network. It communicates the resource demands and reservations to each router along the flow's path. It works as follows: (1) the sender sends a PATH message to the receiver announcing the traffic characteristics and QoS requirements of the flow, and each router along the path
retransmits it to the next hop. (2) Upon receiving the PATH message, the receiver sends a RESV message requesting network resources for the flow. Each router along the path can accept the request, or reject it if there are no available resources at that hop. If the request is rejected at a router, the router sends an error message to the receiver and the signaling process ends. If the reservation is accepted, bandwidth is reserved for this flow in each router. Although this signaling protocol is very strong in providing QoS support, it is not scalable, since it is necessary to maintain flow state in each router along the flow's path, and all routers participate in the signaling protocol. The number of RSVP messages processed is proportional to the number of flows in the network, and bandwidth must be reserved in each router on a per-flow basis. Both these disadvantages can lead to poor router performance.

B. Bandwidth Broker based Admission Control

Bandwidth Brokers (BB) remove the need for QoS reservation states in the core routers by centrally storing and managing this information. A BB [4] may be a router or a software package installed in a router/switch in the network. The main modules of the BB are call admission control and routing. The former maintains the QoS state of the network domain and is responsible for admission control and resource reservation. The latter decides the path that an admitted flow will traverse towards the receiver. The BB also contains databases with information about network topology, flows, and the QoS state of each path and node. Usually there is one BB per network domain. Since the sender and receiver can belong to different domains, the BBs of their domains and of the intermediate ones must communicate the QoS reservation states to each other. The general description of the call admission control module is as follows.
When a new flow with specific traffic parameters, delay and loss requirements requests admission, it sends a QoS request message to the BB. The BB recalculates the available bandwidth on each link and verifies whether there is a path where the new flow can be admitted. If the flow is admitted, the BB sends a message with a positive answer to the sender and updates its database. The available bandwidth on each link is calculated from information stored in the BB about active flows, their traffic characteristics and their paths. Flows with the same characteristics may be grouped into service classes, so that BB operations become faster and the number of flow requests a BB can support increases. Although in this architecture the core routers are freed from performing admission control decisions, the BB needs to manage the overall network and to store information about all elements, flows and paths in the network. This is very hard for a single element; therefore, for a large network, a distributed mechanism is preferable.

C. Dynamic Packet State (DPS)

In the DPS [5] technique, the flow state information (such as the reserved rate and variables used in the scheduling process) is inserted into packet headers, which removes the need for per-flow signaling and state management. The ingress router initializes the state information. Core routers process each incoming packet based on the state carried in it, possibly updating their internal state and the state in the packet's header before forwarding it to the next hop. This mechanism uses core-stateless scheduling disciplines [5], which calculate each packet's deadline based only on the state variables of the flow it belongs to. At core nodes packet classification is no longer needed and packet scheduling is based only on the state carried in packet headers. Thus, per-flow state is stored only at the ingress node, and each core node retrieves it from the packets themselves.
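The core-stateless idea can be sketched as follows: a router derives a packet's deadline from the reserved rate carried in the packet itself, keeping no per-flow table. The Virtual-Clock-style formula below is an illustrative assumption, not the exact algorithm of [5] (in DPS the previous deadline would itself travel in the packet header rather than be passed around locally).

```python
# Sketch of a core-stateless deadline computation: the only flow
# state used is the reserved rate (and previous deadline) carried
# with the packet, so the core router needs no per-flow table.

def deadline(arrival_time, prev_deadline, length_bits, rate_bps):
    """Deadline of a packet of a flow reserved at `rate_bps`."""
    return max(arrival_time, prev_deadline) + length_bits / rate_bps

# Packets of 12000 bits on a flow reserved at 1 Mb/s (illustrative).
d1 = deadline(0.0, 0.0, 12000, 1_000_000)    # 0.012 s
d2 = deadline(0.005, d1, 12000, 1_000_000)   # 0.024 s (back-to-back)
print(d1, d2)
```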
The state can be inserted into four bits of the Type of Service (ToS) byte, which are reserved for experimental use, and into the 13 bits of the ip_off field of the IPv4 header, which is used to support packet fragmentation and reassembly (usually only 0.22% of packets are fragmented). In terms of admission control, RSVP signaling is used to communicate between the sender and receiver, but RSVP messages are processed only by edge nodes. The ingress node, upon receiving a PATH message, simply forwards it through the domain towards the egress node. The egress node, upon receiving the first RESV message for a flow, forwards the message to the corresponding ingress node, which in turn sends a special signaling message along the path towards the egress node. Upon receiving this signaling message, each node along the path performs a local admission control test based on the aggregate reservation rate at that node. A simple method for calculating this aggregate rate is detailed in [5]. When a flow terminates, reservation termination messages are sent in order to release the reserved bandwidth. With this technique the core routers are freed from maintaining per-flow state, but a deterministic service is provided, since admission control is based only on the flow's rate inserted in the packet header; this reduces the utilization. Moreover, all routers in the flow's path are required to implement the same scheduling discipline.

D. Aggregation in IntServ

Aggregation [6] is a mechanism used to reduce the number of signaling messages in an IntServ architecture. In this technique admission control is performed only on an aggregated set of flows, and therefore core routers need only maintain the reservation state of each aggregate. The RSVP protocol is used, but only for the aggregate. Thus, core routers do not store the reservation state of individual flows.
More specifically, when a flow asks for admission, the ingress router performs the admission control decision based solely on its knowledge of the bandwidth occupancy of the aggregate. To allow for load fluctuations, the ingress router
can adjust the reservations in the core at slow time scales compared with the IntServ reservation time scale. Thus, the signaling and the amount of state information stored in the core routers can be greatly reduced. Aggregation implies a tradeoff: with more aggregation, more flows are rejected and the utilization decreases; with little aggregation the decrease in utilization is negligible, but the number of signaling messages remains high. If loads are relatively constant, the nodes rarely need to be signaled; otherwise the amount of signaling approaches that of IntServ.

E. Measurement-Based Admission Control in Egress Routers

In this scheme [7], admission control decisions are performed only by egress routers, without maintaining per-flow state in either core or egress routers. Admission decisions are based only on aggregate measurements collected at the egress router. The key technique is to passively measure the available service on the end-to-end path. Using a black-box system model, the measurements can incorporate the effects of cross traffic without explicitly measuring or controlling it. Cross traffic is traffic that merges on some links with the traffic being measured at the egress, but has a different egress router. For this purpose the measurement-based theory of envelopes [8] is used to characterize and control both arrivals and services in a general way. Arrival envelopes are based on the maximum rate of the arrivals; service envelopes are based on the minimum service available. By measuring the aggregate rate envelope, the short-time-scale burstiness of the traffic is captured, which is employed in resource reservation and admission control.
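The notion of an empirical arrival rate envelope can be sketched as follows. This is a simplified illustration, not the exact estimator of [8]: for each interval length, the envelope records the maximum arrival rate observed over any window of that length (here, windows anchored at arrival instants).

```python
# Simplified empirical max-rate envelope of an arrival process:
# for each window length t, find the busiest window of length t
# and record its arrival rate (arrivals per unit time).

def rate_envelope(arrival_times, window_lengths):
    """Map each window length t to the maximum rate seen over t."""
    envelope = {}
    for t in window_lengths:
        max_count = 0
        for start in arrival_times:
            # count arrivals in the half-open window [start, start + t)
            count = sum(1 for a in arrival_times if start <= a < start + t)
            max_count = max(max_count, count)
        envelope[t] = max_count / t
    return envelope

# Bursty illustrative arrival instants (seconds).
arrivals = [0.0, 0.1, 0.15, 0.9, 1.0, 1.05, 1.1, 2.5]
print(rate_envelope(arrivals, [0.5, 1.0, 2.0]))
```

Short windows capture the burstiness (high peak rate); long windows approach the long-term average, which is what the text means by the envelope capturing short-time-scale burstiness.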
Measuring the variation of the aggregate rate envelope characterizes the measurement errors at longer time scales, so the variance of the measured envelope can be used to determine the confidence level of the schedulability condition and to estimate the expected fraction of packets that would be lost if a new flow were admitted. For the service envelope, the cross-traffic effects are measured using the delay of each packet between the ingress and the egress node. The egress node computes the aggregate arrival envelope and the minimum available service, and then executes the admission control algorithm to accept or reject the new flow. If the minimum available service is sufficient to guarantee a maximum admissible delay for the new flow and to guarantee that the QoS requirements of the already admitted flows are not violated, the flow is admitted. Although the only router that needs to perform admission control is the egress one, only a large-scale prediction of the congestion level is made; the network conditions may change and the QoS requirements may be degraded.

F. End-Point Admission Control through Probing

In this mechanism [9] the admission of a new flow is performed by the end-hosts or egress/ingress routers by inferring the network congestion state on the flow's path. Before a new flow is established, the sender sends a packet stream along the flow's path with the same traffic characteristics as the flow requesting admission. The packet loss ratio, the delay or the delay variation are measured at the receiver, which verifies the network congestion level. This is called probing. If the measured performance is acceptable (according to the required service QoS), the flow is admitted; otherwise it is rejected. The QoS functionalities in this mechanism are pushed to the end-points, precluding the need for a signaling protocol or special functions in the core or edge routers.
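The probing decision itself reduces to a simple loss-ratio test at the end-points. The sketch below is illustrative: the function name, packet counts and the 2% threshold are assumptions, not values from the paper.

```python
# End-point probing decision: the receiver counts received probe
# packets, reports back to the sender, and the sender admits the
# flow only if the measured probe loss ratio is within the
# threshold tolerated by the service class.

def probe_decision(sent, received, loss_threshold):
    """Admit iff the probe loss ratio is below the class threshold."""
    loss_ratio = (sent - received) / sent
    return loss_ratio <= loss_threshold

# e.g. 1000 probe packets sent during the probing period:
print(probe_decision(1000, 990, 0.02))  # True: 1% loss, admit
print(probe_decision(1000, 950, 0.02))  # False: 5% loss, reject
```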
The overhead introduced by active probing and the set-up time required to initiate a call are disadvantages of this technique. In the next section this mechanism is studied in more detail.

III. END-POINT ADMISSION CONTROL AND ITS STEALING PROBLEM

The performance metrics for assessing the network congestion level are usually the end-to-end packet delay or the number of lost packets on the path from the sender to the receiver. These measurements can be used to perform admission control. The objective is to learn the effect on the network of inserting a new flow. If the network performance with the new flow is still admissible, that is, if the packet loss ratio or the delay is lower than the maximum admissible one, the new flow can be admitted. To assess the effect of admitting the flow, a sequence of packets is inserted along the flow's path for a small interval, much shorter than the mean holding time of the flow. This interval needs only to be large enough to estimate the packet loss ratio or the delay with a sufficient confidence level. The end-hosts (or ingress/egress pairs) need to implement a simple software routine to count the number of transmitted or received packets, or to store the mean or maximum packet delay of the probing flow. At the end of the probing period the receiver sends back to the sender a packet with the statistics, and the sender, upon processing this packet, decides whether to admit the new flow. If the flow is admitted, the end-points already know the impact of that new flow on the network congestion. The probing flows degrade the network utilization; however, the probing overhead is very low compared with the overall utilization, because the probing period is much shorter than the flow's holding time. Although this seems a very simple and efficient mechanism, it can introduce a stealing problem [10]. Consider an example of a fair queueing scheduler and a link with capacity C.
Suppose that a first flow requires (3/4)C and is admitted into the system. If a new flow requesting (1/2)C probes the system, it verifies that it can still achieve a loss-free service at the requested throughput and admits itself. The rate of the first flow is then reduced to the fair rate of (1/2)C, which violates the service requirements of this flow. Fair queuing isolates the probing flow (the one requesting (1/2)C) from the admitted one (the one with (3/4)C), so the probe cannot assess the impact of its acceptance on the other flow, and produces
stealing. Stealing can also occur in class-based queuing systems. This type of system can achieve differentiation among classes, since different QoS requirements can be assigned to each class. Consider a system where each class has an assigned weight of 1/2, that is, the fair share of each class is 1/2 of the available capacity. If class 1 has no flows and class 2 needs more than its fair share, class 2 will borrow class 1's bandwidth. If at some later time a user wants to admit a flow in class 1, the user probes the network in class 1 and verifies that class 1 has available resources, because there are no class 1 flows in it, and the flow is admitted. However, the bandwidth assigned to class 1 may be completely occupied by class 2 flows, and the admission of class 1 flows can steal resources that were previously assigned to other flows. The problem is a lack of observability of other classes: a probe cannot infer its impact on other classes because it does not assess the congestion state there. In the next section we study this problem through simulation.

IV. EXPERIMENTAL RESULTS

In this section we present a set of simulation experiments with the goal of evaluating the probing schemes presented in section III. The basic scenario consists of a large number of hosts interconnected via a 45 Mb/sec multi-class router. For some experiments, the router contains rate limiters, which drop all of a class's packets exceeding the pre-specified rate. We consider several multi-class schedulers, including class-based fair queuing, flow-based fair queuing, and rate limiters. We also consider FIFO scheduling for baseline comparisons. We assume that new flows arrive to the system as a Poisson process with mean inter-arrival time 1/λ, and that flow holding times are exponentially distributed with mean 1/µ. In all cases we consider that λ and µ are the same for all types of flows. All flows probe for a constant time of 2 seconds.
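A workload of this form can be sketched as follows: exponential inter-arrival times with mean 1/λ, exponential holding times with mean 1/µ, and offered load λ/µ (the average number of simultaneously active flows). The parameter values below are illustrative only, not those of the experiments.

```python
# Minimal sketch of the simulated workload: Poisson flow arrivals
# (exponential inter-arrival times, rate lam) with exponentially
# distributed holding times (mean 1/mu).

import random

def generate_flows(lam, mu, horizon, seed=1):
    """Return (arrival_time, departure_time) pairs up to `horizon`."""
    rng = random.Random(seed)
    t, flows = 0.0, []
    while True:
        t += rng.expovariate(lam)          # next inter-arrival time
        if t >= horizon:
            return flows
        flows.append((t, t + rng.expovariate(mu)))  # holding time

flows = generate_flows(lam=2.0, mu=0.1, horizon=1000.0)
offered_load = 2.0 / 0.1   # lambda / mu = 20
print(len(flows), offered_load)
```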
Flows send probes at their desired admission rate, except for ε-probes, which are transmitted at 64 kb/sec. New flows are admitted if the loss rate of the probes is below the class threshold. All experiments address the problem of resource stealing in multi-class networks. In the first set of experiments, depicted in Fig. 1, we investigate the challenge of simultaneously achieving high utilization and a strong service model without stealing. In this scenario, there are three traffic classes with bandwidth requirements of 512 kb/sec, 1 Mb/sec, and 2 Mb/sec respectively. In the figure, the flow-based fair queuing curve (labeled FQ) represents the case in which the scheduler allocates bandwidth fairly among flows, i.e., the N-th probing flow measures no loss if its rate is less than C/N. In contrast, the curves labeled Rate Limiters 1, Rate Limiters 2 and CBQ 1 level probing represent class-based scheduling. In the former case, each class is rate-limited to C/3, so that all loss occurs in the rate limiters and none in the scheduler. In the latter case, the classes are not rate-limited and the scheduler performs class-based fair queuing with each class weight set to 1/3. In all cases, probes are transmitted at the flow's desired rate and ε-probing is not performed. The x-axis label, load, is the resource demand, which is given by λ/µ.

Fig. 1 Utilization vs load for various node architectures

We make the following observations about the figure. From Fig. 1, it is clear that class-based fair queuing achieves higher utilization than the rate limiters, due to the latter's non-work-conserving nature. That is, the rate limiters prevent flows from being admitted in a particular class whenever the class's total reserved rate is C/3, even if capacity is available in other classes. However, from Fig.
2, it is clear that the higher utilization of class-based fair queuing is achieved at a significant cost: namely, class-based fair queuing incurs stealing, in which up to 1.5% of the bandwidth (in the range shown) guaranteed to flows is stolen by flows in other classes. Hence the experiments illustrate that neither technique simultaneously achieves high resource utilization and a strong service model. Moreover, as the resources demanded by a class become mismatched with the preallocated weights, the performance penalty of rate limiters increases further. That is, if the demanded bandwidth were temporarily split 80/10/10 rather than 33/33/33, as is the case for the curve labeled Rate Limiters 2 at a load of 40, then the rate limiters would restrict the system utilization to at most 53%, representing a 33/10/10 allocation.

Fig. 2 Stealing vs load for various node architectures

Second, observe the effects of flow aggregation on system performance. In particular, flow-based fair queuing achieves higher utilization and higher stealing than class-based fair
queuing. With no aggregation and flow-based queuing, smaller-bandwidth flows can always steal bandwidth from higher-bandwidth flows, resulting in higher utilization, since more flows are admitted (in particular low-bandwidth flows) as well as more flows having bandwidth stolen. In contrast, with class-based fair queuing, stealing only occurs when a class exceeds its 1/3 allocation (rather than a flow exceeding its 1/N allocation) and a flow from another class requests admission, an event that occurs less frequently.

V. CONCLUSIONS

This paper addressed the problem of admission control for IP networks with QoS support. The following call admission control schemes were discussed: RSVP signaling, Bandwidth Broker based admission control, Dynamic Packet State, aggregation in IntServ, measurement-based admission control in the egress router, and end-point admission control through probing. Special attention was given to the probing algorithm, and the stealing problem associated with this mechanism was studied through simulation.

VI. REFERENCES

[1] R. Braden et al., "Integrated Services in the Internet Architecture", Internet RFC 1633, 1994.
[2] L. Zhang et al., "RSVP: A New Resource ReSerVation Protocol", IEEE Network, vol. 7, pp. 8-18, September 1993.
[3] S. Blake et al., "An Architecture for Differentiated Services", Internet RFC 2475, 1998.
[4] Z. Zhang et al., "Decoupling QoS Control from Core Routers: A Novel Bandwidth Broker Architecture for Scalable Support of Guaranteed Services", in Proceedings of ACM SIGCOMM'00, Stockholm, Sweden, August 2000.
[5] I. Stoica and H. Zhang, "Providing Guaranteed Services without per Flow Management", in Proceedings of ACM SIGCOMM'99, Cambridge, MA, August 1999.
[6] F. Baker, C. Iturralde, F. Le Faucheur and B. Davie, "Aggregation of RSVP for IPv4 and IPv6 Reservations", Internet Draft, draft-ietf-issll-rsvp-aggr-02.txt, March 2000.
[7] C. Cetinkaya and E. Knightly, "Scalable Services via Egress Admission Control", in Proceedings of IEEE INFOCOM'00, Tel Aviv, Israel, March 2000.
[8] J. Qiu and E. Knightly, "Inter-class Resource Sharing using Statistical Service Envelopes", in Proceedings of IEEE INFOCOM'99, New York, NY, March 1999.
[9] V. Elek et al., "Admission Control Based on End-to-End Measurements", in Proceedings of IEEE INFOCOM'00, Tel Aviv, Israel, March 2000.
[10] L. Breslau et al., "End-point Admission Control: Architectural Issues and Performance", in Proceedings of ACM SIGCOMM'00, Stockholm, Sweden, August 2000.