1. Introduction

Today's Internet provides only best-effort service, i.e., the traffic is processed as quickly as possible, but there are no guarantees of Quality of Service (QoS). In this thesis the term QoS refers to the nature of the packet delivery provided by the network. QoS can be characterized by parameters such as bandwidth and packet delay. The increased usage of the Internet has made it to some extent heavily loaded, and at the same time new real-time applications have emerged, demanding much better service quality than a congested network can offer. In addition, Internet service providers (ISPs) may want to differentiate their services, not only with their pricing structure or connection type, but by other means as well. By providing alternative levels of QoS, ISPs can improve their revenues through differentiated pricing classes.

On the other hand, the current Internet model is both workable and simple. New technologies, e.g. Wavelength Division Multiplexing (WDM), might make bandwidth so cheap that QoS would be delivered automatically, and the problem could be solved by simply increasing the capacity of the congested network. That could be one solution, but then again some new applications might need so much bandwidth that the network capacity would be consumed nevertheless and new mechanisms would still be needed to provide QoS.

The Internet Engineering Task Force (IETF) has done a lot of work in standardizing mechanisms to provide QoS for IP networks. By now two QoS architectures have been specified: Integrated Services (IntServ) and Differentiated Services (DiffServ). IntServ was the first approach to solving the QoS problem of IP networks. However, the processing overhead and complexity of IntServ are too heavy for the Internet environment and therefore something simpler and more scalable is needed. Scalability thus became the main principle of the DiffServ architecture. Even if the standardization of the DiffServ architecture is still in progress, some disadvantages relating to the static nature of the DiffServ model have already been noted. It is therefore very tempting to try to combine the advantages of both models to develop a dynamic and scalable architecture, which would be able to offer predictable end-to-end services.

This thesis describes both architectures and suggests how the co-operation of IntServ and DiffServ could be arranged. In addition, it presents measurements that illustrate what kind of QoS a particular commercial router can provide in each QoS model. This study is based on Internet standardization documents, called Requests for Comments (RFC). Since many of the issues are so new that RFCs describing them do not yet exist, the text has been partly based on Internet Drafts, the working documents of the IETF. When referring to the draft documents one must bear in mind that they have no established position and can be changed at any time.

This study consists of eight chapters. Each topic includes a short analysis based on previous studies or the author's own opinions. The IntServ architecture is presented in Chapter 2. Chapter 3 describes the Resource Reservation Protocol (RSVP) that is used to request resource reservations from an IntServ network. The development of RSVP is continuing and the new drafts are also discussed in this chapter. This chapter also includes an analysis of RSVP and IntServ. Chapter 4 presents the DiffServ architecture and compares it with the IntServ model. Both architectures are described in order to discuss the fundamentals of the main scope of the thesis, the co-operation of Differentiated and Integrated Services. Chapter 5 presents scenarios for how the models can interoperate. Traffic scheduler mechanisms are described briefly in Chapter 6 in order to explain how the QoS models can be implemented. This chapter also includes the groundwork for the measurements of the following chapter. The measurements are presented in Chapter 7. The small test network consisted of three commercial routers and several workstations. The routers were configured to implement different kinds of QoS models. The QoS characteristics, such as packet drops and delays, were measured and analyzed. This chapter also presents the implementation status of IntServ/RSVP and DiffServ. The conclusion chapter summarizes the ideas and results presented in this study.

2. Integrated Services

When the first Integrated Services (IntServ) session was held in November 1993, the main concern was to discuss how the audio and video casts from IETF meetings could be supported by the network [Kilkki 1999, p. 53]. From the very beginning the main principle behind Integrated Services was to enable the transmission of real-time data without the need to modify the underlying Internet architecture. The goals of the IntServ working group are (1) to clearly define the services which this enhanced model provides, (2) to define the interfaces between the application and network service, routers and subnetworks, and (3) to develop router validation requirements in order to ensure that the proper service is provided [IETF-IntServ].

2.1 Real-Time Applications

As mentioned above, the demand for a new Internet service model started from the incompatibility of real-time applications with the Internet environment. Originally the Internet offered only a very simple Quality of Service (QoS) concept, namely point-to-point best-effort data delivery. In the case of IntServ, the outline of the services that a network should provide comes from the needs of real-time applications.

Real-time applications generally operate in the following way: a data stream is packed at the source and transported through the network to its destination, where the data is unpacked and played back by the receiving application. The network introduces some variation in the delay, called jitter. Jitter is caused by different queuing delays at network nodes. This jitter can be suppressed if the received data is first buffered and then played back at some fixed offset delay from the departure time. [RFC1633]

Real-time applications are divided into two categories: those that are tolerant of jitter and those that are not. An example of the first case is a unidirectional video-streaming application. At the destination the buffered data is played back after some offset, without the user noticing the jitter caused by the network. An example of an intolerant real-time application is an IP telephone, which requires a one-way end-to-end delay smaller than 350 ms in order to achieve at least medium-level voice quality [Isomäki&Tuominen 1999].

In addition to the delay requirements, real-time applications also need enough bandwidth from the network; for instance, a video cast might require a considerable amount of bandwidth. The IntServ model supports both of these real-time application types. For jitter-tolerant applications IntServ offers the controlled-load service (Section 2.7) and for intolerant ones the guaranteed service (Section 2.6).

2.2 The Philosophy of Integrated Services

The fundamental idea behind Integrated Services is that the existing Internet architecture does not have to be modified. IntServ only provides a set of extensions to the current best-effort data delivery. The two main building blocks of IntServ are resource reservation and admission control. The term QoS in the context of IntServ (and in this thesis) refers to the nature of the packet delivery provided by the network. QoS is characterized by parameters such as achieved bandwidth, packet delay, and packet loss rate [RFC2216].

IntServ assumes that guarantees for the delivery of real-time data can be achieved only by reserving resources from the network nodes. The term guarantees implies here that the user must be able to get a service whose quality is sufficiently predictable that the application can operate in an acceptable way over a duration of time determined by the user [RFC1633]. This guarantee level requires flow-specific states in the network nodes. These flow-specific states represent a change in the traditional network node model. A flow is defined as a distinguishable stream of related datagrams from a single user. The discrimination of flows in the IntServ model is done by the sender's and receiver's IP addresses, transport layer protocol and port numbers. Thus the allocation of network resources is accomplished on a flow-by-flow basis where many flows share the available bandwidth. This is described as link sharing. [RFC1633]

It is quite obvious that some kind of explicit setup mechanism is needed to create and maintain flow-specific reservations in the network. Chapter 3 discusses the reservation setup protocol called the Resource ReSerVation Protocol (RSVP), which is one of the main components of the IntServ architecture. Since a reservation implies that some of the users are getting a privileged service, policy and admission controls are needed. Admission control determines whether a new flow can be granted the requested resource reservation without affecting other already established flows.

Thus the main philosophy of IntServ argues that it is better to block a new connection request than to violate connections that already exist. The term Integrated Services can be defined as an Internet service model that includes best-effort service, real-time service, and controlled link sharing. [RFC1633]

2.3 The General Parameters of Integrated Services

The Integrated Services architecture uses two classes of control parameters to characterize the QoS requested by the applications: general and service-specific. The general parameters appear in all QoS control services and they are defined in the same way throughout the QoS control mechanism [RFC2215]. Service-specific parameters are used only with a specific service, i.e. the controlled-load or the guaranteed service. These control parameters are transported through the network with the help of the reservation setup protocol, which in the case of IntServ is RSVP. This section gives a brief overview of the characterization parameters in order to explain the nature of IntServ. See RFC 2215 [RFC2215] for a more detailed definition of the parameters.

The NON_IS_HOP parameter provides information about the presence of nodes that do not implement IntServ, in other words a break in the chain of network elements required to provide a specified QoS service class.

The NUMBER_OF_IS_HOPS parameter represents a counter that is cumulatively increased by one at each IntServ-aware network node.

The AVAILABLE_PATH_BANDWIDTH parameter provides information about the bandwidth available along the path followed by a data flow. This local parameter is an estimate of the bandwidth that the network element has available for packets following the path.

The MINIMUM_PATH_LATENCY parameter is the latency of the packet forwarding process associated with the network node. The purpose of this parameter is to provide a baseline minimum path latency for the use of guaranteed service. Together with the queuing delay bound offered by the guaranteed service, this parameter gives the application knowledge of both the minimum and maximum packet-delivery delay. Knowledge about these delays allows the receiving application to compute buffer requirements and thus remove the jitter caused by the network.

The PATH_MTU parameter carries the maximum transmission unit (MTU) for packets following a data path. The parameter includes upper-layer and IP headers and informs the sender about the MTU size that can traverse the data path without being fragmented.

The TOKEN_BUCKET_TSPEC parameter describes the traffic parameters used by the sender to characterize the traffic that it expects to generate. This parameter is also used to describe the reservation request. The parameter takes the form of a token-bucket specification, presented in the next section, plus a peak rate p, a minimum policed unit m, and a maximum packet size M. The minimum policed unit enables the router to estimate the per-packet resources needed to process the flow's packets. The peak rate makes it possible to compute the maximum packet rate. The maximum packet size is the largest packet that will receive QoS-controlled service. The token-bucket specification itself includes an average token rate r and a bucket depth b. [RFC2215]

2.4 Token Bucket

The token bucket is a traffic control and shaping mechanism that dictates when traffic can be transmitted. The transmission is based on the presence of tokens in the bucket. The token bucket is a very useful mechanism for policing a data flow. This mechanism can also be used in Differentiated Services, where the policed traffic flow is an aggregate, not an individual flow. The token bucket contains tokens, each of which represents a number of bytes. When tokens are present, a flow is allowed to transmit traffic. If the bucket contains no tokens, a flow can not transmit packets. Tokens are continuously generated at rate r, which in the case of IntServ is measured in bytes of IP datagrams per second. When an IP packet of size L is transmitted, the corresponding amount of tokens is removed from the bucket. The bucket depth b, measured in bytes, determines how bursty the transmitted flow can be. If there are tokens in the bucket it is possible to transmit traffic faster than the token rate, but in the long run the average rate can not be higher than the token rate r. Figure 2.1 shows the nature of the token bucket scheme. [Ferguson&Huston 1998, p. 66]

Figure 2.1 Token bucket
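
To make the token-bucket behaviour described above concrete, the following Python sketch models a bucket with token rate r and depth b. It is only an illustration of the mechanism, not code from any RFC; the class and parameter names and the use of wall-clock time are assumptions of this example.

```python
import time

class TokenBucket:
    """Token bucket with rate r (bytes/s) and depth b (bytes)."""

    def __init__(self, rate_r: float, depth_b: float):
        self.rate = rate_r            # token generation rate r, bytes per second
        self.depth = depth_b          # bucket depth b, bytes
        self.tokens = depth_b         # start with a full bucket
        self.last = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        # Tokens accumulate at rate r but never exceed the depth b.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def conforms(self, packet_size_l: int) -> bool:
        """Return True if an IP packet of L bytes may be sent now.

        Sending removes L bytes' worth of tokens; a burst of up to b bytes can
        be sent at once, but the long-term average rate cannot exceed r.
        """
        self._refill()
        if self.tokens >= packet_size_l:
            self.tokens -= packet_size_l
            return True
        return False


if __name__ == "__main__":
    bucket = TokenBucket(rate_r=125_000, depth_b=10_000)  # 1 Mbit/s, 10 kB burst
    print(bucket.conforms(1500))   # True: the initial burst fits in the bucket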

2.5 Traffic Control

The standardizing document RFC 1633 [RFC1633] offers an implementation framework which can help explain the IntServ model. This framework includes four traffic control components: the packet scheduler, the admission control, the classifier and the reservation setup protocol, see Figure 2.2. The router can be divided into two broad functional categories: the forwarding path below the thick horizontal line and the background code above the line. The forwarding path of the router is executed for each packet and it is divided into three sections: the input driver, the internet forwarder and the output driver. The background routines control the forwarding path and handle reservation requests.

The packet classifier maps each incoming packet to a specific class. The classification may be based on the contents of the packet header, such as the source and destination addresses, the TCP or UDP port numbers, or their combination. All packets in the same class get the same treatment from the packet scheduler. The packet scheduler controls the forwarding of different packet streams by using a queuing mechanism. The scheduler moves packets from the classifier queues to an output queue in order to achieve the reserved QoS service.

There are a few scheduler mechanisms which can be used as an IntServ-aware router scheduler; some of them are presented in Chapter 6.

The admission control determines whether a new flow can be granted the requested QoS without impacting earlier guarantees. Each time a host requests a QoS service, the local admission control routine at each node along the path decides to either accept or reject the request. The admission control also plays an important role in accounting and administrative reporting. It can check whether a node is allowed to make the reservation it is requesting and charge the user for the provided QoS service. Admission control is sometimes confused with policing, which in the case of IntServ is one of the packet scheduler's functions. As mentioned in Section 2.2, admission control is an essential part of the IntServ model.

To set up flow states in the routers along the end-to-end flow-transit path, a resource reservation protocol is needed. The default reservation setup protocol is RSVP, although other protocols are also allowed. In order to state its resource requirements, an application specifies the desired QoS using a list of parameters called the flow specification (Flowspec). The Flowspec is carried by the reservation setup protocol, passed to admission control and, if accepted, used to parametrize the packet scheduling mechanism. [RFC1633]

Figure 2.2 Implementation Model [RFC1633]
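
As a rough illustration of the classifier just described, the sketch below maps packets to classes using IntServ-style flow identification (addresses, transport protocol and ports) and falls back to best-effort for unmatched packets. The field names and the dictionary lookup are assumptions made for this example, not part of RFC 1633.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    """IntServ-style flow identification: addresses, protocol and ports."""
    src_addr: str
    dst_addr: str
    protocol: str          # e.g. "UDP" or "TCP"
    src_port: int
    dst_port: int

class Classifier:
    """Maps each incoming packet to a class; unmatched packets are best-effort."""

    def __init__(self):
        self.table = {}    # FlowKey -> service class name

    def install(self, key: FlowKey, service_class: str) -> None:
        # Installed by the reservation setup protocol after admission control.
        self.table[key] = service_class

    def classify(self, key: FlowKey) -> str:
        return self.table.get(key, "best-effort")

if __name__ == "__main__":
    c = Classifier()
    voice = FlowKey("10.0.0.1", "10.0.0.2", "UDP", 5004, 5004)
    c.install(voice, "guaranteed")
    print(c.classify(voice))                                             # guaranteed
    print(c.classify(FlowKey("10.0.0.3", "10.0.0.4", "TCP", 80, 1234)))  # best-effort
```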

2.6 Guaranteed Service

As described in Section 2.1, guaranteed service can be used by intolerant real-time applications. This service is specified in RFC 2212 [RFC2212]. The service provides a guaranteed bandwidth and delay bound for packets delivered along the reserved path. The term guaranteed bandwidth implies that no queuing losses due to buffer overflow will occur if the flow stays within the bounds of its specified traffic parameters. Guaranteed delay refers to the upper bound of the end-to-end delay that a datagram will experience on the path from the sender to the receiver. Guaranteed service does not attempt to minimize jitter; it merely controls the maximum queuing delay.

Data transmission in IP networks involves two types of delay: a fixed delay and a queuing delay. The fixed delay is a property of the chosen path and includes, for example, transmission delays. Therefore only the queuing delay can be controlled by guaranteed service. The guaranteed service framework asserts mathematically that the queuing delay is primarily a function of two factors: the token bucket depth (b) and the reserved data rate (R). Since the application controls these values, it has knowledge of the queuing delay. As a result the application is able to control its buffer requirements and set its playback point so that all of the packets arrive in time. Furthermore, if the delay is too great, the receiver's application can modify its reservation request to achieve a lower delay.

The end-to-end guaranteed service behavior without the error terms conforms to the fluid model. The fluid model is the service that would be provided if there were a dedicated wire of bandwidth R between the source and the receiver. Thus the end-to-end delay bound in guaranteed service is

    Dbound = (b - M)/R * (p - R)/(p - r) + (M + Ctot)/R + Dtot,    when p > R >= r    (2.1)

or, in the case where the peak rate is smaller than or equal to the data rate R,

    Dbound = (M + Ctot)/R + Dtot,    when r <= p <= R    (2.2)

where r, b, p and M are the TOKEN_BUCKET_TSPEC parameters and R is the reserved data rate, which must be greater than or equal to the token-bucket rate r. The rate-dependent error term Ctot represents the end-to-end sum of delays that a datagram might experience due to the rate parameters of the flow, also known as packet serialization. The rate-independent error term Dtot represents the worst-case end-to-end sum of delays in the transit time as the datagram is transferred through a service element, i.e. a router.

The guaranteed service is invoked by a sender who specifies the flow's traffic parameters, called the TOKEN_BUCKET_TSPEC or in short Tspec. The receiver requests the desired service level with the Reservation Specification (Rspec). The Rspec contains the data rate R and a slack term S, which signifies the difference between the desired delay and the delay obtained by using the reservation level R. By using the slack term, a network element can reduce its resource reservation for this flow. The Rspec rate can be larger than the Tspec rate, because higher rates are assumed to reduce queuing delay, see formulas 2.1 and 2.2. If the application needs lower delays, it can request a new reservation with a larger data rate R.

There are two types of policing associated with guaranteed service: simple policing and reshaping. Simple policing is done at the edge of the network. The traffic in a flow is compared against the Tspec, including conformance to the token bucket. Non-conforming packets are treated as best-effort datagrams. Reshaping is done at intermediate nodes within the network. The reshaping mechanism delays the forwarding of datagrams until they are in conformance with the Tspec. Reshaping is done by combining a buffer with a token bucket and a peak rate regulator, and buffering the flow's traffic until it can be forwarded in conformance with the token-bucket and peak rate parameters [RFC2212]. This kind of reshaping increases the total delay, but it can reduce the overall jitter of the flow.

Guaranteed service represents one extreme of end-to-end delay control for networks. In order to provide this high level of assurance, guaranteed service is typically useful only if every network element along the path supports it. Moreover, the dynamic nature of this service requires that a setup protocol is used to request services and to update them at intermediate routers. [RFC2212]
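
The following sketch simply evaluates formulas 2.1 and 2.2 for an example flow. The function is only a restatement of the formulas above; the numeric parameter values in the example are invented for illustration and do not come from the thesis or from RFC 2212.

```python
def guaranteed_delay_bound(r, b, p, M, R, Ctot, Dtot):
    """End-to-end queuing delay bound (formulas 2.1 and 2.2).

    r, b, p and M come from the TOKEN_BUCKET_TSPEC; R is the reserved rate
    (R >= r); Ctot and Dtot are the accumulated error terms.
    """
    if p > R:                        # formula 2.1: peak rate above the reserved rate
        return (b - M) / R * (p - R) / (p - r) + (M + Ctot) / R + Dtot
    return (M + Ctot) / R + Dtot     # formula 2.2: r <= p <= R

if __name__ == "__main__":
    # Invented example: 125 kB/s token rate, 10 kB bucket, 250 kB/s peak rate,
    # 1500 B maximum packets, 200 kB/s reservation, Ctot = 3000 B, Dtot = 20 ms.
    bound = guaranteed_delay_bound(r=125_000, b=10_000, p=250_000,
                                   M=1500, R=200_000, Ctot=3000, Dtot=0.020)
    print(f"delay bound: {bound * 1000:.1f} ms")   # about 59.5 ms
```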

2.7 Controlled-load Service

As described in Section 2.1, controlled-load service is to be used with tolerant real-time applications, e.g. unidirectional audio and video casts, which do not have absolute delay requirements but are able to buffer the received data. Controlled-load service is specified in RFC 2211 [RFC2211]. The controlled-load service is designed to provide end-to-end traffic behavior that closely approximates traditional best-effort service in unloaded network conditions. In other words, the service is better than what best-effort can offer in a congested network. The applications using this service may assume that a high percentage of the transmitted packets will be successfully delivered by the network, i.e. the amount of dropped packets should be equivalent to the error rate of the transmission media. The latency introduced by the network will not greatly exceed the minimum delay experienced by any successfully transmitted packet. The controlled-load service does not make use of specific target values for control parameters such as delay or loss.

The controlled-load service uses admission control to assure that service is received even when the network element is overloaded. The receiver's application requests the controlled-load service with the help of the sender's Tspec. The reservation request will be accepted if the admission control process is successful, i.e. if the network element has the capacity to forward the flow's packets as the Flowspec assumes and if the receiver has permission to make such a reservation. The controlled-load service is provided for traffic that conforms to the traffic specification given at flow setup time: the amount of data sent during any period of length T does not exceed rT + b, where r and b are the token bucket parameters. Non-conforming packets can be forwarded as best-effort traffic.

The aim of the controlled-load service is to provide a better service than best-effort but without any hard guarantees. As has already been mentioned, this service is sufficient for applications which can operate with best-effort service in a lightly loaded network, but not in a congested one. [RFC2211]
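
A minimal sketch of the conformance rule above, assuming the simple interpretation that a flow may send at most rT + b bytes in any period of length T; the traffic figures are invented for illustration.

```python
def conforms(bytes_sent_in_period: int, period_T: float, r: float, b: float) -> bool:
    """Controlled-load conformance: data sent in a period T must not exceed r*T + b."""
    return bytes_sent_in_period <= r * period_T + b

if __name__ == "__main__":
    r, b = 125_000, 10_000                 # 1 Mbit/s token rate, 10 kB bucket
    print(conforms(135_000, 1.0, r, b))    # True: exactly r*T + b
    print(conforms(200_000, 1.0, r, b))    # False: the excess is forwarded as best-effort
```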

2.8 Best-Effort Service

The default and traditional datagram delivery service is called best-effort. It is the third service class of Integrated Services. This service does not guarantee any kind of QoS: it only delivers the packets as fast as possible. The most commonly used scheduling mechanism is FIFO, which delivers datagrams to the output port in the same order as they arrived at the input port. This service is sufficient for all applications if the network element is lightly loaded. In a congested network queuing losses and queuing delays are typical, and therefore best-effort service is no longer acceptable for real-time applications.

2.9 Integrated Services Over Specific Link Layers

As is well known, the Internet consists of different network technologies. The aim of the Integrated Services over Specific Link Layers (ISSLL) working group is to define the specifications and techniques needed to implement Internet Integrated Services capabilities within specific network technologies [IETF-ISSLL]. A service mapping must be defined for each specific subnetwork technology. ISSLL specifies how the IntServ services, e.g. the controlled-load or guaranteed services, are provided by the specific link layer technology and how the network layer reservation setup protocol is implemented or mapped onto the link layer technology. So far the working group has published several RFCs and drafts that describe how IntServ can be mapped onto ATM, point-to-point links, IEEE 802-style networks and low-speed networks. [IETF-ISSLL] This thesis does not discuss the link layer technologies but focuses instead on the co-operation of the two IP network layer mechanisms, Integrated and Differentiated Services.

3. Resource Reservation Protocol

The fourth component of Integrated Services, in addition to the admission control, the packet scheduler and the classifier, is the reservation setup protocol. With the reservation setup protocol the applications can communicate their QoS requirements to the nodes along the transit path. In addition, with the help of the reservation setup protocol the network nodes can communicate to one another the QoS requirements that must be provided for particular traffic flows. The reservation setup protocol designed for Integrated Services is called the Resource ReSerVation Protocol, RSVP [RFC1633]. RSVP is specified in RFC 2205 [RFC2205], on which this chapter is based.

3.1 General RSVP Protocol Mechanisms

RSVP is a signaling protocol, i.e. it does not transport application data. RSVP operates on top of IPv4 or IPv6 (1), in the same way as other IP control protocols, e.g. ICMP and IGMP, do. RSVP has the IP protocol number 46. RSVP sends resource requests in only one direction. It is designed especially for multicast purposes, also with multiple senders, but it works with unicast, too. RSVP is receiver-oriented, i.e. it is the receiver who is responsible for requesting specific QoS services, not the sender. This approach accommodates large groups, dynamic group membership and heterogeneous receiver requirements.

Each RSVP sender host regularly transmits RSVP Path messages downstream along the uni-/multicast routes provided by the routing tables in the routers; in other words, RSVP is not a routing protocol in itself. Path messages store path information at each RSVP-aware node in the traffic path. The path state includes at least the unicast IP address of the previous-hop node, which is used to route the Resv messages hop-by-hop in the reverse direction, as illustrated in Figure 3.1. Since the receiver is responsible for the reservation request, the Resv message specifies the desired QoS and sets up the reservation state at each node in the traffic path, see Figure 3.2. The reservation is made in one direction only, but an application can act both as a sender and a receiver at the same time. The reservation and path states created by RSVP are called soft states.

(1) For operating systems that do not support raw network I/O, encapsulation of RSVP messages in UDP is allowed.

Figure 3.1 RSVP operation

Soft states are maintained by periodic refresh messages, and the absence of these messages deletes the reservation and path states from the routers as they time out. The refresh messages are Path and Resv messages that are sent periodically by the sender and the receiver, respectively. The soft state is necessary for dynamic resource reservations, because the data path can change. As a result, the reservation path also has to change in order to correspond to the new data path. An old reservation will be deleted after a timeout.

RSVP is designed to work transparently across non-RSVP clouds. Non-RSVP-aware routers route the RSVP messages like any other unicast or multicast packets. These non-RSVP-capable routers along the path do not affect the operation of RSVP. Nevertheless, the end-to-end QoS is then unpredictable.

At each RSVP-aware node an RSVP request is passed to two local decision modules: admission control and policy control. The policy control checks whether the user has administrative permission to make a reservation. The admission control determines whether the node has enough free resources to supply the requested QoS. If both checks succeed, the parameters from the filter specification and flow specification, explained in the next section, are set in the packet classifier and packet scheduler, respectively. This is shown in Figure 3.2. The admission control, packet classifier and packet scheduler together are called the traffic control. These traffic control mechanisms were presented in Chapter 2. [RFC2205]
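
To illustrate the soft-state idea, the sketch below keeps per-session state together with an expiry time: the state survives only as long as refresh messages keep arriving and is deleted once it times out. The 30-second lifetime, the session keys and the stored fields are assumptions of this example, not values taken from RFC 2205.

```python
import time

class SoftStateTable:
    """Per-session soft state that expires unless refreshed by periodic messages."""

    def __init__(self, lifetime_s: float = 30.0):
        self.lifetime = lifetime_s
        self.state = {}    # session id -> stored path/reservation state
        self.expiry = {}   # session id -> absolute timeout

    def refresh(self, session: str, info: dict) -> None:
        # Called on every Path/Resv message: (re)install state and push the timeout.
        self.state[session] = info
        self.expiry[session] = time.monotonic() + self.lifetime

    def cleanup(self) -> None:
        # Called periodically: drop state whose refreshes have stopped arriving.
        now = time.monotonic()
        for session in [s for s, t in self.expiry.items() if t < now]:
            del self.state[session], self.expiry[session]

if __name__ == "__main__":
    table = SoftStateTable(lifetime_s=30.0)
    table.refresh("sess-1", {"prev_hop": "10.0.0.1", "tspec": {"r": 125_000, "b": 10_000}})
    table.cleanup()
    print("sess-1" in table.state)   # True as long as refreshes keep arriving
```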

Figure 3.2 RSVP in hosts and routers [RFC1633]

3.2 RSVP Messages

The RSVP protocol consists of seven messages. The two fundamental RSVP message types are Path and Resv. As mentioned earlier, an RSVP sender transmits Path messages downstream to the unicast or multicast address. The Path message contains information about the previous hop address and three information elements called the Sender Template, the Sender Tspec, and the Adspec.

The Sender Template contains information called the filter specification (filterspec), which uniquely identifies the sender's flow from other RSVP sessions on the same link. The identification is done by the sender's IP address and optionally the UDP/TCP sender port. The obligatory Sender Tspec characterizes the traffic the sender expects to generate. The receiver uses this information to choose an appropriate size for the reservation request. The Path message may also carry additional advertising information (Adspec). The Adspec is updated locally at every RSVP node. The information contained in the Adspec is divided into fragments: each fragment is associated with a specific control service. This way new service classes can be added in the future without modifying the RSVP protocol. The Adspec includes specific information about available resources, delay and bandwidth estimates, the indication of non-IntServ-aware nodes and also service-specific information for the guaranteed service.

The Resv messages follow exactly the reverse path(s) the data packets will use, by using the path states created by the Path messages. Resv messages create and maintain reservation states in each node along the path(s). The Resv message contains information about the reservation style (see Section 3.3), the appropriate Flowspec object, and the filter spec that identifies the sender. The format of the Flowspec depends on the requested service. When a receiver requests controlled-load service, only a Tspec is contained in the Flowspec. When requesting guaranteed service, both a Tspec and an Rspec are contained in the Flowspec object. The Flowspec is used to set parameters in a node's packet scheduling process (see Figure 3.2). The filter spec is in the same form as the Sender Template. The session is identified with the help of the destination address, the protocol ID and the destination port. The filter spec is used to set parameters in the packet-classifier process. Data that does not match any of the filter specs is treated as best-effort traffic.

The remaining RSVP message types are the path and reservation teardowns (PathTear and ResvTear), the path and reservation errors (PathErr and ResvErr) and the confirmation of a requested reservation (ResvConf). RSVP teardown messages remove path or reservation state immediately. It is recommended that a reservation be torn down as soon as an application finishes, although it is not necessary to do so explicitly. A teardown request may also be initiated by a router as the reservation state times out. PathTear messages are generated by senders or by routers due to the timeout of path state in any node along the traffic path. Similarly, a ResvTear message is generated by receivers or by any node in the traffic path in which the reservation state has timed out. PathErr messages are sent upstream towards the sender that created the error, and ResvErr messages downstream towards the receivers; these messages do not modify the path state in the nodes through which they pass. The PathErr and ResvErr messages indicate an error in the processing of Path or Resv messages, respectively. When a receiver wants to obtain a confirmation of its reservation request, it can include a confirmation request object in a Resv message. Each node in the transit path that receives a Resv message containing a reservation confirmation object sends a ResvConf message. Since the receipt of a ResvConf does not give any guarantees, a receiver may receive a ResvConf followed by a ResvErr. [RFC2205]

3.3 RSVP Reservation Styles

Due to the interaction of multicasting and RSVP, some kind of merging of RSVP reservations is required. Multicast uses packet replication when it delivers packets to the different next-hop nodes. RSVP must merge the reservation requests and compute the maximum of their Flowspecs at each replication point. The specific merging guidelines for the controlled-load and guaranteed services are presented in RFC 2211 [RFC2211] and RFC 2212 [RFC2212], respectively.

The reservation style is one type of information included in the Resv messages. The reservation styles concern the treatment of reservations for the different senders within the same session (multipoint-to-multipoint) and the selection of senders. There are currently three defined styles: the Wildcard-Filter (WF) style, the Fixed-Filter (FF) style and the Shared-Explicit (SE) style. The WF style creates a single reservation that is shared by the flows from all upstream senders. This reservation is the largest of the resource requests from all receivers. Senders are not explicitly selected and therefore a new sender can automatically join the session. The FF style creates a distinct reservation for data packets from a particular sender. Consequently, the resources are not shared with other senders' packets for the same session. Also, the senders must be explicitly selected, which is done by the filter spec. The SE style creates one shared reservation, just like WF, but the receiver is allowed to explicitly specify the set of senders to be included. Table 3.1 summarizes the reservation styles.

Table 3.1 RSVP reservation styles

  Sender Selection    Distinct reservation          Shared reservation
  Explicit            Fixed-Filter (FF) style       Shared-Explicit (SE) style
  Wildcard            (none defined)                Wildcard-Filter (WF) style
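
As a rough sketch of the merging described above, the code below computes the reservation a node would install when several receivers request different rates under the three styles: the merged Flowspec is the maximum of the requests, and the sender-selection rule of the style decides whether the reservation is shared or per-sender. Representing a Flowspec by a single rate value is a simplification made only for this example.

```python
def merge_flowspecs(requests: list) -> float:
    """At a replication point the merged reservation is the maximum of the requests."""
    return max(requests)

def effective_reservations(style: str, requests: dict) -> dict:
    """requests maps a sender name to the rates requested by downstream receivers.

    WF: one shared reservation covering all senders (wildcard selection).
    SE: one shared reservation, but only for explicitly listed senders.
    FF: a distinct reservation per explicitly selected sender.
    """
    if style in ("WF", "SE"):
        shared = merge_flowspecs([r for rates in requests.values() for r in rates])
        return {"shared": shared}
    if style == "FF":
        return {sender: merge_flowspecs(rates) for sender, rates in requests.items()}
    raise ValueError("unknown reservation style")

if __name__ == "__main__":
    reqs = {"sender-A": [100_000, 250_000], "sender-B": [150_000]}
    print(effective_reservations("WF", reqs))   # {'shared': 250000}
    print(effective_reservations("FF", reqs))   # {'sender-A': 250000, 'sender-B': 150000}
```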

3.4 An Analysis of IntServ and RSVP

Integrated Services assumes that QoS can only be achieved by reserving resources for particular flows. However, in the case of a backbone router, thousands of real-time sessions may exist simultaneously. The information about thousands of reservations needs to be stored, processed and accessed. Thus managing reservation states at routers with large interfaces causes significant overhead and degrades the performance of the router. In other words, IntServ has a scaling problem. The measurement section of this thesis illustrates this problem in the case of a low-end router. For more information on the performance of RSVP see [Chiueh&Neogi 1997].

RSVP signaling messages can also generate a considerable amount of control traffic when there are several reservations. Furthermore, the reservation setup process may be delayed or the reservation may time out if the PATH or RESV messages are lost because of congestion. The RSVP working group is preparing a document that presents mechanisms to reduce the signaling overhead of RSVP [draft-refresh]. The bundle message is used to aggregate multiple RSVP messages within a single PDU. This reduces the number of RSVP refresh messages. The latency and reliability problems are solved by acknowledgements. These acknowledgement messages can also be bundled, so that they do not produce more signaling traffic.

One problem with IntServ is that the hosts and applications have to be RSVP-aware, i.e. they have to generate RSVP messages and make resource requests. At the moment there are only a few applications that are able to use RSVP. However, many operating systems already support RSVP, see Section 7.5.

Fundamentally RSVP is designed to support the delivery of long-lived multicast sessions. For that purpose RSVP is very dynamic and efficient. The support of heterogeneous receiver requests and the ability to merge the reservations of several senders actually make RSVP convenient for multicast sessions in small networks. Since both IntServ and RSVP assume that the application has to know its traffic characteristics and that the reservation is made before the actual data transmission, IntServ/RSVP does not seem to be useful for short-lived sessions such as web browsing. A system of this kind has nevertheless been presented [Kalman et al. 1999].

IntServ/RSVP tries to bring guaranteed connections into IP networks. This may be a difficult task to achieve in an environment that is based not on switching or virtual circuits but on routing. The guaranteed service might therefore be hard to implement in an IP environment. The thesis An Analysis of the Applicability of RSVP focuses on this point of view [Schwantag 1997].

It seems that using RSVP and IntServ could be a useful solution in an environment where the number of flows is limited, the link is often overloaded and dynamic admission control is needed. Such an environment could be the access link to an ISP's core network [Isomäki&Tuominen 1999].

The scalability problems of IntServ could be solved with some kind of flow aggregation. Aggregation means that several flows are treated as one. In the case of a backbone router only a small fixed number of service classes are then offered. The scalability benefits of this approach have also been demonstrated mathematically [Detti et al. 1999]. The next chapters discuss that kind of approach.

4. Differentiated Services

Differentiated Services (DiffServ) is a direct extension of the work done by the Integrated Services and RSVP working groups. The goal is basically the same for both architectures: to provide quality of service and service differentiation in IP networks. During 1997 the concern about the scalability of RSVP increased and it became obvious that a simpler model was needed to provide QoS in the Internet. In August 1997 several presentations related to different service models were made in the IETF's IntServ working group session [Kilkki 1999 p. 65]. The Differentiated Services working group was established in February 1998. The description of the working group emphasizes the need for simple but versatile methods of providing service differentiation. The DiffServ approach defines a small set of building blocks and then builds services from these blocks. [IETF-DiffServ]

4.1 Basic Principles of Differentiated Services

DiffServ has benefited from the experiences with Integrated Services. According to the definition of the DiffServ architecture, this new QoS architecture should work with existing applications, should not depend on application signaling and should avoid per-microflow state within core nodes. Whereas IntServ and RSVP were defined primarily for real-time multicast applications, DiffServ is suitable for all kinds of IP applications, e.g. Web browsing and IP phone.

The main concern of DiffServ is scalability and how it can be maintained in future high-speed networks as well. Scalability is achieved by aggregating the traffic classification state. Core nodes maintain only a few states, into which the packets are classified by using IP-layer packet marking that employs the Type of Service (TOS) field. The TOS field is redefined as the DS field in the DiffServ architecture, see Section 4.2. The network nodes apply forwarding behaviors to traffic aggregates only. In addition, the sophisticated classification, marking, policing and shaping operations are implemented only at network boundary nodes, where the traffic volume is lower than in the core network. The DiffServ architecture consists of two sets of functional elements: boundary and interior nodes.

Boundary nodes classify traffic and handle the traffic conditioning functions, while interior nodes only forward packets based on the DS field. The basic principle of DiffServ is to keep the architecture as simple as possible and therefore it provides service differentiation in one direction only. Unlike RSVP, DiffServ is sender-oriented, which means that the traffic sender is responsible for the QoS of the transmission. The other main difference between DiffServ and IntServ is that IntServ offers service classes, while DiffServ specifies the building blocks from which services can be built. [RFC2475]

The nature of IntServ QoS is quantitative, i.e. bandwidth and delays are controlled by the receiver. DiffServ can also be characterized in quantitative terms, but basically it is based on relative priority of access to network resources. The IPv4 precedence marking, as defined in RFC 791 [RFC791], is one example of this kind of relative priority marking model, and DiffServ can be considered a redefinition of this model. One of the most important advantages of DiffServ is that it takes the administrative fragmentation of the Internet into account, since the Internet consists of several separately administered domains. When a customer, usually a domain, wants to receive DiffServ services, it must have a Service Level Agreement (SLA) with its Internet service provider. An SLA is a contract between a customer and a service provider that specifies the DiffServ forwarding characteristics, see Subsection 4.4.1. [draft-framework]

Figure 4.1 DiffServ architecture

4.2 DS Field

The Differentiated Services architecture consists of a small, well-defined set of building blocks, which are deployed in network nodes. One of the most important blocks is the DS field, which is used to select a specific packet forwarding treatment. The DS field is a replacement header field superseding the existing IPv4 TOS octet and the IPv6 Traffic Class octet. The field is six bits long and the value of the field is called the DS codepoint (DSCP). The DSCP is used to select the PHB at each node [draft-newterms]. A two-bit currently unused (CU) field must be ignored in DS treatment. This field is used for explicit congestion notification, see [RFC2481], and it is beyond the scope of DiffServ.

The six-bit DSCP field is divided into three pools for the purpose of codepoint assignment and management, see Table 4.1. Pool 1, with 32 codepoints, is assigned to Standard Action; pool 2, with 16 codepoints, is reserved for experimental or local use (EXP/LU); and pool 3, also with 16 codepoints, is initially available for experimental or local use, but may be utilized for standardized assignments if pool 1 is at some point exhausted. [RFC2474]

Table 4.1 DS codepoints [RFC2474]

  Pool    Codepoint space    Assignment Policy
  1       xxxxx0             Standard Action
  2       xxxx11             EXP/LU
  3       xxxx01             EXP/LU or Standard Action

There are 64 possible DSCP values, but there is no such limit on the number of PHBs. In a given network domain, there is a locally defined mapping between DSCP values and PHBs. Recommended codepoints should map to specific, standardized PHBs, but network operators may also choose alternative mappings [draft-phbid].
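
The pool structure of Table 4.1 can be expressed directly as a test on the low-order bits of the codepoint. The sketch below is only an illustration of that mapping, not code from RFC 2474.

```python
def dscp_pool(dscp: int) -> int:
    """Return the codepoint pool for a six-bit DSCP value.

    Pool 1: xxxxx0 (Standard Action), Pool 2: xxxx11 (EXP/LU),
    Pool 3: xxxx01 (EXP/LU, possibly standardized later).
    """
    if not 0 <= dscp < 64:
        raise ValueError("DSCP is a six-bit value")
    if dscp & 0b1 == 0:
        return 1
    return 2 if dscp & 0b11 == 0b11 else 3

if __name__ == "__main__":
    print(dscp_pool(0b101110))   # 1: ends in 0, a standard-action codepoint
    print(dscp_pool(0b000011))   # 2: ends in 11, experimental/local use
    print(dscp_pool(0b000101))   # 3: ends in 01
```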

4.3 Per-Hop Behaviors

The other main building block of DiffServ is the Per-Hop Behavior (PHB). RFC 2475 states that a per-hop behavior is a description of the externally observable forwarding behavior of a DS node applied to a particular DS behavior aggregate. It is important to note that a PHB is not a service, but reasonable services can be built by implementing similar PHBs in every node along the traffic path. Another significant feature of the PHB is that the forwarding treatment is applied to traffic aggregates, not to individual microflows. The term microflow refers to a distinguishable stream of related datagrams from a single user. As a technical term, a PHB is a combination of forwarding, classification, scheduling and drop behavior at each hop. Moreover, PHB is a term that service providers, administrators and vendors can use for practical discussion. PHBs may be specified in terms of their resource (e.g. bandwidth) priority relative to other PHBs. [Kilkki 1999 p. 73]

4.3.1 Default PHB

The default PHB is the common best-effort forwarding behavior which is available in existing routers. This PHB is used when the DSCP does not map to any other PHB. A default PHB can be implemented by a queuing discipline that sends packets of this aggregate whenever the output link is not used by any other PHB. Network dimensioning should ensure that this aggregate will not be starved. This way senders that are not DiffServ-aware can continue to use the network in the same manner as they do today. The recommended codepoint for the default PHB is the bit pattern 000000. [RFC2474]

4.3.2 Class Selector

The IP precedence field was first defined in RFC 791 [RFC791] and was meant to be used in a way similar to the DS field. The three-bit IP precedence field, bits 0-2 of the IPv4 TOS octet, is deployed in existing routers, and the developers of the DiffServ architecture wish to maintain some form of backward compatibility with present uses of the IP precedence field. The Class Selector (CS) PHB is defined for that purpose. The DSCP values xxx000 are reserved for the Class Selector PHBs. These eight CS codepoints must yield at least two independently forwarded classes of traffic. A packet marked with a CS codepoint of larger numerical value should get a higher relative order than a packet marked with a lower numerical value. The CS PHB tries to avoid contradictions between the old and new definitions of the DS field. Moreover, it can be used to build a simple prototype DiffServ architecture, since existing routers support the usage of the IP precedence field. [RFC2474] The Class Selector PHB can be realized by a variety of queuing disciplines, e.g. WFQ, CBQ or priority queuing, see Chapter 6.

The definition of the CS PHB does not specify any traffic-conditioning functions (Subsection 4.4.3). It is therefore not reasonable to build services using only a CS PHB group. Instead, a service provider can avoid upgrading all nodes and still offer some level of differentiation by using existing routers with the CS as interior nodes and by upgrading only the boundary nodes. The nature of this service type is relative and it is therefore suitable for applications like Web browsing. [Kilkki 1999 p. 215]

4.3.3 Expedited Forwarding

The guaranteed service in the IntServ model offers the user guaranteed delay and bandwidth characteristics. The Expedited Forwarding (EF) PHB can be used to build a similar service for the DiffServ architecture. If EF is implemented in each node along the path, low-loss, low-latency, low-jitter and guaranteed-bandwidth end-to-end services through DS domains can be achieved. This kind of point-to-point or virtual leased line connection has also been described as Premium service [RFC2638].

Since packet loss, latency and jitter are all due to the queues that the traffic experiences while traveling in a network, the way to provide a low-latency and low-jitter service is to ensure that the traffic aggregate sees no queues at all, or only very small ones. Queues form when the traffic arrival rate exceeds the departure rate. To ensure that no queues occur for some traffic aggregate, the aggregate's maximum arrival rate has to be smaller than the aggregate's configured departure rate. Thus the EF PHB is defined as a forwarding treatment of a traffic aggregate where the actual departure rate must equal or exceed the configured rate. The EF traffic should receive the configured rate independently of the intensity of any other traffic attempting to transit the node. This implies strict bit-rate control at the boundary nodes (i.e. traffic that exceeds the negotiated rate must be discarded) and as quick forwarding in the interior nodes as possible. [RFC2598]

EF can be implemented by several different queue scheduling mechanisms. Simple priority queuing will work appropriately as long as the higher-priority queue, i.e. EF, does not starve the lower-priority queues. This can be accomplished by using some kind of rate policer (e.g. a token bucket) in each priority queue that defines how much the queue can starve the other traffic. Chapter 7 studies the scheduling mechanisms that can be used to implement the EF PHB.
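
A minimal sketch of the implementation idea described above: strict priority for the EF queue combined with a rate credit, so that EF traffic is served first but cannot starve the other queue beyond its configured rate. The burst allowance, class layout and work-conserving fallback are assumptions made for this example.

```python
from collections import deque
from typing import Optional
import time

class EfScheduler:
    """Strict-priority scheduler: the EF queue is served first, but only within
    its configured rate, so that it cannot starve the best-effort queue."""

    def __init__(self, ef_rate_bytes_per_s: float, ef_burst_bytes: float = 3000.0):
        self.ef_rate = ef_rate_bytes_per_s
        self.ef_burst = ef_burst_bytes
        self.ef_credit = ef_burst_bytes
        self.ef_queue = deque()
        self.be_queue = deque()
        self.last = time.monotonic()

    def enqueue(self, packet: bytes, ef: bool) -> None:
        (self.ef_queue if ef else self.be_queue).append(packet)

    def dequeue(self) -> Optional[bytes]:
        # Credit the EF aggregate with its configured departure rate.
        now = time.monotonic()
        self.ef_credit = min(self.ef_burst,
                             self.ef_credit + (now - self.last) * self.ef_rate)
        self.last = now
        # Serve EF first while it stays within its configured rate.
        if self.ef_queue and self.ef_credit >= len(self.ef_queue[0]):
            self.ef_credit -= len(self.ef_queue[0])
            return self.ef_queue.popleft()
        if self.be_queue:
            return self.be_queue.popleft()
        # Work-conserving: if only EF packets are waiting, send them anyway.
        return self.ef_queue.popleft() if self.ef_queue else None

if __name__ == "__main__":
    s = EfScheduler(ef_rate_bytes_per_s=125_000)   # 1 Mbit/s configured for EF
    s.enqueue(b"v" * 200, ef=True)                 # e.g. a voice packet
    s.enqueue(b"d" * 1500, ef=False)               # a best-effort data packet
    print(len(s.dequeue()))                        # 200: the EF packet leaves first
```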

The EF PHB can provide a leased-line service over the public Internet, so that the users feel they have a fixed bit-rate pipe between their intranets. Since EF provides a low-latency service, it is suitable for several real-time applications, like IP phone and video conferencing. However, these applications do not usually have fixed end-points and are needed on demand. If EF is provided also for that kind of purpose, network dimensioning may become very difficult, since EF requires that there is always enough capacity for the EF aggregate in the network. That can be difficult to ensure if a service provider wants to maximize the utilization of its premium service. If an ISP takes a risky approach to network dimensioning, it is possible that the EF PHB definition will not be fulfilled and that the SLAs will be violated. In addition, since DiffServ does not employ application signaling, the customers do not get feedback that would explain why, for example, an IP phone conversation has such bad voice quality. Chapter 5 suggests how these dynamic guaranteed connections could be achieved.

4.3.4 Assured Forwarding

The Assured Forwarding (AF) PHB group reflects the innermost ideas of Differentiated Services. On the other hand, the definition of the AF PHB group is rather ambiguous and leaves a lot of freedom to the implementation of this PHB group. [Kilkki 1999 p. 232]

The Assured Forwarding group offers different levels of forwarding assurance. The current AF specification provides delivery of IP packets in four classes, each with three drop precedence levels. Packets within one class must be forwarded independently of the packets in other AF classes, and a DiffServ node must allocate resources to each implemented AF class, but not all classes are required to be implemented. The level of forwarding assurance of an IP packet depends on the amount of bandwidth and buffer space that has been configured for the AF class, the current load of the AF class and the drop precedence of the packet. Another important feature of the AF PHB group is that IP packets of the same microflow should not be reordered if they belong to the same AF class. Like the other PHBs, AF is not an end-to-end service model, but a building block for constructing services. Since the QoS depends on the other traffic entering the node, the service will likely be relative. As a result, the AF PHB group does not guarantee any quantitative characteristics. On the other hand, by over-provisioning an AF class, even low-loss and low-latency services can be implemented. [RFC2597]

Figure 4.2 AF Classes

The drop precedences can be used to police a customer's traffic that exceeds the committed bit rate. The packets within the subscribed profile are marked with the lowest drop precedence, whereas the packets which exceed the bit rate agreed in the SLA are marked with a higher drop precedence. Using drop precedences is also useful when UDP and TCP traffic share the same AF class. By assigning different drop precedences to in-profile and out-of-profile UDP and TCP packets, the TCP flows can be protected from non-responsive UDP packets and some level of fairness can be achieved. For more information, see Study of TCP and UDP Interaction for the AF PHB [draft-tcpudpaf].

The definition of the AF PHB group does not specify any queuing discipline for the implementation. However, the CBQ queuing mechanism allocates bandwidth to different queues and is therefore exactly what AF requires. The drop precedences inside a class need active queue management. The Random Early Drop (RED) algorithm with multiple threshold levels is able to handle that, see Section 6.5. An implementation of the AF PHB group would thus consist of a CBQ scheduler that divides the bandwidth between the PHB classes and a RED algorithm which drops packets inside the queues according to their drop precedences. In addition to queue management, traffic marking and metering are needed to implement the AF PHB group. The boundary nodes meter the incoming packet stream, and traffic that exceeds what is defined in the SLA is marked with a lower importance level. Unlike with the EF PHB, the excess or bursty traffic is not immediately dropped but is instead delivered with a smaller forwarding probability. [RFC2597]
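
A simplified sketch of the queue-management idea above: a RED-like drop probability whose thresholds depend on the drop precedence, so that out-of-profile packets are discarded earlier as the average queue grows. The threshold and probability values are invented for illustration; Section 6.5 describes the actual mechanisms.

```python
import random

# Per-drop-precedence RED parameters: (min threshold, max threshold, max drop prob.)
# A higher drop precedence (2 = most expendable) starts dropping at a shorter queue.
RED_PROFILES = {0: (40, 60, 0.02), 1: (25, 45, 0.10), 2: (10, 30, 0.20)}

def drop_probability(avg_queue_len: float, drop_precedence: int) -> float:
    min_th, max_th, max_p = RED_PROFILES[drop_precedence]
    if avg_queue_len < min_th:
        return 0.0
    if avg_queue_len >= max_th:
        return 1.0
    return max_p * (avg_queue_len - min_th) / (max_th - min_th)

def admit(avg_queue_len: float, drop_precedence: int) -> bool:
    """Randomly drop packets according to their drop precedence."""
    return random.random() >= drop_probability(avg_queue_len, drop_precedence)

if __name__ == "__main__":
    for dp in (0, 1, 2):
        print(dp, round(drop_probability(avg_queue_len=35, drop_precedence=dp), 3))
    # At an average queue of 35 packets: precedence 0 is untouched, precedence 1
    # loses a few percent, precedence 2 is above its maximum threshold and is dropped.
```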

The AF PHB group can be used to build end-to-end services for all types of applications. The Assured service, presented in RFC 2638 [RFC2638], is one example of such an end-to-end service. Network dimensioning still has a major role in service realization. Since mathematical modelling is very difficult, network configuration and resource allocation will probably be based on practical experience. It may therefore be very difficult to build reasonable end-to-end services, since the service providers may have different opinions, e.g., about what kind of packet loss characteristics the AF1 class should provide. On the other hand, this PHB group does not give any guarantees and allows some packet drops and jitter variation even if the traffic sender stays within the specifications of the SLA. In addition, the AF PHB group seems to meet the requirements of current Internet applications, like web browsing, quite well. There is no need for permanent connections and the applications can recover from small amounts of packet drops.

4.4 Architecture of Differentiated Services

As mentioned in the previous chapters, DiffServ consists of small building blocks. The most important key elements are the DS field and the per-hop behaviors. The other aspects of DiffServ include the interoperation of separate domains and the separation of the roles of routers inside a domain. This section discusses these issues.

4.4.1 Service Level Agreement and Service Level Specification

A Service Level Agreement (SLA) is a service contract between a customer and a service provider. It specifies the forwarding service that a customer should receive. A customer may be an organization, another DS domain, another ISP or, in the future, even an individual. According to the new DiffServ terminology draft, the term Service Level Specification (SLS) should be used when referring to the DiffServ agreements [draft-newterms]. The SLA then represents the other, non-DiffServ-related agreements between a customer and an ISP. The characteristics of an SLA might be, e.g., pricing, authentication mechanisms and availability/reliability aspects. [RFC2475]

An important part of an SLS is the Traffic Conditioning Specification (TCS), which describes the classifier rules, the traffic profile and other traffic parameter values such as expected bandwidth, drop probability and latency. SLSs may be static or dynamic.

Static SLSs are long-lasting and they may be negotiated between humans. Dynamic SLSs can change frequently and thus require an automated agent and protocol to handle SLS requests. Dynamic SLSs increase the network complexity since they require automated resource provisioning and mature billing systems. In dynamic SLSs, resource allocation is closely related to the signaling process. This process may be needed when the destination can be located anywhere on the Internet or when the forwarding guarantees are needed on demand. For more information on the arrangement of dynamic SLSs, see Section 4.5. [RFC2475]

4.4.2 DiffServ Domain

DiffServ separates the nodes inside a domain according to their functions. Traffic enters and leaves the DiffServ domain through the boundary nodes. The boundary nodes can be further divided into ingress nodes and egress nodes. Traffic enters a DS domain through an ingress node and leaves it through an egress node. Thus the same boundary node can simultaneously act both as an ingress node for one type of traffic and as an egress node for another type of traffic. A DS ingress node has the task of ensuring that the traffic entering the DS domain conforms to any TCS between it and the other domain from which the traffic entered. An ingress node may also perform traffic conditioning functions for the traffic that traverses it. The traffic conditioning functions are presented in the next subsection. The main function of the DS interior nodes is to forward packets according to their DS codepoints. Each DS codepoint is associated with a PHB which implements the traffic differentiation. Since the core network links are very fast, the interior nodes should perform only very simple operations on the packets. [RFC2475]

4.4.3 Traffic Classification and Conditioning

DiffServ employs sophisticated traffic handling functions only at the network boundaries. Traffic conditioning implements the rules for traffic handling which are specified in the TCS and configured into the particular boundary nodes. The traffic conditioning operations include metering, marking, shaping and dropping, which are illustrated in Figure 4.3.

Figure 4.3 Packet classifier and traffic conditioner [RFC2475]

A packet classifier selects the packets in a traffic stream based on the contents of the packet header. The classifiers are either Behavior Aggregate (BA) classifiers or Multi-Field (MF) classifiers. MF classifiers are typically needed at the first-hop routers, because they select the packets on the basis of several fields, such as the source and destination addresses, the DS field, the protocol ID and the source and destination ports. BA classifiers use only the DSCP for packet selection. These classifiers are used at the routers where the traffic volume is very high, i.e. at interior routers. Traffic conditioning functions may be needed, in addition to the first-hop routers, also at the network egress nodes. Packets may have to be re-marked so that they conform to the traffic profiles of the next domain, or the aggregate may have to be shaped so that bursty traffic does not violate the TCS and packets are not dropped at the next domain's ingress node.

Traffic conditioning can be divided into independent functions. A traffic meter measures the temporal properties of the packet stream that has been selected by a classifier against a traffic profile specified in the TCS. A traffic profile can be specified with token bucket parameters, including the average data rate and burst size. These measurements are used by a marker, a shaper and a dropper, depending on whether the packet is in- or out-of-profile. A packet marker sets the DS field of a packet to a particular codepoint. The marking is based on packet classification or on the measurement results of the meter. For example, all packets coming from a particular IP source address are marked with a particular DSCP, or packets are re-marked because their data rate is too high. In the Assured service, packets from an aggregate can be marked with a higher drop probability instead of being dropped immediately.
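
As a rough sketch of the meter and marker pair described above, the code below measures each packet against a token-bucket traffic profile and marks it with a low or high drop precedence instead of dropping it, as in the Assured service. The DSCP constants and parameter values are assumptions of this example.

```python
import time

# Hypothetical DSCPs for one AF class: low and high drop precedence.
AF_IN_PROFILE, AF_OUT_OF_PROFILE = 0b001010, 0b001100

class MeterMarker:
    """Token-bucket meter plus marker: out-of-profile packets get a higher drop precedence."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate, self.burst = rate_bytes_per_s, burst_bytes
        self.tokens, self.last = burst_bytes, time.monotonic()

    def mark(self, packet_len: int) -> int:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:           # within the TCS traffic profile
            self.tokens -= packet_len
            return AF_IN_PROFILE
        return AF_OUT_OF_PROFILE                # excess traffic: remark, do not drop

if __name__ == "__main__":
    mm = MeterMarker(rate_bytes_per_s=125_000, burst_bytes=3000)
    print(bin(mm.mark(1500)))   # in-profile
    print(bin(mm.mark(1500)))   # in-profile: the burst still allows it
    print(bin(mm.mark(1500)))   # out-of-profile: the burst is exhausted
```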

A dropper discards packets in order to make the stream comply with the traffic profile. This is also called stream policing. The other way to police is shaping. The shaper delays packets by holding them in a buffer before forwarding them. This way the bursts can be reduced and the traffic can be made acceptable for the next ingress router. [RFC2475]

Packets can also be marked by using the so called three color markers: the Single Rate Three Color Marker (SRTCM) [draft-srtcm] or the Two Rate Three Color Marker (TRTCM) [draft-trtcm]. These markers meter an IP packet stream and mark its packets either green, yellow or red. The marking is based on one or two bit rates and their corresponding burst sizes. The marked color depends on the SLS and the current IP bit rate. These markers can be used with an AF PHB class when the color is coded as the drop precedence of the packet.

4.5 Resource Allocation

DiffServ customers can have two types of SLSs with their ISPs, dynamic or static. In the case of a static SLS, the customer pays e.g. a fixed amount of money every month to his ISP to get some resources, e.g. bandwidth, from the ISP's network. Even if the entire bandwidth is not being used all the time, the customer has to pay for the unused bits as well. This fixed bandwidth allocation may be appropriate for a web service provider who wants to make their web-site seem fast or for a virtual-leased-line service. Static SLSs and manually configured boundary nodes are the first step to Differentiated Services. In the future, when individual users can also have SLSs with their ISP, network management, including the configuring of boundary nodes, might become very difficult. Furthermore, if ISPs decide to use dynamic SLSs, the systems will become even more complex.

The dynamic resource allocation can be realized by an agent called a Bandwidth Broker (BB) [RFC2638]. A BB can be a host, a router or a software process at a router. It can be configured with organizational policies to keep track of the current allocation of AF and EF resources. It can also reject new PHB resource requests due to the lack of resources. Moreover, a BB can be the only instance that can configure the leaf routers. The configuring is done according to the policy database of the BB. This reduces the management burden inside the network domain. [RFC2638]
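As an illustration of the resource bookkeeping described above, the following sketch shows a Bandwidth Broker that keeps track of the current allocation of AF and EF resources and rejects new PHB resource requests when the resources are exhausted. The structure and the capacity figures are assumptions made for the example only; RFC 2638 does not prescribe any particular data structures.

```python
class BandwidthBroker:
    """Tracks per-PHB resource pools and accepts or rejects new requests."""
    def __init__(self, capacity_kbps):
        # Total provisioned capacity per PHB group, e.g. {"EF": 2000, "AF": 5000}.
        self.capacity = dict(capacity_kbps)
        self.allocated = {phb: 0 for phb in capacity_kbps}

    def request(self, phb, rate_kbps):
        """Admission decision for a new PHB resource request."""
        if phb not in self.capacity:
            return False                      # unknown PHB: reject
        if self.allocated[phb] + rate_kbps > self.capacity[phb]:
            return False                      # not enough resources left: reject
        self.allocated[phb] += rate_kbps
        return True

    def release(self, phb, rate_kbps):
        self.allocated[phb] = max(0, self.allocated[phb] - rate_kbps)

# Example: a broker provisioned with 2 Mbit/s of EF and 5 Mbit/s of AF resources.
bb = BandwidthBroker({"EF": 2000, "AF": 5000})
assert bb.request("EF", 500)       # accepted
assert not bb.request("EF", 1800)  # rejected, would exceed the EF pool
```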

The use of BBs can be divided into three categories: BBs are used only in network configuration, BBs make admission control decisions, or BBs are also used in dynamic resource allocation.

4.5.1 Centralized Network Configuration

In the simplest case a BB is used only to configure the boundary routers in a domain. A customer can call or send an e-mail to the service provider, who then adds a new SLS or modifies the old one in a policy database. SLSs are therefore rather static. A BB reads the database and then sets the necessary configurations into the boundary router(s). These configurations or profiles might be set into the routers by using RSVP, SNMP, COPS or some vendor-specific mechanism.

The SLS updating process could also be done more dynamically. In that case a customer, an individual user or the BB of a peer domain sends e.g. an RSVP message to the BB of the domain, which saves the profile into a database and configures the necessary boundary routers in a similar way. In that case the SLSs are very dynamic. Figure 4.4 illustrates the situation where an individual user makes an SLS request to the domain's administrator.

Figure 4.4 Usage of a bandwidth broker to configure boundary routers
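The centralized configuration described above can be sketched as follows. The policy database entries, the router names and the push_config function are purely hypothetical placeholders; in practice the configuration would be carried over RSVP, SNMP, COPS or a vendor-specific mechanism.

```python
# Hypothetical static SLS entries in the policy database: customer, boundary router,
# classification rule and the profile (rate, burst, codepoint) granted to the customer.
policy_database = [
    {"customer": "A", "router": "edge-1", "match": {"src": "10.0.1.0/24"},
     "rate_kbps": 512, "burst_bytes": 8000, "dscp": 0b101110},
    {"customer": "B", "router": "edge-2", "match": {"src": "10.0.2.0/24"},
     "rate_kbps": 256, "burst_bytes": 4000, "dscp": 0b001010},
]

def push_config(router, entry):
    """Placeholder for the actual configuration mechanism (RSVP, SNMP, COPS, ...)."""
    print(f"configuring {router}: classify {entry['match']} -> "
          f"DSCP {entry['dscp']:06b}, police {entry['rate_kbps']} kbit/s")

def configure_boundary_routers(database):
    """The BB walks the policy database and configures each boundary router."""
    for entry in database:
        push_config(entry["router"], entry)

configure_boundary_routers(policy_database)
```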

4.5.2 Dynamic Admission Control

In a more sophisticated model a BB is used as an admission controller. Here the reasonable way to communicate between a customer and the local domain BB is the use of RSVP, but out-of-band signaling is also possible. When a customer wants a guarantee that there is enough capacity for an end-to-end connection, it sends an RSVP message to the BB of the domain, requesting a DSCP or a service for its traffic. At first the BB checks if there is enough capacity left for this request inside its own domain. If there is, the BB sends the message to the BB of the next domain; otherwise the customer gets an error message. The peer BB processes the request in a similar way and this signaling continues until the request message reaches the BB of the receiver's domain. The signaling process is interrupted if there is not enough capacity available and an error message is sent to the sender. If the message reaches the receiver's BB, an OK message is sent to the BB of the previous domain and this signaling ends when the sender host gets the OK message from the local BB. Now the sender can be sure that there is enough capacity along the traffic path for the data transmission to succeed. The messages between the BBs can be e.g. RSVP messages. [RFC2638]

4.5.3 Dynamic Boundary Node Configuration

In the most complex system the BBs located along the traffic path are also used to configure the necessary boundary routers. The signaling process starts in a way similar to the previous case. RSVP messages can be used in the signaling between BBs, see Figure 4.5. When the PATH message reaches the BB of the receiver's domain, the BB makes an admission control decision and, if the request is accepted, configures the domain's boundary node. Then the BB generates a RESV message to the previous domain's BB, which will further configure its own domain's boundary router. When the RESV message arrives at the sender's BB, the first hop router is configured and the RESV message is returned to the sender. Data transmission can be started and there is enough capacity along the path to forward the sender's traffic.
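The hop-by-hop admission control between BBs can be sketched as follows. The domain names, the free capacities and the plain function calls standing in for RSVP messages are assumptions made for the example only.

```python
class DomainBB:
    """One Bandwidth Broker per domain, chained towards the receiver's domain."""
    def __init__(self, name, free_kbps, next_bb=None):
        self.name = name
        self.free_kbps = free_kbps
        self.next_bb = next_bb       # BB of the next domain on the path, or None

    def admit(self, rate_kbps):
        """Check the own domain first, then forward the request towards the receiver."""
        if rate_kbps > self.free_kbps:
            return (False, f"rejected in domain {self.name}")   # error back to the sender
        if self.next_bb is not None:
            ok, reason = self.next_bb.admit(rate_kbps)
            if not ok:
                return (False, reason)
        # All downstream domains answered OK: commit the reservation in this domain.
        self.free_kbps -= rate_kbps
        return (True, "OK")

# Sender's domain -> transit domain -> receiver's domain.
receiver_bb = DomainBB("receiver", free_kbps=1000)
transit_bb  = DomainBB("transit",  free_kbps=400, next_bb=receiver_bb)
sender_bb   = DomainBB("sender",   free_kbps=2000, next_bb=transit_bb)

print(sender_bb.admit(300))   # (True, 'OK')
print(sender_bb.admit(300))   # rejected in the transit domain, an error goes to the sender
```

The sketch also reflects the point made below: each domain behaves like a single node represented by its BB, and the core routers are not involved in the signaling.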

Figure 4.5 BBs and dynamic SLSs

4.5.4 Characteristics of Bandwidth Brokers

Even if the BBs use RSVP as a signaling protocol, the process is significantly different from the signaling process in IntServ/RSVP. Firstly, it is the sender who requests the resources, not the receiver. Secondly, a request can be rejected when the BB receives the PATH message from the sender, while in the IntServ process a request is rejected only when a router receives the RESV message. Thirdly, in the DiffServ model with BBs each domain behaves like a single node, represented by a BB. Core routers are not involved in this process, thus the signaling is likely to be domain-to-domain signaling, not hop-to-hop. [Xipeng&Lionel] The scaling benefits of the BBs and the IntServ approach have been compared in the study Supporting RSVP in a Differentiated Service Domain: an Architectural Framework and a Scalability Analysis [Detti et al 1999]. The IntServ/DiffServ mapping issues are discussed in Section 5.2.

In dynamic SLSs, the BBs are used to keep resource state information. The states could be per-flow based, but it is more likely that the BBs aggregate multiple requests and make a single request to the BB of the next domain. A specification of this kind of message bundling is in progress in the RSVP working group [draft-refresh]. In this case the states are kept only in one node (the BB) per domain, which reduces the processing overhead, as can be seen when

comparing the BBs to the IntServ model. In addition, the states concern only the boundary nodes.

With the use of BBs, more predictable services can be built. In addition, the network resources can be utilized better, since the network can be oversubscribed, and if the network is congested a new connection can be blocked and the sender will get a busy signal. On the other hand, the DiffServ architecture would become much more complex if BBs are used. Problems may arise for example when the domain of one ISP is connected to several others. If there are dynamic SLSs involved and alternative routes exist, the BB may not be able to choose the correct egress node for the data traffic, or at least the help of routing protocols will be needed. The definition of the Bandwidth Broker concept in RFC 2638 [RFC2638] is very general, which leaves a lot of freedom in implementing them but at the same time complicates the building of consistent end-to-end services. Furthermore, BBs do not solve one problem that still might occur: congestion inside the core network, which might result from oversubscribed network dimensioning. On the other hand, Bandwidth Brokers seem likely to become an essential part of DiffServ, since the largest DiffServ test network implementation project has deployed BBs into the network, see Section 7.5. Some BB implementations have also been presented and further specification of the BB concept for this particular network is in progress. [QBone-BB]

4.6 An Analysis of DiffServ

The Differentiated Services architecture takes into account the structure of the current Internet and its requirements. The architecture is scalable: routers handle only traffic aggregates, services are based on relative differentiation and no hard guarantees are provided. SLSs are made between large entities, domains or ISPs. Since DiffServ does not (yet) specify services, ISPs have a very important role in the realization of DiffServ. The risk is that ISPs implement PHBs differently, so that predictable end-to-end services can not be achieved. In addition, network provisioning might be a very difficult task, since no signaling is used and nodes have to be pre-configured. Especially EF might be very problematic if dynamic end-points are allowed. On the other hand, AF provides more elastic possibilities, but the services built from AF can be too ambiguous. When DiffServ services are provided for end users, it is quite clear that technical terms such as AF or EF are not mentioned. It might be difficult to sell these

services (assured service) if their clear advantages, e.g. guaranteed bandwidth, can not be demonstrated.

One interesting issue in the DiffServ analysis will be TCP behavior, since DiffServ is specified to be only unidirectional. If both end points do not have a similar QoS service, the presumed service will not be realized when some acknowledgements are dropped. Some simulations [Köhler&Schäfer 1999] have already been made and they imply that the acknowledgements also need some QoS service.

This section has not presented multicast in DiffServ networks, because the work is still in progress [draft-multicast] and no agreements have been made on its arrangement. The problem is due to the sender orientation of DiffServ, i.e. it is the sender who pays for QoS. A multicast session may grow inside the DS domain and will consequently consume more network resources than the sender's SLS expects. In addition, the usage of the network resources is unpredictable, since receivers can join and leave a multicast session dynamically and ISPs can not predict the resource usage. One possible solution is that multicast should have its own PHB [draft-lbe].

Although DiffServ is a very simple and robust architecture, it has one side-effect: its static nature. The TCSs are pre-configured at the boundary routers and the interior nodes have to be pre-dimensioned. The section Resource Allocation already discussed how RSVP can be used in a DiffServ network. The next chapter analyzes the co-operation of IntServ/RSVP and DiffServ and discusses the advantages of this combination.

5. Co-operation of Differentiated and Integrated Services

Both QoS architectures, Integrated and Differentiated Services, have their own advantages and drawbacks. Integrated Services together with RSVP signaling provides dynamic admission control, predictable end-to-end services and an efficient use of network resources. On the other hand, the processing overhead in high speed networks will be unacceptable. Differentiated Services offers a scalable way to provide QoS also in large networks. The lack of a dynamic admission control mechanism and the difficulties in resource allocation can make the services quite unpredictable. By combining the advantages of both models it might be possible to build a scalable system which would provide predictable services. This chapter presents the possibilities of using the IntServ/RSVP methods in parallel with DiffServ.

5.1 Motivation for the Co-operation Model

Since DiffServ focuses on the needs of a large network, it is clear that the DiffServ approach should be used in high speed transit networks. As has already been mentioned, the lack of explicit admission control may distort existing traffic flows when new flows join the aggregate and the resources of the aggregate have already been consumed. As a result, none of the sessions will obtain satisfactory service, when in fact there are resources available for most of the sessions. When explicit admission control is used, new flows are blocked and the senders are informed about the network condition. When a sender receives a rejection signal, it might respond by requesting a smaller traffic profile. [draft-diffservrsvp]

Another use for IntServ and RSVP might be found in the bottleneck links of the Internet, for example in access networks. Since IntServ isolates flows from each other, it provides efficient protection against misbehaving flows. Consequently, quantitative services can also be offered over low-speed bottleneck links. Since a narrow bandwidth link can not carry many simultaneously reserved connections, scalability will not be a problem here. By using IntServ and RSVP on links like this, the link utilization increases and several customers can be offered quantitative services on demand.

One important issue for a QoS system will be management. If packet marking is done at a finer granularity than per interface, i.e. packet marking is not based on a physical interface of a router, then microflow classification is needed. Typical criteria are the source IP address and the application port number. Nevertheless, the classification criteria may change, since users can be assigned different IP addresses by DHCP or the applications can use transient ports. If the configurations are done statically, via manual configuration or via automated scripts, the management of the classification information will become very difficult. An alternative to static configuration is to allow a host to signal the classification criteria to the router on behalf of users and applications. RSVP is the most reasonable choice for that task. [draft-diffservrsvp]

The models presented in this chapter assume that DiffServ is used in the transit network and that IntServ/RSVP handles admission control to the DiffServ network. The differences between the models concern the dynamics of the SLSs, the dynamics of the network provisioning and the aggregate handling in the DiffServ network.

5.2 Resource Based Admission Control

The network architecture that provides end-to-end quantitative QoS consists of IntServ regions at the periphery of the network and DiffServ regions in the core of the network. Nevertheless, a network administrator is free to choose which regions of the network act as IntServ regions and which ones act as DiffServ regions. Basically the IntServ regions are customers of the DiffServ regions, which offer transport services, see Figure 5.1.

When IntServ and RSVP are used to perform admission control to the DiffServ network, the most important node in the network is the edge router at the boundary between the network regions. The router can be seen as consisting of two halves: the standard RSVP half, which interfaces with the stub networks, and the DiffServ half, which interfaces with the transit network. The RSVP half is able to process Path and Resv messages, but it is not expected to store full RSVP state. In the simplest case the admission control knows how much bandwidth has been used and how much bandwidth is still left. By using this information and the token bucket parameters in the RSVP Resv message, the router is able to decide if a new connection can be granted. If the request is accepted, the traffic is mapped to an appropriate PHB and the related DSCP is marked in the packet header.
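The admission decision of the edge router can be sketched as follows. The mapping of IntServ service types to codepoints used here (guaranteed service to an EF-style codepoint, controlled-load to an AF-style codepoint) follows the assumption discussed below, not a standardized rule, and the numerical values are purely illustrative.

```python
# Illustrative mapping of IntServ service types to DiffServ codepoints (assumption).
SERVICE_TO_DSCP = {
    "guaranteed":      0b101110,   # EF-style codepoint
    "controlled-load": 0b001010,   # highest priority AF-style codepoint
}

class EdgeRouter:
    """DiffServ half of the edge router: tracks the bandwidth left in the SLS."""
    def __init__(self, sls_kbps):
        self.remaining_kbps = sls_kbps

    def process_resv(self, service, token_rate_kbps):
        """Called when an RSVP Resv message arrives from the DiffServ side."""
        dscp = SERVICE_TO_DSCP.get(service)
        if dscp is None or token_rate_kbps > self.remaining_kbps:
            return {"admitted": False}          # an RSVP error would be sent upstream
        self.remaining_kbps -= token_rate_kbps
        # The Resv is forwarded upstream; the DSCP could be carried in a DCLASS object.
        return {"admitted": True, "dscp": dscp}

edge = EdgeRouter(sls_kbps=1500)
print(edge.process_resv("guaranteed", 200))        # admitted, EF-style codepoint returned
print(edge.process_resv("controlled-load", 2000))  # rejected, the request exceeds the SLS
```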

Figure 5.1 The combination of RSVP, IntServ and DiffServ

Mapping rules have not been specified yet, but it is reasonable to assume that the guaranteed service is mapped into the EF PHB and the controlled-load service is mapped to the highest priority AF class. [draft-diffservrsvp]

The signaling process to obtain end-to-end quantitative QoS starts when the sending host generates an RSVP Path message. The Path message is carried towards the receiving host. In the IntServ network the standard IntServ processing is applied at capable network elements. At the edge router, the Path state is installed in the router and the message is sent towards the DiffServ network region. In the DiffServ network the Path message is ignored by the routers and is then processed in the receiver's network according to the standard RSVP processing rules, if there are IntServ capable nodes there. When the Path message reaches the receiving host, the host generates an RSVP Resv message. The Resv message is carried back towards the DiffServ network region and the sending host. The request may of course be rejected at any RSVP node in the IntServ region according to standard IntServ admission control. At the edge router the Resv message triggers admission control processing. The node compares the resources requested in the Resv message to the corresponding DiffServ service level. If there are enough resources in the DiffServ network and the request fits in the customer's SLS, the request is approved. Then the Resv message is allowed to continue upstream towards the sender. Again, any RSVP node in the region may reject the

reservation request. If the request fails, the Resv message is not forwarded and the appropriate RSVP error messages are sent. When the sender receives the Resv message, the message indicates that the specified traffic flow has been admitted for the specified IntServ service type and for the corresponding DiffServ service level. The signaling process is presented in Figure 5.1.

The packets that enter the DiffServ network can be marked with the appropriate DSCP at the edge router or at the sending host. The RSVP Resv message may carry the DSCP information to the sender by using the DCLASS object [draft-dclass]. The DCLASS object contains the DSCP which the traffic sender should use to achieve the corresponding service level also in the DiffServ network. This way traffic marking is shifted to the hosts, which reduces the edge router's load. [draft-diffservrsvp]

5.3 Dynamically Provisioned Network

The previous model presumes that the DiffServ network is statically provisioned and that there is no signaling between the IntServ network region and the DiffServ network region, i.e. RSVP Path and Resv messages are transparent to the DiffServ nodes. Only the SLS is negotiated between the regions and the resource availability in the edge router is based on that contract. It might be difficult to make efficient use of the resources in the DiffServ region, since the admission control does not consider the availability of resources inside the DiffServ region along the specific traffic path.

The first approach to the re-provisioning of the core routers uses edge devices which configure the core routers according to the resource requests. The configuration may be done by using Bandwidth Brokers or by using aggregate RSVP signaling, which is presented in the next section. The other possible way of re-provisioning the DiffServ core routers could be based on per-flow RSVP signaling. The network administrator can configure strategic devices within the DiffServ network to process RSVP signaling. These RSVP aware devices do not perform standard RSVP processing, but rather listen to the RSVP signaling. When there appear to be more or fewer reservation requests than the node is currently provisioned for, the devices can configure themselves to provide a more accurate amount of resources for a particular aggregate class. In addition, the network administrator can improve the utilization of the DiffServ network resources by placing these devices more densely, or

reduce the processing overhead by deploying these devices more sparsely. It should be noted that even though per-flow RSVP signaling is used in the re-provisioning of the DiffServ network, the actual traffic handling is always done per aggregate. The core nodes should know the mapping rules in order to provide resources for the correct PHB. The provisioning does not have to be per-flow accurate, but some kind of triggers and trend analysis can be used. The signaling process and the admission control operations are similar to the previous model. [Windows-QoS] Signaling in the core network increases the processing overhead, but the study Performance Analysis of an RSVP-Capable Router [Chiueh&Neogi 1997] argues that most of the RSVP overhead is caused by the microflow traffic handling, not by the microflow signaling.

5.4 Aggregation of RSVP Reservations

The first model in this chapter presumes that RSVP reservations are mapped into DiffServ PHBs at the edge router. The draft Aggregation of RSVP for IPv4 and IPv6 Reservations [draft-rsvpaggr] suggests that individual RSVP reservations are aggregated into larger RSVP reservations across a transit network. The classification and scheduling states in the transit region are managed according to the principles of DiffServ. This concept is very similar to the use of Virtual Paths in an ATM network. The model for the aggregation of RSVP reservations is rather complex and therefore only the main principles of the concept are presented in this thesis.

This model names the routers at the boundary of a transit network the aggregator and the deaggregator. The aggregator is a router at the ingress of an aggregation region. Its task is to aggregate the standard RSVP end-to-end Path messages and to compose larger aggregate Path messages. The deaggregator is a router at the egress of a transit network; it deaggregates aggregated Path messages and consumes aggregate Resv messages. An aggregator for some flows may act as a deaggregator for other flows and vice versa. The architecture is shown in Figure 5.2.

Figure 5.2 RSVP aggregation messages

The aggregate reservations are built between particular aggregator and deaggregator pairs. Those routers have to be known before the aggregate reservation can be made. In addition, the DSCP, which is used to differentiate the reservation in the transit network, has to be decided for the aggregate reservation. The DSCP, the aggregator and the deaggregator are resolved by RSVP-type signaling mechanisms.

Again, the process for achieving end-to-end guaranteed QoS starts when a sending host generates an RSVP Path message. When the Path message arrives at an aggregator, the deaggregator is not yet known and the aggregator does not know with which aggregate the new flow is associated. The aggregator changes the protocol number of the RSVP message and forwards it in the normal way; thus in the aggregation region the Path message traverses the network as a normal IP datagram. The change of the protocol number disables microflow reservations inside the aggregation region. When the end-to-end RSVP Path message arrives at a deaggregating router, the deaggregator associates the flow with an existing aggregate reservation. If no suitable aggregate reservation exists, the deaggregating router may generate a PathErr message with the code NEW-AGGREGATE-NEEDED back to the aggregating router. For the end-to-end RSVP Path message the deaggregator changes the protocol number back to the standard RSVP protocol number, updates the ADSPEC object and forwards the Path message towards the receiver. After the aggregator has received the NEW-AGGREGATE-NEEDED message, it generates an RSVP aggregate Path message to the

deaggregator. When the deaggregator receives the aggregate Path message, it responds with an aggregate Resv message. The interior routers perform the admission control and the resource allocation processes according to the standard IntServ/RSVP rules. Finally the aggregator establishes a new or a larger aggregate reservation between itself and the particular deaggregator.

After the deaggregator has sent the end-to-end Path message towards the receiver, it expects to receive an end-to-end Resv for the session. When the deaggregator receives the end-to-end Resv message, it simply sends the message to the aggregating router. This end-to-end Resv message does not affect the reservation inside the aggregation region, since there is no corresponding path state. The end-to-end Resv message also contains a DCLASS object that indicates the DSCP the deaggregating router expects the aggregator to use. When the aggregating router receives the end-to-end Resv message, it ensures that the reservation is associated with the appropriate aggregate. Now all things are in place, the end-to-end Resv is forwarded to the sender and data transmission can be started. Figure 5.2 illustrates the network model and the signaling process when there is a need for a new aggregate reservation. The process is simpler when an appropriate reservation already exists between the aggregator and the deaggregator. In that case, the end-to-end reservation is only associated with the aggregate reservation.

Even if the interior routers maintain some RSVP reservation state per aggregate reservation, the classification and scheduling states are maintained according to the DiffServ principles. For example, if there are several guaranteed service aggregate reservations in the network core and the guaranteed service is mapped into the EF PHB, only the EF DSCP needs to be inspected at each interior router and only a single queue is used for all EF traffic.

The size of the aggregate reservation needs to be greater than or equal to the sum of the bandwidths of the end-to-end reservations. If the aggregate reservation is rebuilt each time an underlying end-to-end reservation changes, the benefits of the aggregation are lost. Therefore there has to be a policy which takes into account the sum of the bandwidths of the underlying end-to-end reservations and is able to somehow predict the needed aggregate reservation size. This may require some level of trend analysis. In addition, oscillation must be avoided, e.g. by configuring triggering threshold values to introduce some hysteresis. [draft-rsvpaggr]
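One possible sizing policy can be sketched as follows. It assumes that the aggregate is resized in fixed bulk steps and that a hysteresis margin delays shrinking; the step and margin values are invented for the example and the draft does not prescribe any particular algorithm.

```python
class AggregatePolicy:
    """Resize the aggregate reservation in bulk steps instead of per flow."""
    def __init__(self, step_kbps=500, release_margin_kbps=300):
        self.step = step_kbps              # bulk quantum used when growing the aggregate
        self.margin = release_margin_kbps  # extra slack required before shrinking
        self.aggregate_kbps = 0            # current aggregate reservation size
        self.demand_kbps = 0               # sum of the underlying end-to-end reservations

    def update(self, delta_kbps):
        """Called when an end-to-end reservation is added (+) or removed (-)."""
        self.demand_kbps += delta_kbps
        if self.demand_kbps > self.aggregate_kbps:
            # Grow to the next multiple of the bulk step above the demand.
            steps = -(-self.demand_kbps // self.step)        # ceiling division
            self.aggregate_kbps = steps * self.step
        elif self.aggregate_kbps - self.demand_kbps > self.step + self.margin:
            # Shrink only when the spare capacity clearly exceeds one bulk step.
            steps = -(-self.demand_kbps // self.step) if self.demand_kbps > 0 else 0
            self.aggregate_kbps = steps * self.step
        return self.aggregate_kbps

policy = AggregatePolicy()
for change in (+200, +200, +200, -200, -50):
    print(policy.update(change))
# 500, 500, 1000, 1000, 1000: growth happens in bulk steps and small decreases
# do not trigger resizing, which introduces the hysteresis mentioned above.
```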

The RSVP aggregation model solves the traditional IntServ/RSVP microflow traffic handling problem and simultaneously enables a dynamic admission control process and efficient resource utilization. The drawback is the increased complexity and the need for predictive bandwidth management.

5.5 Null Service

Chapter 2 introduced the two IntServ services, the guaranteed service and the controlled-load service. The newest proposed Integrated Services class is called the Null Service [draft-nullservice]. This service is designed for applications which may need better than best-effort service but cannot specify their resource requirements, e.g. ERP applications. This service is defined particularly for networks which combine IntServ and DiffServ. The applications are allowed to identify themselves to the QoS policy agent using the reservation setup protocol, but without a traffic profile specification. The QoS policy agent may then return a DSCP in an RSVP Resv message using the DCLASS object. The host then marks its packets with the received DSCP and will get a certain level of service from the network according to the value of the DSCP.

5.6 Thoughts about the Co-operation Model

This chapter has discussed the possibilities of using the IntServ/RSVP principles in parallel with the DiffServ principles. Since DiffServ is basically based on static network configuration and admission control, the use of RSVP signaling can bring a lot of flexibility into the DiffServ architecture. The signaling also enables more accurate and dynamic policing, since the admission control may be based on the policing of either a sender, a receiver or an application.

The greatest benefit of the combined architecture is the end-to-end services. The RSVP signaling provides knowledge about the availability of resources along the traffic path. In this way predictable end-to-end services can be guaranteed. The use of DiffServ principles removes the need for per-flow states along the traffic path. The models discussed illustrated different accuracies of network provisioning. The drawback of accurate network provisioning is the increased complexity and the processing

overhead. The specification of DiffServ has solved one of the most fundamental problems of IntServ, aggregate handling. When IntServ was specified, an appropriate method for identifying aggregated traffic was not known. The DS field in the IP packet header has now solved this problem.

The network provisioning is a compromise between processing overhead and achieved end-to-end QoS. Even if the co-operation model seems tempting, a lot of experience of pure DiffServ behavior is needed before ISPs will employ IntServ/RSVP mechanisms. If more dynamic and accurate network provisioning is needed, the co-operation model is suitable for that purpose.

6. Queuing Mechanisms

This chapter presents the queuing disciplines which can be used to implement Integrated and Differentiated Services and which are used in the measurements in Chapter 7. In a packet switched network, such as the Internet, the queuing discipline has a very important role, especially where quality of service is concerned. Queuing is the act of storing packets in a place where they are held for subsequent processing. A typical router has one process for each input and output port. A forwarding process between the ports determines the destination interface to which a packet should be passed. Thus the role of the router is to bind the input processes to the forwarding and scheduling processes and then to the output processes. This is done with the help of input and output queues, as shown in Figure 6.1. [Ferguson&Huston 1998 p. 55]

The choice of the scheduling algorithm is one of the most fundamental choices in providing quality of service. Each queueing discipline has its own strengths and drawbacks. In addition, the configuration of the scheduler and the queue sizes can be extraordinarily difficult. If the queue is too deep, latency and jitter will increase. On the other hand, a too shallow queue may discard a significant number of packets.

Figure 6.1 Router queueing and scheduling [Ferguson&Huston 1998 p. 56]

6.1 FIFO Queuing

FIFO (First In, First Out) queuing is considered to be the standard method for store-and-forward traffic handling. Packets that enter the input interface queue are placed into the appropriate output interface queue in the order in which they are received. FIFO queuing is fast, since it does not cause computational overhead for the packet processing. If the network operates with a sufficient level of transmission capacity and traffic bursts do not cause packet discard, FIFO queueing is very efficient. When the load on the network increases, the traffic bursts cause queuing delay, and if the queue is fully populated, all subsequent packets are discarded. Since FIFO can not offer service differentiation, real-time data will suffer from network congestion and adequate QoS can not be achieved. [Ferguson&Huston 1998 p. 57]

6.2 Priority Queuing

Priority queueing is considered a primitive form of traffic differentiation. Priority queuing is based on the concept that a certain type of traffic is always transmitted ahead of other types of traffic. The ordering is based on user-defined criteria and several levels of priority can also exist. Priority queuing offers very efficient service differentiation and it does not need complex processing. On the other hand, if the amount of high priority traffic is large, the normal traffic has to wait in a queue, which can cause starvation for best-effort traffic. Packets from the normal traffic queue are dropped due to buffer overflow and a significant amount of latency will occur. [Ferguson&Huston 1998 p. 58]

6.3 Class-Based Queuing

Class-Based Queuing (CBQ), or custom queuing, is a variation of priority queuing. CBQ operates like Weighted Round Robin (WRR). The scheduler can be configured to forward a specified number of bytes from a queue in each service rotation. When a particular queue is being processed, packets are sent until the number of bytes sent exceeds the queue byte count or the queue is empty. CBQ provides a mechanism to allocate dedicated portions of bandwidth to specific types of traffic. Unlike priority queueing, CBQ

protects low priority queues from starvation. The drawback of CBQ is that it requires some amount of computational processing. [Ferguson&Huston 1998 p. 60]

6.4 Weighted Fair Queuing

Weighted Fair Queueing (WFQ) is a flow-based algorithm which provides fair bandwidth allocation. WFQ is based on the fair queuing scheme, which operates according to the round-robin principle. A drawback of the round-robin scheme is that short packets are penalized: more capacity is consumed by flows with a longer average packet size than by flows with a shorter average packet size. This can be overcome by using the so called Bit-Round Fair Queuing (BRFQ) scheme. For more information about BRFQ see e.g. [Stallings 1998 p. 327]. On the other hand, BRFQ can not provide different amounts of capacity to different flows. WFQ has been developed for that purpose; in other words, WFQ is an enhancement of BRFQ. Like fair queuing, WFQ divides flows into their own bounded queues so that they can not unfairly consume network resources from the other traffic flows. A weight is assigned to each flow to guarantee a specific service level. The weight can be assigned e.g. according to the TOS field in the IP packet header or by using RSVP to carry token bucket parameters. Since WFQ requires quite complex calculations, it is not suitable for very fast links. On the other hand, the isolation of flows can provide service guarantees and very sophisticated service differentiation. [Ferguson&Huston 1998 p. 62]

6.5 Random Early Detection

Random Early Detection (RED) provides a mechanism to avoid total congestion of TCP flows and the problem of global synchronization, which can be caused by simultaneous TCP slow starts. RED operates by monitoring the queue depth. The average queue size is compared to two thresholds, a minimum threshold and a maximum threshold. When the average queue size is less than the minimum threshold, no packets are marked to be dropped. When the average queue size is greater than the maximum threshold, every arriving packet is marked to be dropped. If the queue size is between the minimum threshold and the maximum threshold, arriving packets are marked with a dropping probability which is a function of the queue size. When packets from arbitrary flows start to be dropped at random,

the TCP senders get a signal to slow down. In this way total congestion and buffer overflow can be avoided. The threshold value(s) at which RED begins to drop and the dropping probability, i.e. the rate of dropping, are configurable by the network administrator. [Floyd&Jacobson 1993]

The RED algorithm can also be extended to support DiffServ. Since an AF class includes packets with three drop precedences, the RED algorithm needs to be modified to support them. Each drop precedence, indicated by the DSCP, is then associated with its own dropping probability function, see Figure 6.2. For example, the highest priority packets start to be dropped only when the queue is almost full, while all packets arriving with the lowest priority are dropped when only half of the queue is occupied.

Since RED does not perform packet scheduling, i.e. packets are not reordered, it produces much less overhead than e.g. CBQ or WFQ. Therefore RED scales to high speed networks and is remarkably efficient in comparison with the queuing mechanisms mentioned above. [Ferguson&Huston 1998 p. 77] On the other hand, RED can not prioritize packets by urgency. Basically there is only one FIFO queue inside the router, in which all packets wait. Service differentiation is done by the drop probability alone. RED is able to drop normal packets earlier than higher priority packets, but the delay of the higher priority packets will still depend on the packets of other flows.

Figure 6.2 Drop probabilities for a RED extension
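The per-precedence drop probability curves of Figure 6.2 can be sketched as piecewise-linear functions of the average queue size. The thresholds and maximum drop probabilities used below are illustrative values only, not recommendations from any specification.

```python
import random

# Illustrative (min_threshold, max_threshold, max_drop_probability) per drop
# precedence, expressed as fractions of the queue limit: low precedence packets
# start to be dropped earlier and more aggressively than high precedence ones.
RED_PROFILES = {
    "low":    (0.25, 0.50, 0.20),
    "medium": (0.40, 0.70, 0.10),
    "high":   (0.60, 0.90, 0.05),
}

def drop_probability(avg_queue, queue_limit, precedence):
    """Piecewise-linear RED drop probability for one drop precedence."""
    min_th, max_th, max_p = RED_PROFILES[precedence]
    fill = avg_queue / queue_limit
    if fill < min_th:
        return 0.0
    if fill >= max_th:
        return 1.0                      # above the maximum threshold: drop everything
    return max_p * (fill - min_th) / (max_th - min_th)

def admit(avg_queue, queue_limit, precedence):
    """Randomized early drop decision for an arriving packet."""
    return random.random() >= drop_probability(avg_queue, queue_limit, precedence)

# With the queue 65% full, low precedence packets are certain to be dropped while
# high precedence packets are still accepted most of the time.
for prec in ("low", "medium", "high"):
    print(prec, round(drop_probability(65, 100, prec), 3))
```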

7. Measurements

This chapter presents measurements which explore what kind of quality of service the combination of Integrated and Differentiated Services can offer. The scenario used in the measurements assumes that RSVP and IntServ are used in access networks and DiffServ is used in core networks. The architecture model is similar to that in Section 5.2. The other purpose of the measurements was to study the scaling problem of RSVP and IntServ and to illustrate when the processing overhead becomes too heavy for a particular router model.

7.1 Test Environment

The test network presented in Figure 7.1 contained three routers and six workstations. Router 1 and router 2 were Cisco Systems 2500-series routers, which is a very common access router model in today's corporate networks [Isomäki&Tuominen 1999]. Internetwork Operating System (IOS) 12.4 was installed in the routers. The third router was a Cisco 4700 router with IOS 11.2, but its role was quite minimal in the measurements. The traffic senders and receivers were SUN workstations with the Solaris operating system. The workstations supported RSVP with the ISI implementation.

The link between router 1 and router 2 was a serial link with a speed of 2 Mbit/s and it simulated an access link to the ISP's network. Routers 2 and 3 were connected via 10Base-T Ethernet and this link simulated a high-speed core network. A Cisco 2500-series router might not be a reasonable choice for a core router, but in this way the scalability issues could be noticed easily. All the other links were also 10Base-T Ethernet.

All the traffic was generated and received by the public domain software package MGEN [MGEN], which understands RSVP. A sender process inserts time stamps and sequence numbers when sending the packets, while the receiver adds an arrival time stamp to the packets and writes the received data to a file. Calculations of the packet delay and loss statistics were made off-line by the MCALC program, which is part of the MGEN package. Time synchronization of the workstations was done using the Network Time Protocol (NTP).
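The off-line calculation of the delay and loss statistics can be illustrated with the following sketch. The record format is invented for the example and does not correspond to the actual MGEN log format or to the MCALC implementation.

```python
# Each record: (sequence_number, send_timestamp_s, receive_timestamp_s).
records = [
    (1, 0.000, 0.021),
    (2, 0.100, 0.119),
    (4, 0.300, 0.335),   # sequence number 3 was never received
    (5, 0.400, 0.428),
]

def delay_and_loss(records):
    """Compute delay and loss statistics from the send/receive records."""
    delays_ms = [(rx - tx) * 1000.0 for _, tx, rx in records]
    seqs = [seq for seq, _, _ in records]
    expected = max(seqs) - min(seqs) + 1      # packets the sender emitted in this range
    lost = expected - len(seqs)
    return {
        "avg_delay_ms": round(sum(delays_ms) / len(delays_ms), 2),
        "max_delay_ms": round(max(delays_ms), 2),
        "min_delay_ms": round(min(delays_ms), 2),
        "packets_dropped": lost,
        "loss_percent": round(100.0 * lost / expected, 1),
    }

print(delay_and_loss(records))
# avg 25.75 ms, max 35.0 ms, min 19.0 ms; 1 of 5 packets (20.0 %) lost
```

Note that, as in the measurements, such one-way delays are meaningful only when the sender and receiver clocks are synchronized, here with NTP.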

Figure 7.1 Test network

7.2 Test Cases

The measurements consisted of three cases. In the first case the test network was lightly loaded and about 15% of the link capacity was real-time traffic. In the second and third cases the network was overloaded, with 30% and 50% of real-time traffic, respectively. Table 7.2 presents the detailed traffic characteristics of the network in each case.

The real-time and best-effort traffic used in the measurements was generated by the MGEN program. MGEN allows the user to determine the packet size, packet rate, transmission pattern and the RSVP Tspec and Flowspec. The real-time data emulated IP phone calls and IP video streams. The characteristics of the IP phone streams were collected from IP phone measurements. The generated traffic emulated two different voice codecs: adpcm and gsm. The speeds of the video streams were 200 kbit/s on average and represented bursty data streams. The traffic characteristics are summarized in Table 7.1.

Table 7.1 The properties of the generated flows

Stream        | Packet size [bytes] | Protocol | Bit-rate     | Packet rate [packets/s] | Data rate [kbit/s]
VoIP (adpcm)  | 520                 | UDP      | constant (a) |                         | 41.6
VoIP (gsm)    | 400                 | UDP      | constant     |                         | 14.4
Video         | 1250                | UDP      | variable     |                         | 200
BE            | 1500                | UDP      | variable     | see Table 7.2           | see Table 7.2

a. A three-minute discussion without any silence may not be very realistic, at least in Finland.

Table 7.2 The description of the traffic used in the test cases

Data flows                                                  | case 1 | case 2    | case 3
Number of 41.6 kbit adpcm VoIP flows in the access link     |        |           |
Number of 41.6 kbit adpcm VoIP flows in the core link       |        |           |
Number of 14.4 kbit gsm flows in the access link            |        |           |
Number of 14.4 kbit gsm flows in the core link              |        |           |
Number of 200 kbit video flows in the access link           |        |           |
Number of 200 kbit video flows in the core link             |        |           |
Best effort data rate in the access link [kbit/s]           |        |           |
Best effort data rate in the core link [kbit/s]             |        |           |
Total amount of real-time data in the access link [kbit/s]  |        |           |
Total amount of real-time data in the core link [kbit/s]    |        |           |
Total amount of traffic in the access link [kbit/s]         |        | overload  | overload
Total amount of traffic in the core link [kbit/s]           | 4464   | ~9100 (a) | ~10500
Percent of real-time data in the access link                |        |           |
Percent of real-time data in the core link                  |        |           |

a. If router 2 was badly overloaded, it shut down the serial link. For this reason the core link is only slightly overloaded.

7.3 Network Configurations

In each test case the test network was configured to represent four different kinds of QoS architecture. The network types were 1) best-effort, 2) IntServ, 3) access link IntServ and core link DiffServ and 4) access link IntServ and core link best-effort. In addition, when the core link was DiffServ, two different scheduler types were used: priority queuing and class based queuing, similar to what is presented in RFC 2598 [RFC2598].

Best-Effort Configuration

The best-effort network used FIFO queuing with a queue size of 40 packets (default value). All flows were equal and no reservations for real-time flows were made.

IntServ Configuration

The current implementation of Cisco's RSVP uses WFQ as the scheduling algorithm. WFQ isolates each flow and therefore builds its own queue for each flow. This means that every best-effort flow also gets its own queue, which may be appropriate on low-speed links, but not in the high-speed core. The isolation of each flow would cause considerable processing overhead. In order that RSVP would not suffer too much from this implementation-specific solution, this measurement used only two large best-effort streams, one generated from workstation 2 and the other from workstation 3. This solution models the router concept where best-effort traffic is first scheduled by FIFO and then all RSVP queues and one (in this case two) best-effort queues are scheduled by the WFQ mechanism. In other words, the RSVP flows are isolated and the best-effort flows use one shared queue.

The service class for the real-time flows in the IntServ network was chosen to be controlled-load, because earlier measurements indicated that there was no difference between Cisco's implementations of the guaranteed and controlled-load services. WFQ was configured to use a packet discard threshold of 64 (default value) and the routers assigned the flows a weight factor depending on their reservation parameters, i.e. Flowspecs. Table 7.3 presents the used IntServ reservation parameters.

Table 7.3 IntServ reservation parameters

Flow                    | Token rate | Bucket depth
41.6 kbit/s adpcm VoIP  | 42 kbit/s  | 2600 bytes
14.4 kbit/s gsm VoIP    | 15 kbit/s  | 2000 bytes
200 kbit/s video stream | 200 kbit/s | 6250 bytes

DiffServ Configuration

The DiffServ interface of router 2 emulated the EF PHB, which is an appropriate choice for IP phone applications. As described in [RFC2598], this forwarding behavior can be implemented by several types of queue scheduling mechanisms. This measurement employed both priority queuing (PQ) and Cisco's custom queuing, which is similar to CBQ. The classification in both cases was based on UDP port numbers, since these queueing disciplines did not yet understand the values of the TOS field.

Priority queuing separated real-time and best-effort traffic into two queues. The higher priority real-time queue was served first and, if that queue was empty, the normal priority best-effort queue was then served. The queue size was configured to be 80 packets for the real-time traffic and 140 packets for the best-effort traffic. Custom queuing operates according to the same principle as weighted round robin or CBQ. [Cisco] In these measurements the scheduler was configured to give 40% of the link capacity to the real-time flows. The ratio was calculated by using the smallest packet size of the real-time traffic, 400 bytes. The queue sizes were 100 packets and 200 packets for real-time and best-effort traffic, respectively.

7.4 Test Results and Analysis

The results were collected by monitoring an individual 42 kbit/s microflow, which travelled from workstation 1 to workstation 6, see Figure 7.1. There was no other traffic between these workstations which could have affected the behavior of these hosts. The load of the access link, both the real-time and the best-effort traffic, was generated by workstation 2. Workstations 3 and 4 sent the load to the core link and workstation 6 received all traffic

generated by WS2, WS3 and WS4. The purpose of that traffic was only to load routers 1 and 2. The QoS parameters of the background traffic were not measured.

The measurements consisted of three cases and five different combinations of QoS architectures. In each test case the achieved data rate, the delay values and the packet loss rate for an individual microflow were measured. The presented values are averages of five individual measurements, each with a duration of 180 seconds. Tables 7.4, 7.5 and 7.6 provide the numerical results for cases 1, 2 and 3, respectively.

Table 7.4 Case 1, test results (measured quantities: received data rate [kbps], average, maximum and minimum delay [ms] and packets dropped, for the BE, RSVP, RSVP-DIFF (PQ), RSVP-DIFF (CBQ) and RSVP-BE configurations)

Table 7.4 shows that when the network is lightly loaded all mechanisms provided almost equal service. The delay values are from application to application, thus also including the processing delays caused by the end hosts. The absolute values are not important here, but the relative changes due to the network configurations and loads contain the essential information.

Table 7.5 Case 2, test results (same quantities and configurations as in Table 7.4)

Table 7.5 shows the results from test case 2, when the network was overloaded and about 30% of the network load was real-time traffic. The pure best-effort network could not provide sufficient service for real-time traffic anymore. The pure RSVP configuration and the combination of RSVP and DiffServ implemented by priority queuing were still able to offer reasonable QoS. The CBQ implementation increased delays, but the bandwidth was still large enough for the IP phone. The most interesting observation is that the combination of the RSVP and best-effort configurations seemed to offer quite satisfactory QoS. This is because only the access link was considerably overloaded and there RSVP provided the reservation. The core link was only slightly overloaded and the queue depth was kept short. Therefore only a few packets were dropped and the delays remained small. If an application can recover from some packet drops but needs small delays, the configuration is very reasonable for that kind of network condition.

Table 7.6 presents the performance results when the network was heavily overloaded and there was a lot of real-time traffic. The most interesting feature here is the behavior of the RSVP network. When the number of flows increased, the processing overhead probably became too high and some packets were dropped. However, the priority queuing configuration did not drop any packets and the average delay stayed at a reasonable value, even though the size of the aggregate was almost 4 Mbit/s. The CBQ implementation of the DiffServ configuration also gave quite sufficient performance, although the delays could be too large for some delay sensitive applications. When the core link was also overloaded, the best-effort configuration was not reasonable anymore. The delays stayed small due to the small queue depth, but the bandwidth of the microflow decreased too much. The following figures illustrate the delays in each case. All figures are on the same scale and the packet delays are presented as a function of time. Packet drops are not visible.

Table 7.6 Case 3, test results (same quantities and configurations as in Table 7.4)

Figure 7.2 Delay values in the unloaded network, case 1

Figure 7.2 presents the delays in the unloaded network and serves as the baseline, showing the reference delay characteristics for the other measurements. The rest of the figures present the delay values in selected network configurations. Figure 7.3 presents the worst case delay characteristics. The best-effort network is overloaded and both FIFO queues, at the access node and at the core node, are constantly full.

Figure 7.3 Delay values in the best-effort configuration, case 3

Figure 7.4 Delay values of the IntServ/RSVP network, case 3

In the RSVP configuration, the jitter is considerably greater than in the reference case, see Figure 7.2. However, the average delay is still at a reasonable level, at least when compared to the best-effort network configuration. The priority queuing implementation of the EF PHB gave the best QoS values for the test flow. The packets were not dropped and the delays stayed quite close to the base case, which can be seen by comparing Figures 7.2 and 7.5. The few delay peaks might be caused by bursts of large video stream packets inside the real-time aggregate flow.

Figure 7.5 Delay values of the combined network, when the EF PHB was implemented by PQ, case 3

Figure 7.6 Delay values of the combined network, when the EF PHB was implemented by CBQ, case 3

Figure 7.6 shows that when the size of the aggregate flow was equal to the configured rate of the CBQ scheduler, the delays and jitter increased remarkably. The jitter might be caused by the bursty video stream inside the aggregate. Actually, there should be a traffic shaper at the edge node which smooths the traffic stream before it enters the DiffServ network. Thus the situation in the test network was not totally realistic and therefore CBQ gave a worse result than expected. By comparing Figures 7.5 and 7.6 it can be seen that PQ gives much better QoS values for real-time traffic than CBQ, which is not surprising.

Figure 7.7 Delay values of the combined network, when the core link was best-effort, case 3


Implementing QoS in IP networks

Implementing QoS in IP networks Adam Przybyłek http://przybylek.wzr.pl University of Gdańsk, Department of Business Informatics Piaskowa 9, 81-824 Sopot, Poland Abstract With the increasing number of real-time Internet applications,

More information

Quality of Service in the Internet

Quality of Service in the Internet Quality of Service in the Internet Problem today: IP is packet switched, therefore no guarantees on a transmission is given (throughput, transmission delay, ): the Internet transmits data Best Effort But:

More information

CS 268: Integrated Services

CS 268: Integrated Services Limitations of IP Architecture in Supporting Resource Management CS 268: Integrated Services Ion Stoica February 23, 2004 IP provides only best effort service IP does not participate in resource management

More information

Quality of Service in the Internet. QoS Parameters. Keeping the QoS. Leaky Bucket Algorithm

Quality of Service in the Internet. QoS Parameters. Keeping the QoS. Leaky Bucket Algorithm Quality of Service in the Internet Problem today: IP is packet switched, therefore no guarantees on a transmission is given (throughput, transmission delay, ): the Internet transmits data Best Effort But:

More information

CS High Speed Networks. Dr.G.A.Sathish Kumar Professor EC

CS High Speed Networks. Dr.G.A.Sathish Kumar Professor EC CS2060 - High Speed Networks Dr.G.A.Sathish Kumar Professor EC UNIT V PROTOCOLS FOR QOS SUPPORT UNIT V PROTOCOLS FOR QOS SUPPORT RSVP Goals & Characteristics RSVP operations, Protocol Mechanisms Multi

More information

Congestion Control and Resource Allocation

Congestion Control and Resource Allocation Problem: allocating resources Congestion control Quality of service Congestion Control and Resource Allocation Hongwei Zhang http://www.cs.wayne.edu/~hzhang The hand that hath made you fair hath made you

More information

CSE 123b Communications Software

CSE 123b Communications Software CSE 123b Communications Software Spring 2002 Lecture 10: Quality of Service Stefan Savage Today s class: Quality of Service What s wrong with Best Effort service? What kinds of service do applications

More information

Resource Reservation Protocol

Resource Reservation Protocol 48 CHAPTER Chapter Goals Explain the difference between and routing protocols. Name the three traffic types supported by. Understand s different filter and style types. Explain the purpose of tunneling.

More information

Advanced Lab in Computer Communications Meeting 6 QoS. Instructor: Tom Mahler

Advanced Lab in Computer Communications Meeting 6 QoS. Instructor: Tom Mahler Advanced Lab in Computer Communications Meeting 6 QoS Instructor: Tom Mahler Motivation Internet provides only single class of best-effort service. Some applications can be elastic. Tolerate delays and

More information

Differentiated Services

Differentiated Services 1 Differentiated Services QoS Problem Diffserv Architecture Per hop behaviors 2 Problem: QoS Need a mechanism for QoS in the Internet Issues to be resolved: Indication of desired service Definition of

More information

Page 1. Quality of Service. CS 268: Lecture 13. QoS: DiffServ and IntServ. Three Relevant Factors. Providing Better Service.

Page 1. Quality of Service. CS 268: Lecture 13. QoS: DiffServ and IntServ. Three Relevant Factors. Providing Better Service. Quality of Service CS 268: Lecture 3 QoS: DiffServ and IntServ Ion Stoica Computer Science Division Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley,

More information

Multimedia Networking. Network Support for Multimedia Applications

Multimedia Networking. Network Support for Multimedia Applications Multimedia Networking Network Support for Multimedia Applications Protocols for Real Time Interactive Applications Differentiated Services (DiffServ) Per Connection Quality of Services Guarantees (IntServ)

More information

Internet Engineering Task Force (IETF) December 2014

Internet Engineering Task Force (IETF) December 2014 Internet Engineering Task Force (IETF) Request for Comments: 7417 Category: Experimental ISSN: 2070-1721 G. Karagiannis Huawei Technologies A. Bhargava Cisco Systems, Inc. December 2014 Extensions to Generic

More information

Multi-Protocol Label Switching

Multi-Protocol Label Switching Rheinisch-Westfälische Technische Hochschule Aachen Lehrstuhl für Informatik IV Prof. Dr. rer. nat. Otto Spaniol Multi-Protocol Label Switching Seminar: Datenkommunikation und Verteilte Systeme SS 2003

More information

Quality of Service for Multimedia over Next Generation Data Networks

Quality of Service for Multimedia over Next Generation Data Networks Quality of Service for Multimedia over Next Generation Data Networks Mohammed Atiquzzaman Department of Electrical & Computer Engineering University of Dayton Dayton, OH 45469. Tel: (937) 229 3183, Fax:

More information

Telecommunication Services Engineering Lab. Roch H. Glitho

Telecommunication Services Engineering Lab. Roch H. Glitho 1 Quality of Services 1. Terminology 2. Technologies 2 Terminology Quality of service Ability to control network performance in order to meet application and/or end-user requirements Examples of parameters

More information

Lecture Outline. Bag of Tricks

Lecture Outline. Bag of Tricks Lecture Outline TELE302 Network Design Lecture 3 - Quality of Service Design 1 Jeremiah Deng Information Science / Telecommunications Programme University of Otago July 15, 2013 2 Jeremiah Deng (Information

More information

CS519: Computer Networks. Lecture 5, Part 5: Mar 31, 2004 Queuing and QoS

CS519: Computer Networks. Lecture 5, Part 5: Mar 31, 2004 Queuing and QoS : Computer Networks Lecture 5, Part 5: Mar 31, 2004 Queuing and QoS Ways to deal with congestion Host-centric versus router-centric Reservation-based versus feedback-based Window-based versus rate-based

More information

Quality of Service (QoS) Computer network and QoS ATM. QoS parameters. QoS ATM QoS implementations Integrated Services Differentiated Services

Quality of Service (QoS) Computer network and QoS ATM. QoS parameters. QoS ATM QoS implementations Integrated Services Differentiated Services 1 Computer network and QoS QoS ATM QoS implementations Integrated Services Differentiated Services Quality of Service (QoS) The data transfer requirements are defined with different QoS parameters + e.g.,

More information

Telematics 2. Chapter 3 Quality of Service in the Internet. (Acknowledgement: These slides have been compiled from Kurose & Ross, and other sources)

Telematics 2. Chapter 3 Quality of Service in the Internet. (Acknowledgement: These slides have been compiled from Kurose & Ross, and other sources) Telematics 2 Chapter 3 Quality of Service in the Internet (Acknowledgement: These slides have been compiled from Kurose & Ross, and other sources) Telematics 2 (WS 14/15): 03 Internet QoS 1 Improving QOS

More information

Queue Overflow. Dropping Packets. Tail Drop. Queues will always sometimes overflow. But Cause more variation in delay (jitter)

Queue Overflow. Dropping Packets. Tail Drop. Queues will always sometimes overflow. But Cause more variation in delay (jitter) Queue Overflow Queues will always sometimes overflow Can reduce chances by allocating more queue memory But Cause more variation in delay (jitter) So Often want only short queues Just enough to cope with

More information

Unit 2 Packet Switching Networks - II

Unit 2 Packet Switching Networks - II Unit 2 Packet Switching Networks - II Dijkstra Algorithm: Finding shortest path Algorithm for finding shortest paths N: set of nodes for which shortest path already found Initialization: (Start with source

More information

ip rsvp reservation-host

ip rsvp reservation-host Quality of Service Commands ip rsvp reservation-host ip rsvp reservation-host To enable a router to simulate a host generating Resource Reservation Protocol (RSVP) RESV messages, use the ip rsvp reservation-host

More information

EECS 122: Introduction to Computer Networks Resource Management and QoS. Quality of Service (QoS)

EECS 122: Introduction to Computer Networks Resource Management and QoS. Quality of Service (QoS) EECS 122: Introduction to Computer Networks Resource Management and QoS Computer Science Division Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley,

More information

Configuring QoS. Understanding QoS CHAPTER

Configuring QoS. Understanding QoS CHAPTER 29 CHAPTER This chapter describes how to configure quality of service (QoS) by using automatic QoS (auto-qos) commands or by using standard QoS commands on the Catalyst 3750 switch. With QoS, you can provide

More information

IP Differentiated Services

IP Differentiated Services Course of Multimedia Internet (Sub-course Reti Internet Multimediali ), AA 2010-2011 Prof. 7. IP Diffserv introduction Pag. 1 IP Differentiated Services Providing differentiated services in IP networks

More information

Master Course Computer Networks IN2097

Master Course Computer Networks IN2097 Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Master

More information

Computer Network Fundamentals Fall Week 12 QoS Andreas Terzis

Computer Network Fundamentals Fall Week 12 QoS Andreas Terzis Computer Network Fundamentals Fall 2008 Week 12 QoS Andreas Terzis Outline QoS Fair Queuing Intserv Diffserv What s the Problem? Internet gives all flows the same best effort service no promises about

More information

QUALITY of SERVICE. Introduction

QUALITY of SERVICE. Introduction QUALITY of SERVICE Introduction There are applications (and customers) that demand stronger performance guarantees from the network than the best that could be done under the circumstances. Multimedia

More information

Master Course Computer Networks IN2097

Master Course Computer Networks IN2097 Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Master Course Computer Networks IN2097 Prof. Dr.-Ing. Georg Carle Christian Grothoff, Ph.D. Chair for

More information

Problems with IntServ. EECS 122: Introduction to Computer Networks Differentiated Services (DiffServ) DiffServ (cont d)

Problems with IntServ. EECS 122: Introduction to Computer Networks Differentiated Services (DiffServ) DiffServ (cont d) Problems with IntServ EECS 122: Introduction to Computer Networks Differentiated Services (DiffServ) Computer Science Division Department of Electrical Engineering and Computer Sciences University of California,

More information

QoS Guarantees. Motivation. . link-level level scheduling. Certain applications require minimum level of network performance: Ch 6 in Ross/Kurose

QoS Guarantees. Motivation. . link-level level scheduling. Certain applications require minimum level of network performance: Ch 6 in Ross/Kurose QoS Guarantees. introduction. call admission. traffic specification. link-level level scheduling. call setup protocol. reading: Tannenbaum,, 393-395, 395, 458-471 471 Ch 6 in Ross/Kurose Motivation Certain

More information

Overview. Lecture 22 Queue Management and Quality of Service (QoS) Queuing Disciplines. Typical Internet Queuing. FIFO + Drop tail Problems

Overview. Lecture 22 Queue Management and Quality of Service (QoS) Queuing Disciplines. Typical Internet Queuing. FIFO + Drop tail Problems Lecture 22 Queue Management and Quality of Service (QoS) Overview Queue management & RED Fair queuing Khaled Harras School of Computer Science niversity 15 441 Computer Networks Based on slides from previous

More information

Part1: Lecture 4 QoS

Part1: Lecture 4 QoS Part1: Lecture 4 QoS Last time Multi stream TCP: SCTP Multi path TCP RTP and RTCP SIP H.323 VoIP Router architectures Overview two key router functions: run routing algorithms/protocol (RIP, OSPF, BGP)

More information

Multicast and Quality of Service. Internet Technologies and Applications

Multicast and Quality of Service. Internet Technologies and Applications Multicast and Quality of Service Internet Technologies and Applications Aims and Contents Aims Introduce the multicast and the benefits it offers Explain quality of service and basic techniques for delivering

More information

Last time! Overview! 14/04/15. Part1: Lecture 4! QoS! Router architectures! How to improve TCP? SYN attacks SCTP. SIP and H.

Last time! Overview! 14/04/15. Part1: Lecture 4! QoS! Router architectures! How to improve TCP? SYN attacks SCTP. SIP and H. Last time Part1: Lecture 4 QoS How to improve TCP? SYN attacks SCTP SIP and H.323 RTP and RTCP Router architectures Overview two key router functions: run routing algorithms/protocol (RIP, OSPF, BGP) forwarding

More information

Protocols. End-to-end connectivity (host-to-host) Process-to-Process connectivity Reliable communication

Protocols. End-to-end connectivity (host-to-host) Process-to-Process connectivity Reliable communication Protocols Tasks End-to-end connectivity (host-to-host) Process-to-Process connectivity Reliable communication Error detection Error recovery, e.g. forward error correction or retransmission Resource management

More information

Networking Quality of service

Networking Quality of service System i Networking Quality of service Version 6 Release 1 System i Networking Quality of service Version 6 Release 1 Note Before using this information and the product it supports, read the information

More information

Overview Computer Networking What is QoS? Queuing discipline and scheduling. Traffic Enforcement. Integrated services

Overview Computer Networking What is QoS? Queuing discipline and scheduling. Traffic Enforcement. Integrated services Overview 15-441 15-441 Computer Networking 15-641 Lecture 19 Queue Management and Quality of Service Peter Steenkiste Fall 2016 www.cs.cmu.edu/~prs/15-441-f16 What is QoS? Queuing discipline and scheduling

More information

IntServ and RSVP. Overview. IntServ Fundamentals. Tarik Cicic University of Oslo December 2001

IntServ and RSVP. Overview. IntServ Fundamentals. Tarik Cicic University of Oslo December 2001 IntServ and RSVP Tarik Cicic University of Oslo December 2001 Overview Integrated Services in the Internet (IntServ): motivation service classes Resource Reservation Protocol (RSVP): description of the

More information

INSE 7110 Winter 2009 Value Added Services Engineering in Next Generation Networks Week #2. Roch H. Glitho- Ericsson/Concordia University

INSE 7110 Winter 2009 Value Added Services Engineering in Next Generation Networks Week #2. Roch H. Glitho- Ericsson/Concordia University INSE 7110 Winter 2009 Value Added Services Engineering in Next Generation Networks Week #2 1 Outline 1. Basics 2. Media Handling 3. Quality of Service (QoS) 2 Basics - Definitions - History - Standards.

More information

The Assured Forwarding PHB group

The Assured Forwarding PHB group 9. Other Diffserv PHBs Pag. 1 A typical application using the AF PHB is that of a company which uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets

More information

Quality of Service Basics

Quality of Service Basics Quality of Service Basics Summer Semester 2011 Integrated Communication Systems Group Ilmenau University of Technology Content QoS requirements QoS in networks Basic QoS mechanisms QoS in IP networks IntServ

More information

CS 356: Computer Network Architectures. Lecture 24: IP Multicast and QoS [PD] Chapter 4.2, 6.5. Xiaowei Yang

CS 356: Computer Network Architectures. Lecture 24: IP Multicast and QoS [PD] Chapter 4.2, 6.5. Xiaowei Yang CS 356: Computer Network Architectures Lecture 24: IP Multicast and QoS [PD] Chapter 4.2, 6.5 Xiaowei Yang xwy@cs.duke.edu Overview Two historic important topics in networking Multicast QoS Limited Deployment

More information

Internet QoS : A Big Picture

Internet QoS : A Big Picture Internet QoS : A Big Picture Xipeng Xiao and Lionel M. Ni, M, Michigan State University IEEE Network, March/April 1999 Oct 25, 2006 Jaekyu Cho Outline Introduction IntServ/RSVP DiffServ MPLS Traffic Engineering/CBR

More information

EPL606. Quality of Service and Traffic Classification

EPL606. Quality of Service and Traffic Classification EPL606 Quality of Service and Traffic Classification 1 Multimedia, Quality of Service: What is it? Multimedia applications: network audio and video ( continuous media ) QoS network provides application

More information

Configuring QoS CHAPTER

Configuring QoS CHAPTER CHAPTER 34 This chapter describes how to use different methods to configure quality of service (QoS) on the Catalyst 3750 Metro switch. With QoS, you can provide preferential treatment to certain types

More information

Network Support for Multimedia

Network Support for Multimedia Network Support for Multimedia Daniel Zappala CS 460 Computer Networking Brigham Young University Network Support for Multimedia 2/33 make the best of best effort use application-level techniques use CDNs

More information

September General Characterization Parameters for Integrated Service Network Elements. Status of this Memo

September General Characterization Parameters for Integrated Service Network Elements. Status of this Memo Network Working Group Request for Comments: 2215 Category: Standards Track S. Shenker J. Wroclawski Xerox PARC/MIT LCS September 1997 General Characterization Parameters for Integrated Service Network

More information

Quality of Service (QoS)

Quality of Service (QoS) Quality of Service (QoS) What you will learn Techniques for QoS Integrated Service (IntServ) Differentiated Services (DiffServ) MPLS QoS Design Principles 1/49 QoS in the Internet Paradigm IP over everything

More information

Marking Traffic CHAPTER

Marking Traffic CHAPTER CHAPTER 7 To service the growing numbers of customers and their needs, service provider networks have become more complex and often include both Layer 2 and Layer 3 network devices. With this continued

More information

QoS Configuration. Overview. Introduction to QoS. QoS Policy. Class. Traffic behavior

QoS Configuration. Overview. Introduction to QoS. QoS Policy. Class. Traffic behavior Table of Contents QoS Configuration 1 Overview 1 Introduction to QoS 1 QoS Policy 1 Traffic Policing 2 Congestion Management 3 Line Rate 9 Configuring a QoS Policy 9 Configuration Task List 9 Configuring

More information

Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel

Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel Semra.gulder@crc.ca, mathieu.deziel@crc.ca Abstract: This paper describes a QoS mechanism suitable for Mobile Ad Hoc Networks

More information

Differentiated Service Router Architecture - Classification, Metering and Policing

Differentiated Service Router Architecture - Classification, Metering and Policing Differentiated Service Router Architecture - Classification, Metering and Policing Presenters: Daniel Lin and Frank Akujobi Carleton University, Department of Systems and Computer Engineering 94.581 Advanced

More information

Towards Service Differentiation on the Internet

Towards Service Differentiation on the Internet Towards Service Differentiation on the Internet from New Internet and Networking Technologies and Their Application on Computational Sciences, invited talk given at Ho Chi Minh City, Vietnam March 3-5,

More information

RSVP Support for RTP Header Compression, Phase 1

RSVP Support for RTP Header Compression, Phase 1 RSVP Support for RTP Header Compression, Phase 1 The Resource Reservation Protocol (RSVP) Support for Real-Time Transport Protocol (RTP) Header Compression, Phase 1 feature provides a method for decreasing

More information

Lecture 24: Scheduling and QoS

Lecture 24: Scheduling and QoS Lecture 24: Scheduling and QoS CSE 123: Computer Networks Alex C. Snoeren HW 4 due Wednesday Lecture 24 Overview Scheduling (Weighted) Fair Queuing Quality of Service basics Integrated Services Differentiated

More information

Dr.S.Ravi 1, A. Ramasubba Reddy 2, Dr.V.Jeyalakshmi 3 2 PG Student- M.Tech. VLSI and Embedded System 1, 3 Professor

Dr.S.Ravi 1, A. Ramasubba Reddy 2, Dr.V.Jeyalakshmi 3 2 PG Student- M.Tech. VLSI and Embedded System 1, 3 Professor RSVP Protocol Used in Real Time Application Networks Dr.S.Ravi 1, A. Ramasubba Reddy 2, Dr.V.Jeyalakshmi 3 2 PG Student- M.Tech. VLSI and Embedded System 1, 3 Professor Dept. Electronics and Communication

More information

HUAWEI NetEngine5000E Core Router V800R002C01. Feature Description - QoS. Issue 01 Date HUAWEI TECHNOLOGIES CO., LTD.

HUAWEI NetEngine5000E Core Router V800R002C01. Feature Description - QoS. Issue 01 Date HUAWEI TECHNOLOGIES CO., LTD. V800R002C01 Issue 01 Date 2011-10-15 HUAWEI TECHNOLOGIES CO., LTD. 2011. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior written

More information

Author : S.chandrashekhar Designation: Project Leader Company : Sasken Communication Technologies

Author : S.chandrashekhar Designation: Project Leader Company : Sasken Communication Technologies White Paper On Sasken IP Quality of Service Integrated Services Operation Over Differentiated Service Networks & Policy Based Admission Control in RSVP Author : S.chandrashekhar Designation: Project Leader

More information

Request For Comments: 2211 Category: Standards Track September Specification of the Controlled-Load Network Element Service

Request For Comments: 2211 Category: Standards Track September Specification of the Controlled-Load Network Element Service Network Working Group J. Wroclawski Request For Comments: 2211 MIT LCS Category: Standards Track September 1997 Specification of the Controlled-Load Network Element Service Status of this Memo This document

More information

Integrated Services - Overview

Integrated Services - Overview Multicast QoS Need bandwidth/delay guarantees On many links unknown to sender Fortunately QoS development after multicast Takes multicast into account RSVP reservations from receivers toward sender rules

More information

The Assured Forwarding PHB group

The Assured Forwarding PHB group Course of Multimedia Internet (Sub-course Reti Internet Multimediali ), AA 2010-2011 Prof. 9. Other Diffserv PHBs Pag. 1 A typical application using the AF PHB is that of a company which uses the Internet

More information

Internet Service Quality: A Survey and Comparison of the IETF Approaches

Internet Service Quality: A Survey and Comparison of the IETF Approaches Internet Service Quality: A Survey and Comparison of the IETF Approaches María E. Villapol and Jonathan Billington Cooperative Research Centre for Satellite Systems University of South Australia SPRI Building,

More information

INTEGRATED SERVICES AND DIFFERENTIATED SERVICES: A FUNCTIONAL COMPARISON

INTEGRATED SERVICES AND DIFFERENTIATED SERVICES: A FUNCTIONAL COMPARISON INTEGRATED SERVICES AND DIFFERENTIATED SERVICES: A FUNCTIONAL COMPARON Franco Tommasi, Simone Molendini Faculty of Engineering, University of Lecce, Italy e-mail: franco.tommasi@unile.it, simone.molendini@unile.it

More information

DiffServ Architecture: Impact of scheduling on QoS

DiffServ Architecture: Impact of scheduling on QoS DiffServ Architecture: Impact of scheduling on QoS Abstract: Scheduling is one of the most important components in providing a differentiated service at the routers. Due to the varying traffic characteristics

More information

IP Quality of Service (QoS)

IP Quality of Service (QoS) IP Quality of Service (QoS) Muhammad Jaseemuddin Dept. of Electrical & Computer Engineering Ryerson University Toronto, Canada References 1. Larry L. Peterson, Bruce S. Davie, Computer Networks: A Systems

More information

Multimedia Networking

Multimedia Networking CMPT765/408 08-1 Multimedia Networking 1 Overview Multimedia Networking The note is mainly based on Chapter 7, Computer Networking, A Top-Down Approach Featuring the Internet (4th edition), by J.F. Kurose

More information

Telematics 2 & Performance Evaluation

Telematics 2 & Performance Evaluation Telematics 2 & Performance Evaluation Chapter 2 Quality of Service in the Internet (Acknowledgement: These slides have been compiled from Kurose & Ross, and other sources) 1 Improving QoS in IP Networks

More information

TDDD82 Secure Mobile Systems Lecture 6: Quality of Service

TDDD82 Secure Mobile Systems Lecture 6: Quality of Service TDDD82 Secure Mobile Systems Lecture 6: Quality of Service Mikael Asplund Real-time Systems Laboratory Department of Computer and Information Science Linköping University Based on slides by Simin Nadjm-Tehrani

More information

Protocols for Multimedia on the Internet

Protocols for Multimedia on the Internet Protocols for Multimedia on the Internet Network Columbus, OH 43210 Jain@CIS.Ohio-State.Edu http://www.cis.ohio-state.edu/~jain/ 12-1 Overview Integrated services Resource Reservation Protocol: RSVP Integrated

More information