International Workshop NGNT

DiffServ and MPLS

Tímea Dreilinger

Abstract

Multi Protocol Label Switching (MPLS) technology enables Internet Service Providers to scale their current offerings and to exercise more control over their growing networks. At the same time, providers need new ways of differentiating services, because emerging applications such as voice and video transmission require guaranteed service quality. This problem can be addressed with the Differentiated Services (DiffServ) technology. The combination of DiffServ and MPLS therefore seems to be a very attractive strategy for backbone network providers. The Internet Engineering Task Force (IETF) [10] has recently standardized MPLS support of DiffServ [9], but practical experience with it has not been published yet. This research attempts to underline the necessity of integrating MPLS and DiffServ.

Keywords: MPLS, DiffServ

1. Introduction

The growth of the Internet over the last several years has placed a tremendous strain on this besieged superhighway: users, connection speeds, backbone traffic, Internet Service Provider (ISP) networks and new applications all keep increasing. Additional pressure comes from the expectations of Internet users. Since the Internet is nowadays a worldwide commercial data network, it has become the proving ground for commerce, managed public data services (including intranets) and broadcast media, all of which presuppose a level of service that includes dependability, predictability and consistent performance. As a result, carriers and Internet providers today face the challenge of scaling the capacity, performance and predictability of their network infrastructure, and of offering enhanced data services to support their customers' TCP/IP applications and emerging markets.

In addition to providing large data pipes for their customers, service providers are looking for ways to offer additional services. As their networks grow along with their customer base, their key concerns are to scale service offerings, offer new services and manage their networks for optimal performance. MPLS technology enables Internet Service Providers to offer additional services to their customers, scale their current offerings, and exercise more control over their growing networks.

Traditionally, ISPs offered the same level of performance to all of their customers, which is called Best-Effort service; the only differentiation among customers was typically the type of connectivity. In recent years, however, ISPs increasingly need new ways of service differentiation, because new applications have emerged that require other service qualities. Moreover, business users will not use the Internet to transfer their strategically important information if it cannot assure a required quality of service (QoS). On the other hand, ISPs could improve their revenues by applying a differentiated pricing scheme: for a higher level of service, higher rates can be charged. The so-called Differentiated Services (DiffServ) architecture delivers this distinctive service functionality for network service providers. Thus, the combination of DiffServ and MPLS seems to be a very attractive strategy for backbone network providers.

This article is structured as follows. Section 2 details the architecture of MPLS. In Section 3 I describe the main features of DiffServ. Section 4 reviews the synergies between MPLS and DiffServ. Section 5 sets forth some implementation issues, while Section 6 analyzes the simulation results. Conclusions and future work are given in Section 7.

2. Introduction to MPLS

Each router of a connectionless network makes an independent forwarding decision for the packets traveling from one router to the next. The forwarding decision can be thought of as a composition of analyzing the packet's header and running a network layer routing algorithm. Based on the analysis and the results of running the algorithm, each router chooses the next hop for the packet.

In conventional IP forwarding, a particular router will typically consider two packets to be in the same Forwarding Equivalence Class (FEC) if there is some address prefix X in that router's routing tables such that X is the longest prefix match for each packet's destination address. In contrast, in MPLS [7] the assignment of a packet to a particular FEC is done just once, as the packet enters the network. The FEC to which the packet is assigned is encoded as a short, fixed-length value known as a label. When a packet is forwarded to its next hop, the label is sent along with it. At subsequent hops there is no further analysis of the packet's network layer header; instead, the label is used as an index into a table which specifies the next hop and a new label. The old label is replaced with the new one and the packet is forwarded to its next hop (see Figure 1).

Figure 1: Packet labeling (forwarding is based on the IP address at the ingress LSR, on labels inside the MPLS domain, and on the IP address again at the egress LSR)
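
To make the contrast concrete, the following minimal sketch (an illustration added here, not code from the paper; the prefix table, label table and all values are made-up assumptions) compares the two lookups: conventional forwarding performs a longest-prefix match on the destination address at every hop, whereas an LSR simply indexes its label table and swaps the label.

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop (illustrative values only).
prefix_table = {
    ipaddress.ip_network("10.1.0.0/16"): "router-B",
    ipaddress.ip_network("10.1.2.0/24"): "router-C",
}

def conventional_next_hop(dst: str) -> str:
    """Longest-prefix match on the destination address (repeated at every hop)."""
    addr = ipaddress.ip_address(dst)
    matches = [p for p in prefix_table if addr in p]
    return prefix_table[max(matches, key=lambda p: p.prefixlen)]

# Hypothetical label table of one LSR: incoming label -> (next hop, outgoing label).
label_table = {
    17: ("router-C", 42),
}

def mpls_forward(in_label: int):
    """Label lookup and swap: no IP header analysis inside the MPLS domain."""
    next_hop, out_label = label_table[in_label]
    return next_hop, out_label

print(conventional_next_hop("10.1.2.7"))   # router-C (the /24 is the longest match)
print(mpls_forward(17))                    # ('router-C', 42)
```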

2.1. Labels

Because MPLS is designed to run over different link layers, the label format reflects the characteristics of the link layer used. An MPLS shim header can be seen in Figure 2.

Figure 2: MPLS shim header structure (a 32-bit / 4-byte shim header carried between the layer 2 header and the IP packet: Label 20 bits, CoS 3 bits, S 1 bit, TTL 8 bits)

- The Label field (20 bits) carries the actual value of the MPLS label.
- The CoS field (3 bits) can affect the queuing and discard algorithms applied to the packet as it is transmitted through the network.
- The Stack (S) field (1 bit) supports a hierarchical label stack.
- The TTL (time-to-live) field (8 bits) provides conventional IP TTL functionality.
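
As a rough, hypothetical illustration of the 32-bit shim header layout in Figure 2, the sketch below packs and unpacks the four fields with bit operations; it is a toy encoding exercise, not code from the paper.

```python
def pack_shim(label: int, cos: int, s: int, ttl: int) -> int:
    """Pack the shim header fields into one 32-bit word:
    Label (20 bits) | CoS (3 bits) | S (1 bit) | TTL (8 bits)."""
    assert 0 <= label < 2**20 and 0 <= cos < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (cos << 9) | (s << 8) | ttl

def unpack_shim(word: int):
    """Recover (label, cos, s, ttl) from a 32-bit shim header word."""
    return (word >> 12) & 0xFFFFF, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF

word = pack_shim(label=42, cos=5, s=1, ttl=64)
print(hex(word), unpack_shim(word))   # the fields round-trip unchanged
```
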
2.2. Label distribution

A Label Distribution Protocol (LDP) is used between nodes in an MPLS network to establish and maintain the label bindings. For MPLS to operate properly, label distribution information needs to be transmitted reliably, and the LDP messages pertaining to a particular FEC need to be transmitted in sequence.

2.3. Label handling

It is useful if a labeled packet can carry a number of labels, organized as a last-in, first-out stack, referred to as a label stack [8]. (An unlabeled packet can be thought of as a packet whose label stack is empty.) When a labeled packet is forwarded, the so-called Next Hop Label Forwarding Entry (NHLFE) is used. It contains the following information:

- the packet's next hop;
- the operation to perform on the packet's label stack, which can be one of the following:
  - replace the label at the top of the label stack with a specified new label;
  - pop the label stack;
  - replace the label at the top of the stack with a specified new label, and then push one or more new labels onto the label stack.

How is the appropriate NHLFE found? The Incoming Label Map (ILM) maps each incoming label to a set of NHLFEs; it is used when forwarding packets that arrive labeled. The FEC-to-NHLFE map (FTN) maps each FEC to a set of NHLFEs; it is used when forwarding packets that arrive unlabeled but are to be labeled before being forwarded.

To forward a labeled packet, the procedure below is followed:
1. The LSR examines the label at the top of the stack and, using the information in the corresponding NHLFE, determines where to forward the packet.
2. It performs an operation on the packet's label stack.
3. It encodes the new label stack onto the packet and forwards the result.

To forward an unlabeled packet, the procedure below is followed:
1. The LSR analyses the network layer header to determine the packet's FEC.
2. Using the information in the corresponding NHLFE, it determines where to forward the packet.
3. It performs an operation on the packet's label stack.
4. It encodes the new label stack onto the packet and forwards the result.
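
The following hedged sketch shows how the ILM, the FTN and the NHLFEs fit together in the two procedures above. The data structures and table entries are illustrative assumptions, and a plain "push" case is used at the ingress as a simplification of the replace-and-push operation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NHLFE:
    next_hop: str
    op: str                               # "swap", "pop" or "push" (push: ingress simplification)
    out_label: Optional[int] = None
    extra_labels: List[int] = field(default_factory=list)

# Illustrative tables of one LSR (values are made up).
ilm = {17: NHLFE("lsr-B", "swap", out_label=42)}      # Incoming Label Map
ftn = {"10.1.2.0/24": NHLFE("lsr-B", "push", 42)}     # FEC-to-NHLFE map

def apply_op(stack: List[int], e: NHLFE) -> List[int]:
    """Apply the NHLFE's label stack operation (top of stack = last element)."""
    if e.op == "pop":
        return stack[:-1]
    if e.op == "swap":
        return stack[:-1] + [e.out_label] + e.extra_labels
    return stack + [e.out_label] + e.extra_labels     # "push" at the ingress

def forward_labeled(stack: List[int]):
    e = ilm[stack[-1]]                      # 1. map the top label to an NHLFE via the ILM
    return e.next_hop, apply_op(stack, e)   # 2.-3. rewrite the stack and forward

def forward_unlabeled(fec: str):
    e = ftn[fec]                            # 1. FEC (from the IP header) -> NHLFE via the FTN
    return e.next_hop, apply_op([], e)      # 2.-4. build the label stack and forward

print(forward_labeled([17]))              # ('lsr-B', [42])
print(forward_unlabeled("10.1.2.0/24"))   # ('lsr-B', [42])
```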

2.4. Route selection

Route selection refers to the method used for selecting the LSP for a particular FEC. The MPLS architecture [7] supports two options for route selection. Hop-by-hop routing allows each node to choose the next hop for each FEC independently. In an explicitly routed LSP, the LSP ingress or egress specifies several (or possibly all) of the LSRs in the LSP. If a single LSR specifies the entire LSP, the LSP is strictly explicitly routed; if a single LSR specifies only some of the LSRs, the LSP is loosely explicitly routed. The explicit route needs to be specified at the time the labels are assigned, but it does not have to be specified with each IP packet.

3. DiffServ architecture

Traditionally, Internet Service Providers offered the same level of performance to all of their customers, which is called Best-Effort service; the only differentiation among customers was typically the type of connectivity. In recent years, however, ISPs increasingly need new ways of service differentiation, because new applications have emerged that require other service qualities. On the other hand, ISPs could improve their revenues by applying a differentiated pricing scheme: for a higher level of service, higher rates can be charged. The DiffServ architecture delivers this distinctive service functionality for network service providers.

The DiffServ architecture [1] is based on a simple model where traffic entering a network is classified and possibly conditioned at the boundaries of the network and assigned to different behavior aggregates (BAs), with each BA identified by a single DiffServ Code Point (DSCP). Within the core of the network, packets are forwarded according to the Per-Hop Behavior (PHB) associated with the DSCP. The smallest autonomous unit of DiffServ is called a DiffServ domain, where services are assured by identical principles. A domain consists of two types of nodes: boundary (or edge) routers and core routers. Core nodes only forward packets; they do no signaling. This architecture achieves scalability by implementing complex classification and conditioning functions only at network boundary nodes, while core routers store no information about individual flows.

The IETF Differentiated Services Working Group defined the DS field for both IPv4 and IPv6 [5]. In IPv4 the Type of Service (ToS) octet is used for this purpose, and in IPv6 the Traffic Class byte is defined to include the DS field. Six bits of the field are used for the DSCP; the remaining two bits are currently unused and are ignored by DiffServ-compliant nodes.
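
As a quick illustration of the DS field layout just described, this sketch reads and rewrites the six-bit DSCP within the ToS/Traffic Class octet; the example value (the EF code point, 46) is only a sample, not something prescribed by the paper.

```python
def get_dscp(tos_byte: int) -> int:
    """The DSCP occupies the six most significant bits of the DS field."""
    return (tos_byte >> 2) & 0x3F

def set_dscp(tos_byte: int, dscp: int) -> int:
    """Rewrite the DSCP, leaving the two currently unused bits untouched."""
    assert 0 <= dscp < 64
    return (dscp << 2) | (tos_byte & 0x03)

octet = set_dscp(0x00, 46)    # e.g. mark a packet with the EF code point
print(get_dscp(octet))        # 46
```
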
4. Synergies between MPLS and DiffServ

MPLS simplifies the routing process used in IP networks: in an MPLS domain, when a stream of data traverses a common path, a Label Switched Path (LSP) can be established using MPLS signaling protocols. A packet is typically assigned to a FEC only once, when it enters the network at the ingress edge Label Switch Router, where it receives a label identifying its FEC and is transmitted downstream. At each LSR along the LSP, only the label is used to forward the packet to the next hop.

In a Differentiated Services domain, all the IP packets crossing a link and requiring the same DiffServ behavior are said to constitute a behavior aggregate. At the ingress node of the DiffServ domain, packets are classified and marked with a DiffServ Code Point that corresponds to their behavior aggregate. At each transit node, the DSCP is used to select the Per-Hop Behavior that determines the queuing and scheduling treatment and, in some cases, the drop probability for each packet [1].

From this, one can see the similarities between MPLS and DiffServ: an MPLS LSP or FEC is similar to a DiffServ BA or PHB, and the MPLS label is similar to the DiffServ Code Point in some ways. The difference is that MPLS is about routing (switching), while DiffServ is rather about queuing, scheduling and dropping. Because of this, MPLS and DiffServ are orthogonal: they do not depend on each other, and they are two different ways of providing higher quality of service. It is therefore possible to have both architectures working at the same time in a single network, but it is also possible to have only one of them, or neither, depending on the choice of the network operator.

The LSR at the ingress edge of the LSP performs an additional role: it controls which traffic is permitted to use a given LSP. The classifier element needed to select traffic eligible for that LSP is very similar in function to the one required for DiffServ traffic conditioning. If the DiffServ and MPLS domains are identical, the same function within the ingress node may be used to perform both the DiffServ traffic conditioning and the MPLS eligibility determination. Furthermore, in many cases an LSP carries an aggregation of many customer flows within the network, just as a DiffServ behavior aggregate generally carries multiple micro-flows.
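
To make the last point concrete, here is a small illustrative sketch (my own, not the paper's implementation) of an ingress node whose single classification step yields both the DiffServ marking and the LSP/FEC choice; the rule table, ports, LSP names and DSCP values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class IngressDecision:
    dscp: int    # DiffServ marking (behavior aggregate)
    fec: str     # MPLS forwarding equivalence class / LSP to use

# Hypothetical classification rules of an ingress node that acts as both a
# DiffServ boundary router and an MPLS ingress LSR.
rules = {
    ("udp", 5060): IngressDecision(dscp=46, fec="lsp-voice"),   # voice -> EF
    ("tcp", 80):   IngressDecision(dscp=10, fec="lsp-web"),     # web   -> AF11
}

def classify(proto: str, dst_port: int) -> IngressDecision:
    """One lookup drives both DiffServ conditioning and LSP eligibility."""
    return rules.get((proto, dst_port), IngressDecision(dscp=0, fec="lsp-best-effort"))

print(classify("udp", 5060))   # IngressDecision(dscp=46, fec='lsp-voice')
```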

5. Implementation issues

The starting point of this work was the following. Murphy has developed a DiffServ and an MPLS patch for ns [4]. [9] describes a flexible solution for supporting DiffServ over MPLS networks, but the method described there had not yet been verified. [3] presents recent advances in MPLS and DiffServ integration, focusing on E-LSPs and L-LSPs; the topology and simulated traffic described there are idealistic, as the authors themselves underline: "Future work should concentrate on simulations with more realistic topologies and different sources."

The objective of this project was therefore to highlight the necessity of integrating DiffServ and MPLS. The motivation is the fast re-routing and traffic engineering ability of MPLS combined with the guaranteed quality of service offered by DiffServ. The way to reach this target is detailed below.

When an MPLS-capable router sends a packet to its next hop, it sends the label along with it. Subsequent LSRs in the network do not need to analyze the packet header; they simply read the MPLS label, which indexes a forwarding table maintained by each LSR that specifies the next hop. Now, when packets that have been marked with DiffServ code points arrive at an MPLS network, there needs to be a way to transfer the information carried by the code points onto the MPLS label. This is necessary if MPLS is to make decisions that respect the differentiated service requirements for which the packets have been marked, because with MPLS in the network, IP headers are not examined while the packets are MPLS-labeled. Packets therefore cannot be differentiated based on their DSCPs, since the DSCP is part of the IP header, and DiffServ must be provided in a different way in order to make MPLS DiffServ-capable.

[9] defines two methods of placing the DSCP information into the MPLS header. The first one uses the 3-bit EXP field of the MPLS header (EXP-field-inferred LSP, or E-LSP). This way a maximum of eight behavior aggregate classes can be distinguished, and an EXP value is mapped to a full PHB description: queue/scheduling and drop precedence. If there are more than eight service classes, the RFC proposes another LSP type, in which the DSCP is mapped to a <Label, EXP> pair and the label itself conveys the queue and scheduling information (Label-inferred LSP, or L-LSP).

In the middle of an LSP, BA classification is based on the EXP and/or label fields of the MPLS header rather than on the DSCP stored in the IP header. Buffer management and queue scheduling are identical with or without MPLS, so the per-hop behavior of the packets is also identical. This is desirable for two reasons. First, it means that ISPs that provide QoS with MPLS can easily interoperate with ISPs that provide QoS without MPLS. Second, whether or not MPLS is involved in providing QoS is transparent to end users.
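
The two encodings can be pictured with the following sketch; the concrete DSCP-to-EXP and DSCP-to-<label, EXP> assignments are illustrative assumptions, not values mandated by the paper or the RFC.

```python
# E-LSP: the DSCP (up to 8 classes) is mapped onto the 3-bit EXP field.
dscp_to_exp = {46: 5, 10: 1, 0: 0}            # EF, AF11, best effort (example mapping)

def e_lsp_encap(dscp: int, lsp_label: int):
    """One label per path; the PHB is inferred from EXP alone."""
    return {"label": lsp_label, "exp": dscp_to_exp.get(dscp, 0)}

# L-LSP: the DSCP maps to a <label, EXP> pair; the label conveys the
# queue/scheduling class, while EXP conveys the drop precedence.
dscp_to_label_exp = {46: (200, 0), 10: (201, 1), 12: (201, 2)}   # example values

def l_lsp_encap(dscp: int):
    label, exp = dscp_to_label_exp[dscp]
    return {"label": label, "exp": exp}

print(e_lsp_encap(46, lsp_label=100))   # {'label': 100, 'exp': 5}
print(l_lsp_encap(10))                  # {'label': 201, 'exp': 1}
```
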
5.2. Implementation

The implementation was done using ns-2 version 2b18 [6], which already includes MPLS and DiffServ modules [4]. The first step was to develop a DS-MPLS module by extending and modifying the MPLS and DS modules.

5.2.1. Modifications in the DS module

The main point in modifying the DS module was to adapt it to the MPLS architecture. MPLS differentiates packets belonging to different flows according to their labels, so in the modified module packet differentiation is based on the label and EXP fields of the shim header. Quality of service parameters can be defined on a per-microflow basis, where a microflow is all the packets traveling between a given source and destination node. In our case these microflows represent the cumulative traffic of a neighboring domain (for example, a network operated by another service provider). If an LSR has flow-addition capability, the newly merged flow must be given the sum of the resources reserved for the individual flows (a small sketch of this bookkeeping follows Section 5.2.2). It was also necessary to develop dynamic resource reservation and release, in order to avoid unnecessary resource reservation in the case of flow addition or reroute.

5.2.2. Modifications in the MPLS module

The modified MPLS module handles the two LSP types (E-LSP and L-LSP) and is also responsible for configuring the DiffServ links. The following points were considered during the modification of the module:

- In a DS-MPLS network only predefined LSPs may exist, in order to observe the previously defined quality of service parameters.
- The signaling messages used for creating and deleting LSPs must be an extension of a well-known and commonly used protocol.
- The packets transferred on an LSP have to have strictly defined PHBs.
- It should be possible to assign alternative paths to an LSP, to be used in case of link failure.
- One LSP should be able to carry more than one microflow.

These restrictions narrow down the possibilities for creating LSPs.
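
The flow-addition requirement of Section 5.2.1 can be sketched as simple bookkeeping (an illustration with made-up bandwidth figures, not the ns-2 module itself): when microflows are merged onto an LSP, the LSP must hold the sum of their reservations, and removing a flow releases its share.

```python
class LspReservation:
    """Toy resource bookkeeping for flow addition/removal on one LSP (Mbps)."""

    def __init__(self):
        self.flows = {}                    # flow id -> reserved bandwidth

    def add_flow(self, flow_id: str, bandwidth: float):
        self.flows[flow_id] = bandwidth    # flow addition reserves its share

    def remove_flow(self, flow_id: str):
        self.flows.pop(flow_id, None)      # dynamic release on reroute/teardown

    @property
    def reserved(self) -> float:
        """The LSP must hold the sum of the reservations of its microflows."""
        return sum(self.flows.values())

lsp = LspReservation()
lsp.add_flow("customer-A", 2.0)
lsp.add_flow("customer-B", 3.5)
print(lsp.reserved)            # 5.5
lsp.remove_flow("customer-A")
print(lsp.reserved)            # 3.5
```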

5.2.3. Backbone network implementation

I have chosen a backbone network topology because the integration of the two architectures can best be observed in this kind of environment. A backbone network typically provides multiple path and link protection in order to achieve a high level of reliability. The network topology used is shown in Figure 3; backbone nodes are drawn in blue, data sources in red, and destinations in green.

Figure 3: Backbone network topology (25 nodes, numbered 0-24)

6. Result analysis

6.1. Methodology verification

The following three points briefly describe the verification of the applied methodology. The aim was to verify straightforward relationships, for example that doubling the link delay doubles the LSP creation time.

- Correspondence between LSP creation time and link delay: it was verified that if the delay is two or three times greater, the mean LSP creation time is also two or three times larger.
- Correspondence between LSP creation time and the number of LSPs on a path: it was verified that the LSP creation time is independent of the number of LSPs on a given path.
- Correspondence between LSP creation time and the number of common nodes in an LSP: it was verified that the LSP creation time is independent of the number of common nodes in an LSP.

6.2. Usage verification

This section presents the achieved goal: the results underline the advantages of MPLS and DiffServ integration. The simulation results are presented as bandwidth graphs, with bandwidth in Mbps on the Y axis and time in seconds on the X axis. The topology applied is the one in Figure 3.

Figure 4 shows a case where only UDP traffic is present, from node 0 to node 24, while Figure 5 displays pure TCP traffic traveling from node 20 to node 23. The TCP bandwidth is measured at the source, while the UDP bandwidth is measured at the destination. Figure 6 shows the result of mixing UDP and TCP traffic between nodes 1 and 6: TCP is negatively affected by the increase of UDP traffic, because the TCP source enters its congestion control phase when it senses congestion in the network.

Figure 4: Pure UDP traffic (bandwidth [Mbps] versus time [s])

Figure 5: Pure TCP traffic (bandwidth [Mbps] versus time [s])

Figure 6: TCP with UDP traffic (bandwidth [Mbps] versus time [s])

Figure 7 shows the effect of the MPLS-based rerouting scheme proposed by Haskin [2]. There is a link failure between nodes 1 and 6, starting at 5.5 s and ending at 15 s. When the link failure occurs, MPLS routes the traffic through the preconfigured alternative LSP, which makes it possible to maintain a continuous data flow even in the presence of a link failure in the network.
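
The protection behavior can be sketched schematically as follows. This is a simplified illustration (in Haskin's actual scheme traffic is first reversed back along the LSP towards the ingress before taking the alternative path), and the node paths and the failed link are assumptions rather than values taken from the simulation scripts.

```python
# Hypothetical primary and backup LSPs between nodes 1 and 6 (illustrative paths).
primary_lsp = [1, 4, 5, 6]
backup_lsp = [1, 2, 5, 12, 6]

def links(path):
    """The set of directed links a path uses."""
    return set(zip(path, path[1:]))

def active_lsp(failed_links):
    """Use the primary LSP unless one of its links has failed."""
    if links(primary_lsp) & failed_links:
        return backup_lsp          # switch to the preconfigured alternative LSP
    return primary_lsp

print(active_lsp(set()))           # [1, 4, 5, 6]      (no failure)
print(active_lsp({(5, 6)}))        # [1, 2, 5, 12, 6]  (link 5-6 down, rerouted)
```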

Figure 7: Link failure (bandwidth [Mbps] versus time [s]; traffic is rerouted during the failure between 5.5 s and 15 s)

7. Conclusion and future work

A study of the advantages of MPLS and DiffServ integration was presented. The advantages were highlighted using a simulation methodology; the simulation focused on MPLS-based rerouting. The simulation experiments underline the benefits of the integration, because it made it possible to transport a continuous data flow with guaranteed quality of service even in the case of a link failure. Simulation results were presented in the form of bandwidth graphs.

In order to demonstrate the necessity of integrating MPLS and DiffServ more clearly, I plan to run further simulations and to analyze the results statistically. In these simulations I plan to use different network topologies and different traffic sources, such as HTTP web traffic. Finally, I consider it worth examining the MPLS signaling processing overhead and the memory overhead at the LSRs for maintaining per-LSP state information.

References:

[1] S. Blake et al.: An Architecture for Differentiated Services, IETF RFC 2475, December 1998.
[2] D. Haskin: A Method for Setting an Alternative Label Switched Paths to Handle Fast Reroute, draft-haskin-mpls-fast-reroute-05, work in progress, May 2001.
[3] R. Law, S. Raghavan: DiffServ and MPLS Concepts and Simulation.
[4] S. Murphy: The ns MPLS/DiffServ patch, http://www.teltec.dcu.ie/~murphys/nswork/mpls-diffserv/index.html
[5] K. Nichols et al.: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, IETF RFC 2474, December 1998.
[6] ns-2, The Network Simulator, http://www.isi.edu/nsnam
[7] E. Rosen, A. Viswanathan, R. Callon: Multiprotocol Label Switching Architecture, IETF RFC 3031, January 2001.
[8] E. Rosen, Y. Rekhter, D. Tappan, G. Fedorkow, D. Farinacci, A. Conta: MPLS Label Stack Encoding, IETF RFC 3032, January 2001.
[9] L. Wu, B. Davie, S. Davari, P. Vaananen, R. Krishnan, P. Cheval, J. Heinanen: Multi-Protocol Label Switching (MPLS) Support of Differentiated Services, IETF RFC 3270, May 2002.
[10] www.ietf.org

Author: Tímea Dreilinger, MSc, Budapest University of Technology and Economics, Department of Telecommunication and Telematics, High Speed Networks Laboratory, H-1117 Budapest, Magyar Tudósok Körútja 2. Phone: 463 4391, Fax: 463 3107, e-mail: dreiling@ttt-atm.ttt.bme.hu