Multiprotocol Label Switching (MPLS)
Petr Grygárek
Why MPLS?
- Integrates various traditional applications on a single service provider platform: Internet, L3 VPN, L2 VPN, L2 virtual P2P lines, voice (QoS, fast reconvergence); wide range of traffic-engineering and node/link protection options
- Provides greater flexibility in the delivery of (new) transport services: new routing services may be added without changing the forwarding paradigm (multiple VRF-based VPNs with address overlap, traffic engineering, ...)
- Improves the scalability of the network layer: eliminates huge IP routing tables by establishing a forwarding hierarchy
- Improves the price/performance of network-layer routing: the MPLS switching algorithm can be simpler and faster than traditional IP routing (longest match); processor-intensive packet analysis and classification happens only once, at the ingress edge; however, MPLS should no longer be considered primarily a method to make routers faster
- Integrates IP routing with VC-based networks (like ATM)
Technology in Brief
- Inserts a label-based forwarding layer under traditional network-layer routing: label forwarding + label swapping, similar to ATM/FR
- Forwarding tables (switching paths) may be constructed and uploaded by various mechanisms, which gives enormous flexibility: switching tables constructed using IP routing protocol(s) or some other mechanism
- Completely decouples data-plane forwarding from path determination (control plane): packet forwarding does not depend only on routing protocols that search for the shortest path for a particular L3 routed protocol based on a particular IGP metric
- Any type of L3 or L2 traffic can be forwarded
- Integrates advantages of the traditional packet-switching and circuit-switching worlds
Frame Mode and Cell Mode
- Frame mode: frame switching, used today in service providers' and other core networks; encapsulates IP or any other payload (even L2 frames)
- Cell mode: used to integrate connectionless packet forwarding with connection-oriented networks (ATM); mostly historical, not used anymore
MPLS Position in the OSI RM
- MPLS operates between the link and network layers: can deal with L3 routing/addressing when establishing virtual paths (LSPs), uses L2 labels for fast switching
- Additional shim header placed between the L2 and L3 headers; its presence indicated in the L2 header (Ethernet EtherType, PPP Protocol field, Frame Relay NLPID): 0x8847 unicast, 0x8848 multicast
- Inherent labels of some L2 technologies: ATM VPI/VCI, Frame Relay DLCI, optical switching lambdas, ...
Label-Based Packet Forwarding
- Packet marked with labels at the ingress MPLS router (label imposition)
  - Various rules can be used to impose labels: destination network prefix, QoS, policy routing (traffic engineering), VPNs, ...
  - Labels in general imply both routes (IP destination prefixes) and service attributes (QoS, TE, VPN, ...)
  - Multiple labels can be imposed (label stack); utilized by many applications (MPLS/VPN, hierarchical MPLS forwarding over multiple clouds, segment routing)
- Packet quickly forwarded according to labels through the MPLS core: uses only label swapping, no IP routing; IP routing information may be used only to build forwarding tables, not for actual (potentially slow) IP routing
- Label removed at the egress router; packet forwarded further using a standard L3 IP routing table lookup
  - In reality, the penultimate hop removes the topmost label to avoid a double lookup on the egress device
  - An inner label can imply the destination VRF/VSI
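The swap-and-forward behavior described above, including penultimate-hop popping, can be sketched as a toy simulation. All router names and label values here are invented for illustration; only the reserved implicit-null value (3) comes from the MPLS specification.

```python
# Toy simulation of label-switched forwarding with penultimate-hop
# popping (PHP). Router names and labels are hypothetical.
IMPLICIT_NULL = 3  # reserved label value: "pop the label, send unlabeled"

# Per-router LFIB: incoming label -> (outgoing label, next hop)
lfibs = {
    "P1": {100: (200, "P2")},
    "P2": {200: (IMPLICIT_NULL, "PE-egress")},  # penultimate hop pops
}

def forward(router, label):
    """One label-switch operation: returns (new label or None, next hop)."""
    out_label, next_hop = lfibs[router][label]
    if out_label == IMPLICIT_NULL:
        return None, next_hop   # PHP: packet continues unlabeled
    return out_label, next_hop  # ordinary label swap

label, hop = 100, "P1"
while label is not None:
    label, hop = forward(hop, label)
print(hop)  # the egress PE receives the packet already unlabeled
```

Note that no router in the loop ever consults an IP routing table; only the egress device (outside this sketch) performs an L3 lookup.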
Components of the MPLS Architecture
- Forwarding component (data plane): brute-force forwarding using the Label Forwarding Information Base (LFIB)
- Control component (control plane); for MPLS-based IP routing using LDP:
  - Creates and updates label bindings (LFIB): <IP_prefix, label>
  - LSR has to participate in a routing protocol (IGP or static routing) and/or some other signalling mechanism (including ATM switches in MPLS cell mode)
  - Label assignments are distributed to other MPLS peers using some sort of label distribution protocol (LDP)
- Control and forwarding functions are separated
MPLS Devices: Label Switch Router (LSR)
- Any router/switch participating in label assignment and distribution that supports label-based packet/cell switching
- LSR classification:
  - Core LSR (P, Provider)
  - Edge LSR (PE, Provider Edge)
  - (Often the same kind of device, but configured differently)
  - Frame-mode LSR: MPLS-capable router with Ethernet interfaces
  - Cell-mode LSR: ATM switch with added functionality (control software)
Functions of an Edge LSR
- Any LSR on the MPLS domain edge, i.e. with non-MPLS neighboring devices
- Performs label imposition and disposition
- Packets classified and labels imposed; classification based on routing and policy requirements (traffic engineering, policy routing, QoS-based routing)
- Information in L2/L3 (and above) headers inspected only once, at the edge of the MPLS domain
Forwarding Equivalence Class (FEC)
- Packets classified into FECs at the MPLS domain edge LSR according to unicast routing destination, QoS class, VPN, multicast group, traffic-engineered traffic class, L2 pseudowire traffic, ...
- A FEC is a class of packets to be MPLS-switched the same way
Label Switched Path (LSP)
- Sequence of LSRs between ingress and egress (edge) LSRs + sequence of assigned labels (local significance)
- Unidirectional (!): the reverse path can take a completely different route
- One LSP per forwarding equivalence class
- May diverge from the IGP shortest path: paths established by traffic engineering using explicit routing and label-switched-path tunnels
Upstream and Downstream Neighbors
- Defined from the perspective of some particular LSR, relative to a particular destination (and FEC)
- The infrastructure routing protocol's next-hop address typically determines the downstream neighbor for IP-over-MPLS applications
- The upstream neighbor is closer to the data source, whereas the downstream neighbor is closer to the destination network
Label and Label Stack
- Label format (and length) depends on the particular L2 technology
- Labels have local (per-link) significance; each LSR creates its own label mappings
  - Although not a rule, the same label is often propagated on different links for the same destination
- Multiple labels may be imposed, forming the label stack
  - Bottom of stack indicated by the S bit
  - Label stacking enables special MPLS applications (VPNs, segment routing, etc.)
- Packet switching is always based on the label at the top of the stack
MPLS Header
- Between the L2 and L3 headers; MPLS header presence indicated by EtherType/PPP Protocol ID/Frame Relay NLPID
- 4 octets (32 bits):
  - 20 bits: label value
  - 3 bits: Exp (experimental), used for QoS today
  - 1 bit: S bit, indicates bottom of stack
  - 8 bits: MPLS TTL (Time to Live)
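The 32-bit shim header layout above (label, EXP, S, TTL, in that bit order per RFC 3032) can be exercised with a short pack/unpack sketch; the field values used below are arbitrary examples.

```python
import struct

# Pack/unpack the 32-bit MPLS shim header.
# Bit layout (RFC 3032): label(20) | EXP(3) | S(1) | TTL(8)

def pack_mpls(label, exp, s, ttl):
    word = (label << 12) | (exp << 9) | (s << 8) | ttl
    return struct.pack("!I", word)  # network byte order

def unpack_mpls(data):
    (word,) = struct.unpack("!I", data)
    return ((word >> 12) & 0xFFFFF,  # label
            (word >> 9) & 0x7,       # EXP
            (word >> 8) & 0x1,       # S (bottom of stack)
            word & 0xFF)             # TTL

hdr = pack_mpls(label=17, exp=5, s=1, ttl=64)
print(unpack_mpls(hdr))  # (17, 5, 1, 64)
```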
MPLS Operation: Basic IP Routing
Control plane:
- Standard IP routing protocol used in the MPLS routing domain (OSPF, IS-IS, ...)
- <IP prefix, label> mapping created by the egress router, i.e. the router at the MPLS domain edge used as the exit point for that IP prefix
- Label distribution protocols distribute label bindings for IP prefixes between adjacent neighbors in the direction of potential sources; a label always has local significance
Data plane:
- Ingress LSR receives IP packets, performs classification, imposes a label, and forwards the labeled packet into the MPLS core
- Core LSRs switch labeled packets based on the label value
- Egress router removes the label before forwarding the packet out of the MPLS domain, then performs a normal L3 routing table lookup
MPLS and IP Routing Interaction in an LSR
[Diagram: control plane holds the IP routing process with its IP routing table (routing information exchange via a routing protocol) and the MPLS signalling protocol (label bindings exchange); in the data plane, incoming unlabeled packets are forwarded via the IP routing table and incoming labeled packets via the label forwarding table]
Interaction of Neighboring MPLS LSRs
[Diagram: two neighboring LSRs exchange routing information between their IP routing processes and label bindings between their MPLS signalling protocols; labeled packets flow between their label forwarding tables]
Operation of an Edge LSR
[Diagram: extends the LSR model with an IP forwarding table for resolving recursive routes and a label disposition + L3 lookup stage; handles incoming and outgoing packets both labeled and unlabeled, with routing information and label bindings exchanged in the control plane]
Penultimate Hop Behavior
- The label at the top of the stack is removed not by the egress router at the MPLS domain edge (as might be expected), but by its upstream neighbor (the penultimate hop)
- On the egress router, the packet could not be label-switched anyway: the egress router has to perform an L3 lookup to find a more specific route (commonly, the egress router advertises a single label for a summary route)
- A label-based lookup plus disposition of the label imposed for the egress router would introduce unnecessary overhead
- For that reason, the upstream neighbor of the egress router pops the label and sends the packet to the egress router unlabeled
- The egress LSR requests the popping of the label through the label distribution protocol: it advertises the implicit-null label for the particular FEC
- In some cases a helper 2nd-level label is added if the penultimate-hop device cannot handle the passenger payload header type (e.g. 6PE)
Label Bindings Distribution
Label Distribution Protocol Functionality
- Used to advertise <IP_prefix, label> bindings
  - Still not available for IPv6 on many platforms
- Used to create the Label Information Base (LIB) and the Label Forwarding Information Base (LFIB)
  - LIB maintains ALL prefixes and labels advertised by individual LDP neighbors
  - FIB (HW copy of the routing table) may contain the label to be imposed for a particular destination network
  - LFIB maintains only labels advertised by the next hops for individual prefixes, i.e. those actually used for label switching; the next hop is typically determined by the traditional IGP
  - LFIB is used for actual label switching; the LIB maintains labels which may become useful if IGP routes change
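The LIB-versus-LFIB relationship above can be sketched as a table derivation: the LIB keeps every neighbor's binding, while the LFIB keeps only the binding learned from the current IGP next hop. Prefixes, neighbor names, and labels below are hypothetical.

```python
# Sketch: deriving the LFIB from the LIB and the IGP's next-hop choice.
# All prefixes, neighbors, and labels are invented for illustration.

lib = {  # prefix -> {advertising LDP neighbor: label it advertised}
    "10.1.0.0/16": {"R2": 201, "R3": 301},
    "10.2.0.0/16": {"R2": 202, "R3": 302},
}
igp_next_hop = {"10.1.0.0/16": "R2", "10.2.0.0/16": "R3"}

def build_lfib(lib, igp_next_hop):
    """Keep only the binding from each prefix's IGP next hop."""
    return {p: (igp_next_hop[p], lib[p][igp_next_hop[p]]) for p in lib}

lfib = build_lfib(lib, igp_next_hop)
print(lfib)  # {'10.1.0.0/16': ('R2', 201), '10.2.0.0/16': ('R3', 302)}
```

If the IGP later switches 10.1.0.0/16 to R3, the replacement label (301) is already sitting in the LIB, which is exactly why liberal retention speeds up convergence.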
Label Retention Modes
- Liberal mode (mostly used in frame mode): LSR retains labels for a FEC from all neighbors; requires more memory and label space; improves latency after IP routing paths change
- Conservative mode: only labels from the next hop for an IP prefix are maintained (next hop determined by the IP routing protocol); saves memory and label space
Label Distribution Modes
- Independent LSP control: LSR binds labels to FECs and advertises them whether or not it has itself received a label from its next hop for that FEC; most common in MPLS frame mode; LDP is a typical example of this approach
- Ordered LSP control: LSR only binds and advertises a label for a FEC if it is the egress LSR for that FEC, or it has received a label binding from the next-hop LSR; RSVP-based signalling also falls into this category
Label Allocation
- Labels are unique per device / per interface
- For all or just for specified prefixes
- Label range may be explicitly specified, even for different types of service
- A separate label range per physical device may simplify troubleshooting
Protocols for Label Distribution
- Label Distribution Protocol (LDP): IETF standard, TCP port 646
- RSVP-TE: used for MPLS traffic engineering (or explicit control of transport paths)
- BGP: between PE routers in various types of MPLS VPNs
- PIM: enables MPLS-based multicast
- Tag Distribution Protocol (TDP): Cisco proprietary, obsolete LDP predecessor, TCP port 711
- Label bindings are exchanged between neighboring routers; in special cases also between non-neighboring routers (targeted LDP session), e.g. MPLS-based pseudowires, Martini signalling
Label Distribution Protocol (LDP): Message Types
- Discovery messages (hellos), UDP/646: used to discover and continually check for the presence of LDP peers
- Once a neighbor is discovered, an LDP session is established over TCP/646
- Session messages: establish, maintain, and terminate a session
- Label mapping advertisement messages: create, modify, delete
- Error notification messages
- LDP Neighbor ID: the corresponding address must be reachable from the LDP peer, i.e. visible in the IGP
Frame-Mode Label Distribution (LDP)
- Unsolicited downstream: labels distributed automatically to upstream neighbors; the downstream LSR advertises labels for particular FECs to its upstream neighbors
- Independent control of label assignment: a label is assigned as soon as a new IP prefix appears in the IP routing table (may be limited by an ACL); the mapping is stored in the LIB
  - An LSR may send (switch) labeled packets to the next hop even if the next hop itself does not have a label to switch that FEC further
  - In some cases it may forward the packet based on traditional IP routing, but this is a problem if there are inner MPLS labels
- Liberal retention mode: all received label mappings are retained
MPLS Applications
Decoupling the forwarding decision from the IP header allows for better flexibility and new applications
Some Popular MPLS Applications
- BGP-free core
- 6PE
- Carrier Supporting Carrier
- MPLS traffic engineering
- L3 MPLS VPN (IPv4 & IPv6)
- L2 pseudowires and VPLS
- Segment routing
- Various SDN multitenant transport models, including MPLS over GRE
- Integration of IP with ATM or other connection-oriented networks (obsolete today)
BGP-Free Core
- Design of a transit AS without BGP running on transit (internal) routers
- BGP sessions between PE routers only: full mesh or using route reflector(s)
- P routers know only routes to networks inside the core, including PE loopback interfaces
- LDP creates LSPs to the individual networks in the core (especially to PEs' loopbacks); explicit signalling of LSPs using RSVP can also be used
- PEs' loopbacks are used as the next hops of BGP routes passed between PE routers
6PE (1)
- Interconnection of IPv6 islands over an MPLS non-IPv6-aware core
- PE routers have to support both IPv6 and IPv4, but P routers do not need to be upgraded (can be MPLS + IPv4 only)
- Outer label identifies the destination PE router loopback (IPv4 BGP next hop); inner label identifies the particular IPv6 route
  - The inner label serves as an 'index' into the egress PE's IPv6 routing table
- IPv6 prefixes plus associated (inner) labels are passed between PE routers through MP-BGP (over TCP/IPv4)
- The inner label is needed because of PHP, even though the egress PE needs to do an IPv6 routing-table lookup anyway: the penultimate hop cannot handle the now-exposed IPv6 header
- Technical implementation: the inner label is not unique per route; one of 16 reserved labels is chosen and an L3 IPv6 lookup is done on the egress router (a single reserved value is not enough because of load balancing)
6PE (2)
- BGP Next Hop attribute is the IPv4-mapped IPv6 address of the egress 6PE router
- Only LDP for IPv4 is required (LDP for IPv6 not implemented yet)
- Does not support multicast traffic
- Only a proposed standard, RFC 4798 (Cisco, 2007), but implemented by multiple vendors
- See http://www.netmode.ntua.gr/presentations/6pe%20-%20ipv6%20ov for further details
6VPE
- VRF-aware 6PE: allows building MPLS IPv6 VPNs on an IPv4-only MPLS core
- See http://sites.google.com/site/amitsciscozone/home/important-tips/mpls-wiki/6vpe-ipv6-over-mpls-vpn for a configuration example (Cisco)
Carrier Supporting Carrier (1)
- Hierarchical application of the label-switching concept
- An MPLS super-carrier provides connectivity between regions (super-carrier's POPs) for other MPLS-based customer carriers
- Concept of MPLS VPN in the super-carrier's network: CSC-P, CSC-PE, CSC-CE
- Enables global MPLS/VPN (over multiple MPLS-based service providers' networks)
Carrier Supporting Carrier (2)
- Utilizes a label stack with multiple labels: the sub-carrier's labels are untouched during transport over the super-carrier
- Customer carriers do not exchange their customers' routes with the super-carrier, just the loopback interfaces of PE routers
- Good scalability
Segment Routing
- Used for explicit routing-path specification, including service insertion
- Labels in the MPLS label stack specify the exact hops on the path
  - Inserted by the source edge device, in a strict or loose way
  - A service instance (like FW, IPS, ...) can be inserted into the path this way
- Labels are generated (by individual LSRs) for each individual link and for each individual segment-routing MPLS LSR
- Segments between non-neighboring LSRs explicitly specified by device labels are traversed based on the IGP
MPLS Traffic Engineering
MPLS TE Goals
- Minimize network congestion, improve network performance
- Spread flows over multiple paths, i.e. diverge them from the shortest path calculated by the IGP
- More efficient usage of network resources (bandwidth of links on suboptimal paths)
- Completely hidden from customers' IP routing in the underlying infrastructure
MPLS TE Principle
- The originating LSR (headend) sets up a TE LSP to the terminating LSR (tailend) through an explicitly specified path
  - Defined by a sequence of intermediate LSRs, as either a strict or a loose explicit route; a dynamic (IGP-based) path is also an option
- The LSP is calculated automatically using constraint-based routing, or manually using some sort of central management tool in large networks
MPLS-TE Mechanisms
- Link information distribution
- Path computation (constrained SPF) or manual specification of the list of hops
- LSP signalling: RSVP-TE accomplishes label assignment during MPLS tunnel creation; signalling is needed even if path calculation is performed manually
- Selection of the traffic that will take the TE LSP: by QoS class or another policy-routing criterion (static routes, policy routing, autoroute, forwarding adjacency (OSPF), ...)
Link Information Distribution
- Utilizes extensions of OSPF or IS-IS to distribute links' current states and attributes
  - OSPF LSA type 10 (opaque)
  - Maximum bandwidth, reservable bandwidth, available bandwidth, flags (aka attributes or colors), TE metric
- Constraint-based routing takes links' current states and attributes into account when calculating routes
  - Constraint-based SPF calculation first excludes links that do not comply with the required LSP parameters (bandwidth, affinity bits ("link colors"), ...)
  - Uses the TE metric instead of the IGP metric (if defined on individual links)
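The prune-then-SPF logic described above can be condensed into a small sketch: filter out links that cannot satisfy the requested bandwidth, then run ordinary Dijkstra on the TE metric over what remains. The topology, bandwidths, and metrics below are invented for illustration.

```python
import heapq

# Minimal constrained-SPF sketch. Links: (from, to, available_bw_mbps, te_metric).
links = [
    ("A", "B", 100, 1), ("B", "D", 100, 1),
    ("A", "C", 1000, 5), ("C", "D", 1000, 5),
]

def cspf(links, src, dst, required_bw):
    """Prune links below required_bw, then Dijkstra on the TE metric."""
    graph = {}
    for u, v, bw, metric in links:
        if bw >= required_bw:  # constraint pruning happens first
            graph.setdefault(u, []).append((v, metric))
            graph.setdefault(v, []).append((u, metric))
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, metric in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + metric, nbr, path + [nbr]))
    return None  # no path satisfies the constraints

print(cspf(links, "A", "D", 50))   # (2, ['A', 'B', 'D']): cheap path qualifies
print(cspf(links, "A", "D", 500))  # (10, ['A', 'C', 'D']): only big links fit
```

This is exactly why a TE LSP may diverge from the IGP shortest path: with a 500 Mbps demand, the metric-optimal A-B-D path is excluded before SPF even runs.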
RSVP Signalling
- Resource Reservation Protocol (RFC 2205) was originally developed in connection with IntServ, but should be understood as a completely independent signalling protocol
- Reserves resources for unidirectional (unicast/multicast) L4 flows
- Soft state: must be refreshed periodically
- May be used with MPLS-TE to signal a DiffServ QoS PHB over the path
RSVP Messages
- Message header (message type): Path, Resv, ResvConfirm, PathTeardown, ResvTeardown, PathErr, ResvErr
- Variable number of objects of various classes: TLVs, including sub-objects
- Support for message authentication and integrity checking
Basic RSVP Operation
- PATH message travels from the sender to the receiver(s): from TE tunnel headend to tailend in our case
  - Allows intermediate nodes to build soft-state information regarding a particular session
  - Includes flow characteristics (flowspec)
- RESV message travels from the receiver interested in the resource reservation towards the sender: from TE tunnel tailend back to the headend
  - Actually causes the reservation of intermediate nodes' resources
  - Provides labels to upstream routers
- Soft state has to be periodically renewed
LSP Preemption
- Support for creation of LSPs with different priorities, with a preemption option
  - Setup and holding priority; the setup priority of a new LSP is compared with the holding priority of existing LSPs
  - 0 (best) to 7 (worst)
- Preemption modes:
  - Hard: just tears the preempted LSP down
  - Soft: signals pending preemption to the headend of the existing LSP (PathTear/ResvTear) to give it an opportunity to reroute traffic
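The priority comparison above reduces to a one-line rule; this sketch encodes one plausible reading (a new LSP preempts only if its setup priority is numerically better, i.e. lower, than the existing LSP's holding priority), with example values that are purely illustrative.

```python
# Sketch of the setup-vs-holding priority check (0 = best, 7 = worst).
# Assumption: strict inequality, i.e. equal priority does not preempt.

def can_preempt(new_setup_priority, existing_holding_priority):
    """True if a new LSP may preempt an existing one."""
    return new_setup_priority < existing_holding_priority

print(can_preempt(0, 7))  # True: top-priority LSP displaces a best-effort one
print(can_preempt(7, 0))  # False: low-priority setup cannot touch a pinned LSP
```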
LSP Path Calculation in a Multiarea Environment
- Splitting the network into multiple areas limits state-information flooding
- The headend specifies the path used to route LSP setup requests as a list of ABRs (loose routing)
- Each ABR calculates and reserves the path over its connected area and requests the next ABR on the path to take care of the next section
- In practice, service providers prefer a flat core network (OSPF area 0 / L2-only IS-IS)
Dynamic Routing & TE Tunnels
- Autoroute: all destinations located behind the TE tunnel endpoint are directed to the TE tunnel interface (unidirectional)
  - The tunnel's metric normally corresponds to the IGP metric between headend and tailend (shortest path, regardless of the actual tunnel path)
  - Logic local to the tunnel headend router
- Forwarding adjacency: the headend-tailend link (TE tunnel) is propagated into the OSPF/IS-IS database
  - Needs to be configured on both headend and tailend
MPLS Fast Reroute
- In case of a node or link failure, a backup LSP may be activated automatically (in tens of milliseconds); 50 ms failover is the goal (compare with SDH)
- The Fast Reroute option must be requested during LSP setup
- Global or local restoration
- (Similar functionality exists in an IP-only environment: IP Fast Reroute)
Fast Reroute: Global Restoration
- A new LSP is set up by the headend
- LSP failure is signalled to the headend by a PathErr RSVP message; failure detection uses RSVP hellos
- The headend has the most complete routing-constraints information to establish a new LSP
- The backup tunnel can be pre-signalled, or signalled only when the primary tunnel goes down; the latter option incurs tunnel-break detection and signalling delays
Fast Reroute: Local Restoration
- Detour LSP around the failed link/node
- The LSR that detected the failure (called the Point of Local Repair) starts using the alternative LSP
- Detour LSPs are manually preconfigured, or precalculated dynamically by the Point of Local Repair and pre-signalled
- The detour joins back the original LSP at the Merge Point: the next hop for link protection, the next-next hop for node protection
- Facility Backup (commonly used): double labeling is used on the detour path
  - The outer label is dropped before the packet enters the Merge Point
  - Packets arrive at the Merge Point with the same label as if they had come along the original LSP (just from a different interface)
  - A different input interface at the Merge Point is not an issue, as labels are allocated per platform, not per interface
- One-to-One Backup does not use label stacking: each LSP has its own backup path
MPLS QoS
MPLS and DiffServ
- An LSR uses the same mechanisms as a traditional router to implement different Per-Hop Behaviors (PHBs)
- 2 types of LSPs (may coexist in a single network):
  - EXP-inferred LSPs (mostly used): one LSP can transport multiple traffic classes simultaneously; EXP bits in the MPLS header hold the DSCP-derived value; the mapping between EXP and PHB is signalled during LSP setup (extension of LDP and RSVP, a new TLV defined)
  - Label-inferred LSPs: an LSP can transport just one traffic class; fixed mapping of <DSCP, EXP> to PHB standardized
DiffServ Tunneling over MPLS
- There are two markings of the packet (EXP, DSCP); different models handle the interaction between the multiple markings
- Pipe model: transfers the IP DSCP marking untouched; useful for interconnecting two DiffServ domains over MPLS
- Uniform model: uniform customer and provider QoS models; makes the LSP an extension of the DiffServ domain
MPLS VPNs
VPN Implementation: Options in General
Solutions to implement potentially overlapping address spaces of independent customers:
- Overlay model: the infrastructure provides tunnels between CPE routers (FR/ATM virtual circuits; IP tunnels: GRE, IPsec, ...)
- Peer-to-peer model: provider edge routers exchange routing information with customer edge routers
  - Customer routes are present in the service provider's routing protocol
  - Need to solve VPN separation and overlapping customer addressing; traditionally done by complicated filtering
  - Optimal routing between customer sites through the shared infrastructure: data does not need to follow the tunnel topology
MPLS/VPN Basic Principles
- MPLS helps separate traffic of different VPNs without the overlay model's tunneling techniques
- Routes from different VPNs kept separated: multiple routing tables (VRFs) implemented at edge routers (one for each VPN)
- Uses the MPLS label stack: the outer label identifies the egress edge router, the inner label identifies the VPN or a single route in a particular VPN
  - P routers in the MPLS core never see customers' addressing
- To allow propagation of IP prefixes from all VPNs into the core (BGP), the potentially overlapping addresses of separated VPNs are made unique with a Route Distinguisher (different for every VPN)
  - These VPN-IPv4 (VPNv4) addresses are propagated between PE routers as a new address family using Multiprotocol BGP
  - VPNv4 AF address = RD + IPv4 address; similarly for IPv6
- With each route, MP-BGP distributes the (inner) label identifying the particular route in the target VRF at the egress edge router (using BGP attributes)
- MP-BGP runs only between PEs; Ps are not involved at all
  - Ps only tunnel data traffic between PEs' loopbacks based on MPLS labels
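The "RD + IPv4 address" construction above is worth seeing concretely: two customers can reuse the same private prefix, yet their VPNv4 routes remain distinct in the PEs' BGP tables. The RDs, prefix, and labels below are illustrative examples only.

```python
# Sketch: the Route Distinguisher makes overlapping customer prefixes
# unique. RD values, the prefix, and VPN labels are hypothetical.

def vpnv4(rd, prefix):
    """A VPNv4 address is simply the RD prepended to the IPv4 prefix."""
    return f"{rd}:{prefix}"

# Two customers reuse the same RFC 1918 prefix...
cust_a = vpnv4("100:1", "10.0.0.0/24")
cust_b = vpnv4("100:2", "10.0.0.0/24")

# ...yet they are distinct keys in the PE's VPNv4 BGP table, each
# carrying its own inner (VPN) label.
bgp_table = {cust_a: ("vpn_label", 16), cust_b: ("vpn_label", 17)}
print(cust_a != cust_b)  # True: no collision despite identical IPv4 prefixes
```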
MPLS VPN Advantages
- Integrates the advantages of the overlay and peer-to-peer models
  - Overlay model advantages: security and customer address-space isolation
  - Peer-to-peer model advantages: routing optimality, simplicity of adding new CPEs (shared PEs)
- In very large implementations, the SP's route-reflector capacity and MPLS label space can still be a limitation
MPLS VPN Implementation
- VPN defined as a set of sites sharing the same routing information
- A site may belong to multiple VPNs
- Multiple sites (from different VPNs) may be connected to the same PE router
- PE routers maintain only routes for connected VPNs plus the backbone routes needed to reach other PEs
  - Increases scalability, decreases capacity requirements on the PE router
- A PE router uses IP on customer-facing interface(s) and MPLS on backbone-facing interfaces
- The backbone (P routers) uses only label switching
  - An IGP routing protocol is used only to establish optimal label switched paths between PE loopbacks (with LDP/RSVP)
- Utilizes the MPLS label stack
  - Inner (VPN) label identifies the VRF (or a particular route in the destination VRF)
  - Outer (transport) label identifies the egress LSR
Routing Information Exchange
- P-P and P-PE routers: using the IGP; needed to determine paths between PEs over the MPLS backbone
- PE-PE routers (non-adjacent): using MP-IBGP sessions; needed to exchange routing information between the routing tables (VRFs) of a particular VPN
Routing Information in PE Routers
- PE routers maintain multiple separated routing tables
- Global routing table: filled with backbone routes (from the core IGP); allows reaching other PE routers
- VRF (VPN Routing & Forwarding) instances: separate routing tables for individual VPNs
  - Every CE-facing router interface is assigned to a single VRF
  - A VRF instance can be seen as a virtual router
VPN Routing and Forwarding
[Diagram: CE routers of VPN A and VPN B connect to VRF A and VRF B on a PE router at the edge of the MPLS domain; each VRF acts as a virtual router for its VPN]
VRF Usage
[Diagram: a packet from a VPN A CE enters VRF A on the ingress PE, crosses the P core, and exits through VRF A on the egress PEs towards other VPN A sites; VPN B traffic is handled analogously by VRF B]
MPLS VPN Example
[Diagram: Customer A and Customer B sites in OSTRAVA (10.0.0.1/24) and TACHOV (10.0.1.1/24, 10.0.2.1/24) connect via PE routers I-PE and J-PE over an MPLS core with P router G-P; core links 1.0.0.0/24 and 2.0.0.0/24; both customers reuse the prefix 10.0.0.1/24]
VPN Route Distinguishing and Exchange Between PEs
[Diagram: extends the previous example with VRFs CustomerA-I (RD 100:1, RT 100:10) and CustomerB-I (RD 100:2, RT 100:20) on I-PE and VRFs CustomerA-J/CustomerB-J on J-PE; MP-BGP runs between the PE loopbacks lo0 3.0.0.1/32 and 3.0.0.2/32; the MPLS core runs an IGP (OSPF, IS-IS, ...)]
PE-to to-pe VPN Route Propagation PE router exports information from VRF to MP-BGP prefix uniqueness ensured using Route Distinguisher (64bit ID) Unique for the same VRF on all routers or unique per VRF+per router VPN-V4 prefix = RD + IPv4 prefix Route exported with a set of route target(s) specifying which target VRF should import the route Multiprotocol (MP) ) ibgp i session between PE routers over MPLS backbone (P routers) Full mesh (route reflectors often used) Propagates VPNv4 routes BGP attributes identify site-of-origin and route target(s) Opposite PE router imports information from MP-BGP into VRF(s) based on import Route Targets precofigured for each VRF 65
MPLS VPN BGP Attributes
- Site of Origin (SOO): identifies the site where the route originated; avoids loops
- Route Target: each VRF may be configured with which RT(s) it imports and which ones it exports
- Technically, the listed attributes are represented using extended communities; extended-community propagation has to be allowed between the respective BGP neighbors
Customer Route Advertisement from a PE Router (MP-BGP)
- PE router assigns the RD and RT(s) based on the source VRF, plus the SOO
- PE router assigns a VPN (MPLS) label, per VRF or per route
  - Identifies the particular VPN route (in the VPN site's routing table, i.e. in the VRF)
  - Used as the second label in the label stack; the top-of-stack label identifies the egress PE router
- The next hop of the propagated route is rewritten to the advertising PE router's loopback interface
- The MP-IBGP update is sent to other PE routers, most probably via a route reflector
Overlapping VPNs
- A site (VRF) may belong to multiple VPNs, provided that there is no address overlap
- Useful for shared services, extranets, Internet access, hub VRFs, etc.
- Multiple RT imports and exports may be configured for each particular VRF
- Typical usage both in SP networks and in DC cores
- Keep in mind that import/export routing exchange between VRFs is non-transitive
Overlapping VPNs Example
[Diagram: same OSTRAVA/TACHOV topology as the previous example, now with per-VRF route targets: CustomerA-I RT 100:11 and CustomerA-J RT 100:12 (RD 100:1), CustomerB-I RT 100:21 and CustomerB-J RT 100:22 (RD 100:2), on I-PE and J-PE over the MPLS core]
CE-to-PE Routing Information Exchange
- A CE router always exchanges routes with the VRF assigned to the interface connecting to that CE router
- Static routing or directly connected networks
- External BGP
- IGP (RIPv2, OSPF, EIGRP): multiple instances of the routing process (one for every VRF) run on the PE router, or separate routing contexts in a single routing process
OSPF: PE-CE Protocol Specifics
- Superbackbone concept: the MPLS backbone replaces area 0, or area 0 parts are connected via the superbackbone
- Routes seen as E1/E2 or IA based on whether the OSPF process ID / domain ID matches
- The Down bit protects against information looping via backdoor links
- EIGRP/RIP: metric transferred using the MED attribute
- BGP: easiest and most scalable
  - It might be necessary to manipulate BGP anti-looping rules if the same customer AS number is reused for multiple PE-CE routing sessions: AS override / ignore AS-path check
  - SOO may be used as additional protection against routing loops
Inter-AS MPLS VPN Options (RFC 2547bis)
- Separate IBGP/RR structures in different SPs' ASes; EBGP needed to distribute VPNv4 addresses
- Option 10A: back-to-back VRFs between ASBRs
- Option 10B: VPNv4 EBGP between ASBRs
- Option 10C: VPNv4 between RRs or PEs using multihop EBGP
Option 10A: Back-to-Back VRFs Between ASBRs
- Multiple subinterfaces/VRFs with IPv4 AF EBGP sessions between the ASBR PEs of AS1 and AS2
- No MPLS labels cross the boundary
- Each PE treats the other PE as a CE
- Easy, but not very scalable (4k VLAN tags per physical port)
Option 10B: VPNv4 EBGP Between ASBRs
- For trusted private peering only
- Labeled VPNv4 addresses distributed from PEs to the RR and the ASBR; the ASBR PE also peers with the RR; multiple ASBR PEs may be implemented
- EBGP redistribution of labeled VPN-IPv4 routes from AS1 to the neighboring AS2 (and to AS2's RR)
  - The top label of incoming data packets should be checked against the locally generated label table
- LSP from ingress PE1/AS1 to egress PE2/AS2; the LSP can span more than 2 ASes
- Route targets need to be agreed between the cooperating service providers
Option 10C: VPNv4 Between RRs (or PEs) Using Multihop EBGP
- ASBRs neither maintain nor distribute customers' VPNv4 routes; only /32 labeled routes to PE loopbacks
- EBGP used to redistribute labeled PE loopback routes to the neighboring AS's ASBR
- LSPs run between PEs in different ASes
- EBGP multihop session between RRs in neighboring ASes for the (labeled) VPNv4 AF (customer routes)
- If PE loopback /32 routes are not distributed to the P routers of all ASes, 3 labels are needed:
  - Innermost: assigned by the egress PE; identifies the output VRF/route
  - Middle: assigned by the ASBR, for the egress PE loopback
  - Topmost: assigned by the ingress PE's downstream router; LSP to the ASBR
- Similar to CsC