Multipoint LDP (mldp)


Multipoint LDP (mldp) IJsbrand Wijnands BRKIPM-3111

Agenda Introduction; FEC Encoding; Capability Negotiation; P2MP & MP2MP LSPs; Root Node Redundancy; Fast ReRoute using Link Protection; Make Before Break; Recursive FEC; MoFRR; In-band Signalling; Configuration and show commands

Introduction

Introduction Why mldp? Customers running MPLS in their networks want to run multicast natively over MPLS. The MPLS forwarding plane is shared between unicast and multicast, so unicast MPLS features carry over to multicast. Separating the data plane from the control plane has advantages.

Introduction (cont.) Why mldp? Simplification compared to PIM: no shared-tree to source-tree switchover, no (S,G,R) prunes, no DR election, no PIM Registers, no Asserts, no periodic messaging, no Auto-RP/BSR.

Introduction Extensions to LDP mldp is an extension to the IETF LDP protocol (RFC 3036); its procedures are documented in IETF RFC 6388, a joint effort by multiple vendors and customers. mldp reuses LDP protocol packets and neighbour adjacencies and is a client of the LDP infrastructure. mldp allows the creation of P2MP and MP2MP LSPs; we refer to these as Multipoint LSPs (MP LSPs).

Introduction Terminology P2MP: Point-to-Multipoint, like a PIM SSM tree. MP2MP: Multipoint-to-Multipoint, like a PIM Bidir tree. MP LSP: Multipoint LSP, either P2MP or MP2MP. Label Mapping: like a PIM Join. Label Withdraw: like a PIM Prune. Label Release and Notification: do not exist in PIM.

FEC Encoding

FEC Encoding The mldp FEC Element FEC stands for Forwarding Equivalence Class. A FEC is a unique identifier of a forwarding entry; for unicast this is a prefix, for PIM it is an (S,G) or (*,G). The FEC in mldp is a combination of three fields: the Tree Type, the Root Address, and a variable-length Opaque encoding. The Opaque field consists of TLVs, and each service/application can have its own TLV type, which is a very flexible approach to making the FEC unique.

FEC Encoding LDP message encoding FEC elements are carried within an LDP FEC TLV. mldp defines three FEC elements for MP LSPs: the P2MP FEC element, the MP2MP downstream FEC element, and the MP2MP upstream FEC element. The LDP protocol consists of messages which carry TLVs; a Label Mapping message carries a FEC TLV (holding the FEC element with its Tree Type, Root, and Opaque value), a Label TLV, and possibly other TLVs.

FEC Encoding The FEC Element encoding

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Type      |        Address Family         | Address Length|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
~                      Root Node Address                        ~
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         Opaque Length         |       Opaque Value ...        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
~                                                               ~
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Field              Description
Type               P2MP, MP2MP Up, MP2MP Down
Address Family     Address Family Numbers by IANA (IPv4 = 1, IPv6 = 2)
Address Length     Length of the address
Root Node Address  IP address of the MP LSP root (within the MPLS core)
Opaque Length      Length of the Opaque encoding that follows
Opaque field       TLV encoded
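As an illustration of the layout above, here is a minimal Python sketch that serializes an mldp FEC element (type, address family, address length, root node address, opaque length, opaque value). The helper name is made up for this example; the type code 0x06 is the RFC 6388 assignment for the P2MP FEC element.

```python
import struct
import socket

# RFC 6388 FEC element type codes: P2MP = 0x06,
# MP2MP-down = 0x07, MP2MP-up = 0x08.
P2MP = 0x06

def encode_mldp_fec(fec_type: int, root: str, opaque: bytes) -> bytes:
    """Serialize an mldp FEC element: Type (1 byte), Address Family
    (2 bytes), Address Length (1 byte), Root Node Address,
    Opaque Length (2 bytes), Opaque Value."""
    root_addr = socket.inet_aton(root)                 # IPv4 root address
    header = struct.pack("!BHB", fec_type, 1, len(root_addr))  # AF 1 = IPv4
    return header + root_addr + struct.pack("!H", len(opaque)) + opaque

# Example: P2MP FEC rooted at 10.0.0.1 carrying a Generic LSP ID opaque TLV
fec = encode_mldp_fec(P2MP, "10.0.0.1", b"\x01\x00\x04\x00\x00\x00\x2a")
```
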

FEC Encoding The mldp Root address The root address is used to route the LSP through the network, very much like how PIM routes the tree using the Source or RP address. Each LSR in the path resolves the next-hop of the root address; the Label Mapping message is then sent to that next-hop, resulting in a dynamically created MP LSP. There is no pre-computed, traffic-engineered path.

FEC Encoding Opaque Value The Opaque field is a variable-length value encoded as a TLV. mldp does not care what is encoded in the Opaque value; only the applications using the mldp LSP do. The value encoded is application specific: it can represent the (S,G) stream, or it can be an LSP identifier (Default/Data MDTs in mvpn).

FEC Encoding The mldp Opaque TLV encoding

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Type < 255   |            Length             |   Value ...   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
~                                                               ~
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Type name                Type #  Length        Value
Generic LSP ID           1       4 bytes       { 4-byte ID }
MVPN MDT                 2       11 bytes      { VPN-ID, MDT # }
IPv4 In-band signalling  3       8 bytes       { Source, Group }
IPv6 In-band signalling  4       32 bytes      { Source, Group }
Recursive FEC            7       variable      { FEC element }
Recursive VPN FEC        8       8 + variable  { RD, FEC element }
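The basic opaque TLV layout (1-byte type below 255, 2-byte length, value) can be sketched in Python as follows. The helper names are illustrative; the type numbers come from the table above.

```python
import struct
import socket

def opaque_tlv(tlv_type: int, value: bytes) -> bytes:
    """Basic opaque TLV: 1-byte type (< 255), 2-byte length, value."""
    assert tlv_type < 255, "type 255 escapes to the Extended Opaque TLV"
    return struct.pack("!BH", tlv_type, len(value)) + value

def generic_lsp_id(lsp_id: int) -> bytes:
    # Type 1: a 4-byte LSP identifier
    return opaque_tlv(1, struct.pack("!I", lsp_id))

def ipv4_inband(source: str, group: str) -> bytes:
    # Type 3: 8-byte {Source, Group} for IPv4 in-band signalling
    return opaque_tlv(3, socket.inet_aton(source) + socket.inet_aton(group))
```
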

FEC Encoding The mldp Extended Opaque TLV encoding

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Type = 255   |         Extended Type         | Length (high) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length (low)  |            Value ...                          |
+-+-+-+-+-+-+-+-+                                               +
~                                                               ~
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Defined in case we exceed the available 255 types. Currently not used. First-come-first-served allocation, no IETF draft needed.

Capability negotiation

Capability negotiation Why do we need it New FEC Elements are added to LDP for mldp, and you don't know whether your LDP neighbour understands the new FEC types. You may also want to prevent certain types from being used in the network. Not knowing what a neighbour supports is inconvenient while troubleshooting/deploying a feature. For that reason Capability Negotiation has been defined for LDP.

Capability negotiation RFC 5561 RFC 5561 allows advertising of capability TLVs, either at session initialisation time within the Initialisation message, or dynamically during the session within a Capabilities message. Several mldp capability TLVs are defined: P2MP (Point-to-Multipoint) TLV 0x0508, MP2MP (Multipoint-to-Multipoint) TLV 0x0509, and MBB (Make Before Break) TLV 0x050A. Capability negotiation is also used for other purposes (not only mldp), such as Typed Wildcard FEC and Upstream Label Assignment.

P2MP and MP2MP LSP building

P2MP & MP2MP LSPs Determining the upstream LDP neighbour In order to build a tree, the upstream LDP neighbour needs to be determined based on the root address. This is similar to the RPF check in PIM. A unicast route lookup is done on the root address until a directly connected next-hop is found. However, it is very likely there is no LDP neighbour with the same address as the next-hop, because the LDP session runs between the loopback addresses (note that this is different from PIM). LDP announces all of its interface addresses to its neighbours, and we use that address database to find the LDP neighbour.

P2MP & MP2MP LSPs Upstream LDP neighbour, example for root 10.0.0.1
(Figure: GSR2 (10.0.0.2) has an LDP session with 10.0.0.4, whose interface address is 10.0.4.1; determine the upstream LDP peer for root 10.0.0.1.)

RP/0/3/CPU0:GSR2#sh route 10.0.0.1
Routing entry for 10.0.0.1/32
  Known via "ospf 0", distance 110, metric 3, type intra area
  Installed Feb 6 06:43:57.931 for 1w1d
  Routing Descriptor Blocks
    10.0.4.1, from 10.0.0.1, via GigabitEthernet0/5/0/1
      Route metric is 3
  No advertising protos.

RP/0/3/CPU0:GSR2#sh mpls ldp neighbor 10.0.0.4
Peer LDP Identifier: 10.0.0.4:0
  TCP connection: 10.0.0.4:17191 - 10.0.0.2:646
  Graceful Restart: No
  Session Holdtime: 180 sec
  State: Oper; Msgs sent/rcvd: 10114/10106; Downstream-Unsolicited
  Up time: 6d02h
  LDP Discovery Sources:
    GigabitEthernet0/5/0/1
  Addresses bound to this peer:
    10.0.4.1  10.0.7.1  10.0.9.2  10.0.14.1

RP/0/3/CPU0:GSR2#sh mpls mldp neighbors addresses 10.0.4.1
Wed Feb 15 05:51:18.786 UTC
LDP remote address : 10.0.4.1
LDP remote ID(s)   : 10.0.0.4:0

P2MP & MP2MP LSPs Determining the downstream interface A Label Mapping is received over the LDP session; the source of the Label Mapping is the LDP-ID of the sender. In order to program forwarding, the interface and directly connected next-hop need to be found. This interface/next-hop does not come with the Label Mapping, which only carries the label. We use the LDP Discovery messages to know which interfaces are connected to the LDP neighbour. There is no equivalent to this in PIM.

P2MP & MP2MP LSPs Downstream interface, example for LDP neighbour 10.0.0.2
(Figure: 10.0.0.4 has an LDP session with 10.0.0.2; determine the downstream interface for LDP peer 10.0.0.2, whose interface address is 10.0.4.2.)

RP/0/1/CPU0:GSR3#sh mpls ldp neighbor 10.0.0.2
Peer LDP Identifier: 10.0.0.2:0
  TCP connection: 10.0.0.2:646 - 10.0.0.4:17191
  Graceful Restart: No
  Session Holdtime: 180 sec
  State: Oper; Msgs sent/rcvd: 11594/11605; Downstream-Unsolicited
  Up time: 1w0d
  LDP Discovery Sources:
    GigabitEthernet0/2/1/2
  Addresses bound to this peer:
    10.0.4.2  10.0.14.2  10.10.10.1

RP/0/1/CPU0:GSR3#sh mpls ldp discovery 10.0.0.2:0 det
Local LDP Identifier: 10.0.0.4:0
Discovery Sources:
  Interfaces:
    GigabitEthernet0/2/1/2 (0x3000800) : xmit/recv
      Source address: 10.0.4.1; Transport address: 10.0.0.4
      Hello interval: 5 sec (due in 1.7 sec)
      Quick-start: Enabled
      LDP Id: 10.0.0.2:0
        Source address: 10.0.4.2; Transport address: 10.0.0.2
        Hold time: 15 sec (local:15 sec, peer:15 sec) (expiring in 12.9 sec)

P2MP & MP2MP LSPs Upstream and Downstream ECMP There can be multiple upstream LDP neighbours to reach the root, and multiple downstream interfaces to reach a neighbour. We support per-LSP load balancing across the candidates.
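One plausible way to realise per-LSP load balancing is to hash the FEC so a given MP LSP consistently lands on one of the ECMP candidates. This is a sketch of the idea under that assumption, not the actual implementation.

```python
import hashlib

def select_upstream(fec: bytes, candidates: list[str]) -> str:
    """Pick one upstream LDP neighbour per MP LSP: hashing the FEC
    keeps the choice stable for a given LSP while spreading
    different LSPs across the ECMP candidates."""
    digest = hashlib.sha256(fec).digest()
    index = int.from_bytes(digest[:4], "big") % len(candidates)
    return candidates[index]
```
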

P2MP & MP2MP LSPs P2MP Overview A P2MP LSP is rooted at the ingress LSR and is unidirectional. Egress LSRs initiate the tree creation using unicast reachability to the root address: receiver driven, hop-by-hop towards the root.

P2MP and MP2MP LSPs P2MP setup (Figure: North (10.0.0.1) is the root with the Sender attached; West and East are receivers. West and East send Label Map {P2MP FEC, 10.0.0.1, Opaque} with labels 19 and 23 to Central; Central sends Label Map {P2MP FEC, 10.0.0.1, Opaque} with label 48 towards North.)

P2MP & MP2MP LSPs P2MP packet flow (Figure: the source (S) behind North sends (S,G) data; North imposes the downstream path label 21. Central holds P2MP state {10.0.0.1, Opaque} and replicates the downstream traffic towards West and East with labels 20 and 22.)

P2MP & MP2MP LSPs show mpls mldp database
(Figure: same P2MP tree; North (10.0.0.1) is the root with source (S), West and East are receivers; local label 21 in, labels 20 and 22 out.)

RP/0/1/CPU0:GSR3#sh mpls mldp database
Tue Feb 28 06:10:35.101 UTC
mldp database
LSM-ID: 0x00006  Type: P2MP  Uptime: 2w5d
  FEC Root       : 10.0.0.1
  Opaque decoded : [vpnv4 2:2 192.169.0.1 232.2.2.2]
  Upstream neighbor(s):
    10.0.0.1:0 [Active]  Uptime: 2w5d
      Next Hop        : 10.0.3.1
      Interface       : GigabitEthernet0/2/1/1
      Local Label (D) : 21
  Downstream client(s):
    LDP 10.0.0.2:0  Uptime: 2w5d
      Next Hop         : 10.0.4.2
      Interface        : GigabitEthernet0/2/1/2
      Remote label (D) : 20
    LDP 10.0.0.3:0  Uptime: 2w5d
      Next Hop         : 10.0.5.2
      Interface        : GigabitEthernet0/2/1/3
      Remote label (D) : 22

P2MP & MP2MP LSPs MP2MP Overview An MP2MP LSP allows multiple leaf LSRs to inject packets into the tree. It is constructed from a downstream and an upstream path, merged such that together they create an MP2MP LSP. An MP2MP LSP is MP2MP in the control plane, but translates into P2MP replication in the data plane. The downstream path is much like a normal P2MP LSP. The upstream path is like a P2P LSP upstream, but it inherits labels from the downstream path.

P2MP & MP2MP LSPs MP2MP setup (Figure: North (10.0.0.1) holds the source (S); West and East are leaves with receivers. Label Maps for the MP2MP downstream FEC {10.0.0.1, Opaque} carry labels 21 (towards North) and 20 and 22 (from P-Central towards East and West); Label Maps for the MP2MP upstream FEC carry label 30 (from North) and labels 31 and 32 (towards East and West).)

P2MP & MP2MP LSPs MP2MP packet flow (Figure: (S,G) data entering at North flows downstream with label 21, then 20 and 22 towards East and West; data injected at a leaf flows upstream with labels 31/32 towards 30 and is replicated downstream to the other leaves. P-Central holds the MP2MP state {10.0.0.1, Opaque}.)

P2MP & MP2MP LSPs show mpls mldp database
(Figure: same MP2MP tree; North (10.0.0.1) is the root; downstream path labels 21/20/22, upstream path labels 30/31/32.)

RP/0/1/CPU0:GSR3#sh mpls mldp database
LSM-ID: 0x00001  Type: MP2MP  Uptime: 3w1d
  FEC Root       : 10.0.0.1
  Opaque decoded : [mdt 1:1 0]
  Upstream neighbor(s):
    10.0.0.1:0 [Active]  Uptime: 2w5d
      Next Hop         : 10.0.3.1
      Interface        : GigabitEthernet0/2/1/1
      Local Label (D)  : 21
      Remote Label (U) : 30
  Downstream client(s):
    LDP 10.0.0.2:0  Uptime: 2w5d
      Next Hop         : 10.0.4.2
      Interface        : GigabitEthernet0/2/1/2
      Remote label (D) : 20
      Local label (U)  : 31
    LDP 10.0.0.3:0  Uptime: 2w5d
      Next Hop         : 10.0.5.2
      Interface        : GigabitEthernet0/2/1/3
      Remote label (D) : 22
      Local label (U)  : 32

P2MP & MP2MP LSPs MPLS forwarding table

P3#sh mpls forwarding-table
Local  Outgoing  Prefix        Bytes Label  Outgoing   Next Hop
Label  Label     or Tunnel Id  Switched     interface
21     20        [mdt 1:1 0]   11518920     East       point2point
       22        [mdt 1:1 0]   11518920     West       point2point
32     30        [mdt 1:1 0]   11518920     North      point2point
       20        [mdt 1:1 0]   11518920     East       point2point
31     30        [mdt 1:1 0]   11518920     North      point2point
       22        [mdt 1:1 0]   11518920     West       point2point

For each direction (North, East and West) a P2MP label replication entry is programmed into the MPLS forwarding table. The number of label replications depends on the number of LDP neighbours participating in the MP2MP LSP.
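The replication rule behind this table can be modelled in a few lines of Python, using the labels from the example: a packet arriving on one branch of the MP2MP LSP is replicated to every other branch, which is exactly the per-direction P2MP behaviour shown above.

```python
# Branches of the MP2MP LSP at P3, with the labels from the example:
# in_label = local label for traffic arriving from that branch,
# out_label = outgoing label towards that branch.
branches = {
    "North": {"in_label": 21, "out_label": 30},
    "East":  {"in_label": 31, "out_label": 20},
    "West":  {"in_label": 32, "out_label": 22},
}

def replication(in_label: int, branches: dict) -> dict:
    """Return {branch: out_label} for every branch except the one
    the packet arrived on (MP2MP -> P2MP replication per direction)."""
    arrived = next(n for n, b in branches.items() if b["in_label"] == in_label)
    return {n: b["out_label"] for n, b in branches.items() if n != arrived}
```
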

P2MP & MP2MP LSPs MP2MP benefits An MP2MP LSP creates only one state entry in the control plane, independent of the number of senders/receivers; a full mesh of P2MP LSPs creates control-plane state for each sender/receiver. An MP2MP LSP also uses fewer labels to create an MP2MP service than a full mesh of P2MP LSPs.

P2MP & MP2MP LSPs Full mesh Label and State comparison

5 PEs:
  Per PE:  MP2MP: 1 local label, 1 state entry;    full mesh of P2MP: 4 local labels, 5 state entries
  Core:    MP2MP: 5 local labels, 1 state entry;   full mesh of P2MP: 5 local labels, 5 state entries
100 PEs:
  Per PE:  MP2MP: 1 local label, 1 state entry;    full mesh of P2MP: 99 local labels, 99 state entries
  Core:    MP2MP: 100 local labels, 1 state entry; full mesh of P2MP: 100 local labels, 99 state entries

Root Node Redundancy

Root Node Redundancy Why do we need it The root node is a single point of failure; only one root node is active in an MP LSP. If the root is statically configured, there is a need for redundancy; if the root is dynamically learned via BGP, no redundancy procedures are needed. The requirements are: a redundancy mechanism in the event of a root failure, and fast convergence in selecting a new root.

Root Node Redundancy Solution 1: Anycast root address Each root injects the address 10.1.1.1 with a different mask; the longest match is preferred, in this example Root 2. When the longest match disappears, the next best is used. (Figure: Root 1 injects 10.1.1.1/31, Root 2 injects 10.1.1.1/32; leaves A, B and C connect CEs with sources and a receiver.)

Root Node Redundancy Solution 1: Anycast root address After the preferred root fails, the LSP is rerouted to the next best root based on the mask length. All MP2MP LSPs prefer the same root node. There is a single MP2MP LSP at any given time, so no hot-standby path, and no load balancing over the anycast roots. This type of redundancy is a configuration trick, also used for PIM.

Root Node Redundancy Solution 2: Hot standby Create two or more root nodes for hot-standby MP2MP LSPs. Each leaf is configured with the same set of root nodes, and each leaf joins ALL the configured root nodes. Each leaf ACCEPTS traffic from ALL roots, but is ONLY allowed to send to ONE selected root.

Root Node Redundancy Solution 2: Hot standby Leaf A selects Root 1 and leaf C selects Root 2 as the preferred node; leaf B gets the packets from both A and C. (Figure: sources behind leaves A and C, a receiver behind leaf B, with Root 1 and Root 2 both in the core.)

Root Node Redundancy Solution 2: Hot standby Root selection is based on the leaf's IGP reachability to the roots. (Figure: a unicast routing update changes which root a leaf selects.)

Root Node Redundancy Solution 2: Hot standby Switching to a new root is as fast as IGP convergence. Root selection is a local leaf policy and can be based on IGP distance, load, etc. Roots can share the tree load from the leaves. A separate MP2MP LSP is created for each root, and multi-path load balancing is supported in both the upstream and downstream directions.

Root Node Redundancy Summary Two types of redundancy Anycast root node redundancy Hot standby redundancy Additional state vs. failover time Both are implemented Needed only when root node is statically configured Switchover is in the order of seconds (depending on IGP) 45

Fast ReRoute

mldp Fast ReRoute Link Protection mldp shares the downstream-assigned label space that unicast is using; for the MPLS forwarding plane there is in essence no difference between multicast and unicast packets. Since the forwarding plane is shared with unicast, certain unicast features are inherited for multicast, like FRR. The link can be protected by a TE P2P LSP or an LDP LFA P2P LSP.

mldp Fast ReRoute Link Protection (Figure: the root sits behind router A; the mldp LSP runs D, B, A with labels 16, 17, 18; a TE/LFA backup P2P tunnel for link A-B runs via router C with tunnel label L20.) 1. There is a unicast backup P2P tunnel that protects link A-B. 2. The mldp LSP is built from D, via B and A, towards the root. 3. Router A installs a downstream forwarding replication over link A-B to router B.

mldp Fast ReRoute Link Protection (Figure: link A-B breaks; traffic follows the backup tunnel A, C, B, with C as the penultimate hop.) 1. Link A-B breaks. 2. Traffic over link A-B is rerouted over the backup tunnel by imposing the tunnel label 20. 3. Router C does PHP and strips the outer label 20. 4. Router B receives the mldp packets with label 17 and forwards as normal to router D.

mldp Fast ReRoute Link Protection (Figure: after convergence the mldp path runs from B via C to A, with label 21 on the B-C leg and label 22 on the C-A leg.) 1. mldp is notified that the root is reachable via router C and will converge. 2. A new mldp path is built to router A via C. 3. Router A forwards packets natively over the mldp LSP towards B (label 22). 4. Temporarily, router B receives packets both over the backup P2P tunnel and natively; due to the RPF check on the label, only the packets received over the TE tunnel are forwarded. 5. Router B uses a make-before-break trigger to switch from the backup tunnel to the new native mldp LSP, label 17 to 21. 6. Router B prunes off the backup tunnel with a Label Withdraw to router A.

mldp Fast ReRoute Link Protection There are two make-before-break triggers: additional signalling added in mldp to notify the downstream router that the LSP is complete, as documented in the mldp RFC; or a configurable delay applied before switching to the new path. A combination of both is possible.

mldp Fast ReRoute MP2MP MP2MP LSPs are translated into a set of P2MP replications in forwarding. For FRR there is no special handling needed for MP2MP, because forwarding is based on P2MP. MP2MP is supported for both TE tunnel and LFA backup tunnels.

Make Before Break

Make Before Break Introduction With Make Before Break (MBB) we set up a new tree before we tear down the old tree. This makes sense when the old tree is still forwarding packets, which is typically true in combination with FRR and with IGP-based convergence on link-up events or metric changes. When the old tree is broken, MBB does not help; MBB and FRR go hand-in-hand. mldp MBB uses Request and Ack signalling to determine that the new tree is ready to forward packets.

Make Before Break MBB Request and Ack (Figure: root behind A; existing tree C, B, A with labels 18 and 16; a new link E-C comes up.) 1. The initial tree runs from C, to B, to A. 2. Link E-C comes up and provides a better path to reach the root, via A. 3. C re-converges towards E, sending a Label Map with an MBB Request. 4. E has no state yet and forwards the MBB Request to A. 5. A has active forwarding state and sends a Notification with an MBB Ack down the tree, hop-by-hop to C. Packets are also forwarded.

Make Before Break Switch to new path (Figure: C switches from the old path via B to the new path via E.) 1. As soon as C receives the MBB Ack, it starts accepting from E (label 23) and starts dropping from B (label 21). 2. C breaks the old LSP with a Label Withdraw.

Make Before Break FRR (Figure: root behind A; LDP session between A and C; a TE/LFA P2P backup tunnel runs from A via B to C.) 1. Recall that with FRR we use the MBB trigger on C to switch from the TE tunnel to a new native path, i.e. start accepting from label 21 and dropping from label 17. 2. C is the tail-end of the tunnel, so it does not see any tunnel; from C's point of view the packets are coming from A. 3. C runs the MBB procedures between LDP neighbours A and B. 4. How can C send a withdraw to LDP neighbour A while link A-C is down? 5. A and C have session protection configured, so the neighbour relationship stays up. 6. LDP neighbours are established over a TCP session between loopbacks, and connectivity remains between A and C via B.

Make Before Break Summary A Label Mapping with an MBB Request is forwarded upstream until a node with active forwarding state is found, or the root node is reached. The MBB Ack is sent down the tree via an LDP Notification message. As soon as a node receives the MBB Ack, the tree is ready. An additional delay may be added before cleaning up the old tree, to allow the platform to program all the forwarding state to the linecards. MBB is needed to avoid additional loss when moving from the FRR TE tunnel to a new native path. LDP session protection is used to keep the LDP neighbour up; LDP connectivity remains due to the TCP session.
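The MBB Request/Ack walk summarized above can be modelled as a small sketch. The helper and data shapes are hypothetical; nodes are just names, and `has_active_state` stands in for the per-node forwarding state check.

```python
def mbb_request(path_to_root: list[str], has_active_state) -> list[str]:
    """Walk the MBB Request upstream hop by hop until a node with
    active forwarding state (or the root, the last element) is found;
    return the hops the MBB Ack then travels back down, in order."""
    for hop, node in enumerate(path_to_root):
        if has_active_state(node) or node == path_to_root[-1]:
            # Ack travels hop-by-hop back down to the requester
            return list(reversed(path_to_root[: hop + 1]))
    return []

# Example from the MBB slides: C requests via E towards A (the node
# with active state); the Ack comes back A -> E -> (C).
ack_path = mbb_request(["E", "A", "Root"], lambda n: n == "A")
```
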

Recursive FEC

Recursive FEC Introduction Recursive FEC (RFC 6512) is used to route an mldp LSP across (part of) a network that may not have IGP reachability to the root of the LSP. This is similar to the PIM RPF vector. The original FEC is encapsulated in a new FEC whose root is a reachable intermediate node in the network. Applicability: Carriers' Carrier (CsC), BGP-free core, Seamless MPLS.

Recursive FEC BGP free core / seamless MPLS / Inter-AS (Figure: Access, ABR2, P core, ABR1, root; the original FEC {P2MP, Root, Opaque} is wrapped in a recursive FEC {P2MP, ABR1, Opaque}.) A Label Mapping comes in from the Access network to ABR2 with the original FEC. ABR2 looks up the root in its routing table and finds a BGP route with next-hop ABR1; ABR1 becomes the root of the recursive FEC. The LSP is routed through the core based on reachability to ABR1. ABR1 retrieves the original FEC from the Opaque encoding and continues.

Recursive FEC Multiple recursions (Figure: two core segments between two access networks, separated by ABR1, ABR2 and ABR3; the original FEC {P2MP, Root, Opaque} is successively wrapped in recursive FECs rooted at the next ABR, e.g. {P2MP, ABR2, ...} and {P2MP, ABR1, ...}.) Multiple recursions are supported: ABR2 finds a BGP route for the root and immediately encodes it into a new FEC. This is typical for an Inter-AS deployment between the ASBRs.

Recursive FEC Control plane state example

LSM-ID: 0x0000D  Type: P2MP  Uptime: 00:00:30
  FEC Root       : 10.0.0.11
  Opaque decoded : [static-id 0]
  Features       : RFEC
  Upstream neighbor(s):
    Recursive encode LSM-ID: 0x0000E
  Downstream client(s):
    LDP 10.0.0.2:0  Uptime: 00:00:30
      Next Hop         : 10.0.4.2
      Interface        : GigabitEthernet0/2/1/2
      Remote label (D) : 16027

The root node is 10.0.0.11; the upstream neighbour is the recursive encode LSM-ID, effectively treating the recursive FEC as an upstream neighbour.

LSM-ID: 0x0000E  Type: P2MP  Uptime: 00:00:35
  FEC Root       : 10.0.0.1
  Opaque decoded : [recursive] 10.0.0.11:[static-id 0]
  Features       : RFEC
  Upstream neighbor(s):
    10.0.0.1:0 [Active]  Uptime: 00:00:35
      Next Hop        : 10.0.3.1
      Interface       : GigabitEthernet0/2/1/1
      Local Label (D) : 1048566
  Downstream client(s):
    Recursive 0x0000D  Uptime: 00:00:35

The recursive root node is 10.0.0.1; the original FEC (0x0000D) is treated as a downstream client, and the Opaque encoding carries the original FEC.

Recursive FEC Forwarding plane example

RP/0/0/CPU0:GSR3#sh mpls forwarding labels 1048566
Fri Mar 9 22:23:33.835 UTC
Local    Outgoing  Prefix            Outgoing    Next Hop  Bytes
Label    Label     or ID             Interface             Switched
-------  --------  ----------------  ----------  --------  --------
1048566  16027     MLDP LSM ID: 0xe  Gi0/2/1/2   10.0.4.2  68498

The original and recursive FECs are stitched in the forwarding plane: the local label comes from the recursive FEC (upstream), and the outgoing label comes from the original FEC (downstream). The forwarding plane is flat, a single entry.

Recursive FEC Encodings There are two types of recursive encodings: a global table recursive encoding, used for BGP-free core, Seamless MPLS, and Inter-AS; and a VPN recursive encoding, used for Carriers' Carrier (CsC) and Inter-AS. The only difference is the RD being part of the encoding.

Recursive FEC The Recursive Opaque Encoding

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|   Type == 7   |            Length             |               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
~                  P2MP or MP2MP FEC element                    ~
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Field        Description
Type         Recursive Opaque Encoding, type 7 (RFC 6512)
Length       Variable, depending on the FEC element
FEC element  The complete mldp FEC

Recursive FEC The VPN Recursive Opaque Encoding

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|   Type == 8   |            Length             |               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
~                Route Distinguisher (8 octets)                 ~
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
~                  P2MP or MP2MP FEC element                    ~
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Field        Description
Type         VPN Recursive Opaque Encoding, type 8 (RFC 6512)
Length       Variable + 8, depending on the FEC element
RD           Route Distinguisher (8 octets)
FEC Element  The complete mldp FEC
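Both recursive encodings wrap a complete FEC element, per the layouts above. A minimal Python sketch (helper names are illustrative; the wrapped FEC element is passed in as opaque bytes):

```python
import struct

def recursive_opaque(fec_element: bytes) -> bytes:
    """Type 7: 1-byte type, 2-byte length, then the complete
    original mldp FEC element (global table variant)."""
    return struct.pack("!BH", 7, len(fec_element)) + fec_element

def vpn_recursive_opaque(rd: bytes, fec_element: bytes) -> bytes:
    """Type 8: same, but an 8-octet Route Distinguisher precedes
    the FEC element (VPN variant, e.g. CsC)."""
    assert len(rd) == 8, "RD is always 8 octets"
    return struct.pack("!BH", 8, 8 + len(fec_element)) + rd + fec_element
```
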

Recursive FEC Summary Recursive FEC is useful in various deployments: BGP-free core, Inter-AS, Seamless MPLS, and Carriers' Carrier (CsC). There are two different encodings: global table and VPN. The trees are stitched at the control plane but flat at the forwarding plane.

Multicast only Fast ReRoute (MoFRR)

MoFRR Introduction MoFRR is a Live-Live solution to provide redundancy. Based on ECMP or LFA alternate paths, two trees are built towards the root of the MP LSP. It is documented at the IETF via draft-karan-mofrr-02 and applies to both PIM and mldp (the initial idea came from PIM). A node dual-connected to the two trees may switch between them very quickly based on different triggers: link status, IGP, BFD, or traffic flow.

MoFRR Example Link Status (Figure: root behind A; C is dual-connected via B (label 18) and E (label 16), with receiver D downstream.) C has ECMP reachability to the root via B and E, and joins the LSP via both. C forwards packets from B and blocks traffic from E (the secondary). C receives two identical packets, but forwards only one.

MoFRR Example Link Status (Figure: the path via B fails.) C detects the upstream failure towards B, blocks traffic from B, and unblocks traffic from E. The traffic flow recovers without additional protocol signalling.

MoFRR Link coming back up When a previously broken link comes back up, what do we do: stick with the existing link, or revert to the previous one? We stick with the existing link so as not to cause additional traffic loss. Even though the router is receiving both streams, switching from one to the other may cause duplicates or loss of packets, not necessarily due to the router, but possibly due to buffering/link delays between the two paths.
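The non-revertive switchover behaviour described above can be modelled with a small state machine. This is an illustrative sketch, not the platform implementation.

```python
class MoFRR:
    """Two upstream joins for the same FEC: forward only from the
    active one, switch on failure, and do NOT revert when the old
    path comes back (to avoid extra duplicates/loss)."""

    def __init__(self, primary: str, secondary: str):
        self.active, self.standby = primary, secondary

    def accept(self, upstream: str, packet):
        # Identical packets arrive on both trees; forward one copy.
        return packet if upstream == self.active else None

    def upstream_failed(self, upstream: str):
        if upstream == self.active:
            self.active, self.standby = self.standby, self.active

    def upstream_restored(self, upstream: str):
        pass  # non-revertive: stick with the current active path
```
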

MoFRR Example: show output

RP/0/0/CPU0:GSR3#sh mpls mldp database opaquetype static-id
Tue Mar  6 23:12:04.060 UTC
mldp database
LSM-ID: 0x0000C  Type: P2MP  Uptime: 00:04:00
  FEC Root             : 10.0.0.15
  Opaque decoded       : [static-id 0]
  Features             : MoFRR
  Upstream neighbor(s) :
    10.0.0.1:0 [Active] Uptime: 00:04:00
      Next Hop         : 10.0.3.1
      Interface        : GigabitEthernet0/2/1/1
      Local Label (D)  : 1048562
    10.0.0.6:0 [Inactive] Uptime: 00:00:20
      Next Hop         : 10.0.9.1
      Interface        : GigabitEthernet0/2/1/0
      Local Label (D)  : 1048563
  Downstream client(s):
    LDP 10.0.0.2:0 Uptime: 00:04:00
      Next Hop         : 10.0.4.2
      Interface        : GigabitEthernet0/2/1/2
      Remote label (D) : 16026

- There are two upstream neighbours for the same P2MP FEC
- 10.0.0.1:0 is the Active neighbour
- 10.0.0.6:0 is the Inactive (standby) neighbour

MoFRR Summary
- Join the same LSP via two different upstream paths
- The Repair Point router (initiating the MoFRR) can switch to the standby upstream path based on a fast trigger
- Works best in dual-plane topologies
- Otherwise, path separation is possible with Multi-Topology routing or static routing

In-band signaling global table

In-band signaling: global context, (S,G)
[Diagram: Sources S1, S2 and S3 behind Root-PEs; Receivers behind R-PEs; PIM (S,G) joins at the edge are mapped to P2MP LSPs with FECs {S1,G}, {S2,G} and {S3,G} across the MPLS cloud]
- The PIM (S,G) tree is mapped to a mldp P2MP LSP
- The Root PE is learned via the BGP Next-Hop of the Source address
- The R-PE may use SSM Mapping if the Receiver is not SSM aware

In-band signaling: global context, (*,G)
[Diagram: Sources S1, S2 and S3 and their RPs behind Root-PEs; Receivers behind R-PEs; PIM (*,G) joins are mapped to P2MP LSPs with FECs {*,G1} and {*,G2} across the MPLS cloud]
- The PIM (*,G) tree is mapped to a mldp P2MP LSP
- The Root PE is learned via the BGP Next-Hop of the RP address
- All sources known by the RP are forwarded down the tree

In-band signaling: global context, summary
- Very useful for IPTV deployments
- Works with PIM SSM and (*,G) trees; no Sparse-Mode
- SSM Mapping may be deployed to convert to SSM
- One-to-one mapping between a PIM tree and a mldp LSP
- No flooding/wasting of bandwidth
- Works well if the amount of state is bounded
- IOS support: GSR, CRS (shipping); 7600 (shipping); ASR9K (shipping); ASR1K (shipping)
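The one-to-one mapping works because the (S,G) itself is carried inside the mldp FEC's opaque value: the R-PE encodes it, and the Root PE decodes it to originate the matching PIM join. A rough sketch of that encoding, assuming the Transit IPv4 Source opaque value (type 3, RFC 6826); the function name is made up for illustration:

```python
import ipaddress
import struct

TRANSIT_IPV4_SOURCE_TYPE = 3  # RFC 6826 in-band (S,G) opaque value


def inband_opaque(source: str, group: str) -> bytes:
    """Carry a PIM (S,G) inside a mldp P2MP FEC: 4-octet source address
    followed by 4-octet group address, prefixed with Type and Length."""
    value = (ipaddress.IPv4Address(source).packed
             + ipaddress.IPv4Address(group).packed)
    return struct.pack("!BH", TRANSIT_IPV4_SOURCE_TYPE, len(value)) + value
```

Because the opaque value is derived purely from the (S,G), every R-PE that joins the same channel computes the same FEC and so grafts onto the same P2MP LSP.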

In-band signaling VPN context

In-band signaling: MVPN context
[Diagram: Sources S1 and S2 behind CEs attached to Root-PEs; Receivers behind CEs attached to R-PEs; per-VRF PIM (S,G) joins are mapped to P2MP LSPs with FECs {RD,S1,G} and {RD,S2,G} across the MPLS cloud]
- The PIM (S,G) VPN tree is mapped to a mldp P2MP LSP
- The Root PE is learned via the BGP Next-Hop of the VPNv4 Source address
- The R-PE may use SSM Mapping if the Receiver is not SSM aware
- The RD of the source VRF is included in the mldp FEC to allow overlapping (S,G) addresses
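The RD-qualified FEC from the slide above can be sketched the same way as the global-table case: prefix the (S,G) with the 8-octet RD of the source VRF so that overlapping customer addresses stay distinct in the core. This assumes the Transit VPNv4 Source opaque value (type 5, RFC 7246); the names are illustrative.

```python
import ipaddress
import struct

TRANSIT_VPNV4_SOURCE_TYPE = 5  # RFC 7246 VPN in-band (RD,S,G) opaque value


def vpn_inband_opaque(rd: bytes, source: str, group: str) -> bytes:
    """Encode {RD, S, G} so that two VRFs using the same (S,G) map to
    different P2MP LSPs in the core."""
    if len(rd) != 8:
        raise ValueError("Route Distinguisher must be 8 octets")
    value = (rd
             + ipaddress.IPv4Address(source).packed
             + ipaddress.IPv4Address(group).packed)
    return struct.pack("!BH", TRANSIT_VPNV4_SOURCE_TYPE, len(value)) + value
```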

In-band signaling: MVPN context, summary
- Same characteristics as the global table case
- Not well suited for generic MVPN support
- IOS support: GSR, CRS (shipping); 7600 (shipping); ASR9K (shipping); ASR1K (shipping)

Configuration and show commands

Configuration and show commands: basic mldp configuration

RP/0/0/CPU0:GSR3#sh run mpls ldp
mpls ldp
 mldp
 !
 interface GigabitEthernet0/2/1/0
 !
 interface GigabitEthernet0/2/1/1
 !
 interface GigabitEthernet0/2/1/2
  mldp disable
 !
!

- Configuration of mldp is a sub-mode of LDP
- Applies by default to all interfaces enabled for LDP, unless explicitly disabled under the interface configuration
- mldp show commands are under "show mpls mldp ..."

Configuration and show commands: mldp status

RP/0/0/CPU0:GSR3#sh mpls mldp status
mldp statistics
  Process status        : Active, Running and Ready
  Multipath upstream    : Enabled
  Multipath downstream  : Enabled
  Logging notifications : Disabled
  Database count        : 12
  RIB connection status : Connected
  RIB connection open   : Yes
  TE Intact             : Disabled
  Active RIB table      : default/ipv4/unicast
  Table Name            : default
    AFI                 : IPv4
    SAFI                : Unicast
    RIB converged       : Yes
    Table ID            : E0000000
  Table Name            : default
    AFI                 : IPv4
    SAFI                : Multicast
    RIB converged       : Yes
    Table ID            : E0100000

RP/0/0/CPU0:GSR3#sh mpls mldp status standby
mldp statistics
  Process status        : Standby, Running and Ready

Configuration and show commands: mldp feature configuration

RP/0/0/CPU0:GSR3(config-ldp-mldp)#?
  logging            MLDP logging commands
  make-before-break  Make Before Break
  mofrr              MLDP MoFRR support
  no                 Negate a command or set its defaults
  recursive-fec      MLDP Recursive FEC support

mpls ldp
 mldp
  make-before-break delay 0
  mofrr
  recursive-fec
 !
!

- The MoFRR, MBB and Recursive FEC features can be selectively enabled using a Route-Policy (RPL)

Configuration and show commands: mldp root

RP/0/0/CPU0:GSR3#sh mpls mldp root
Root node : 10.0.0.14 (We are the root)
  Metric     : 0
  Distance   : 0
  FEC count  : 1
  RFEC count : 0
  Path count : 1
  Path(s)    : 10.0.0.14  LDP nbr: none
Root node : 10.0.0.15
  Metric     : 2
  Distance   : 110
  FEC count  : 1
  RFEC count : 0
  Path count : 2
  Path(s)    : 10.0.9.1   LDP nbr: 10.0.0.6:0
             : 10.0.3.1   LDP nbr: 10.0.0.1:0

- RIB information related to the root of a MP LSP

Configuration and show commands: LDP neighbour capabilities

RP/0/0/CPU0:GSR3#sh mpls ldp neighbor capabilities
Peer LDP Identifier: 10.0.0.2:0
  Capabilities:
    Sent:
      0x508 (MP: Point-to-Multipoint (P2MP))
      0x509 (MP: Multipoint-to-Multipoint (MP2MP))
      0x50b (Typed Wildcard FEC)
    Received:
      0x508 (MP: Point-to-Multipoint (P2MP))
      0x509 (MP: Multipoint-to-Multipoint (MP2MP))
      0x50b (Typed Wildcard FEC)

RP/0/0/CPU0:GSR3#sh mpls mldp neighbors 10.0.0.2
Fri Mar  9 23:19:50.327 UTC
MLDP peer ID      : 10.0.0.2:0, uptime 00:00:11 Up
  Capabilities    : Typed Wildcard FEC, P2MP, MP2MP
  Target Adj      : No
  Upstream count  : 1
  Branch count    : 7
  Label map timer : never
  Policy filter in: None
  Path count      : 1
  Path(s)         : 10.0.4.2  GigabitEthernet0/2/1/2
  LDP Adj list    : 10.0.4.2  GigabitEthernet0/2/1/2

Multipoint LDP (mldp) Conclusion
- A protocol to build P2MP and MP2MP LSPs
- Scalable due to its receiver-driven nature, like PIM
- An extension to the existing LDP protocol, reusing existing infrastructure
- Simpler than PIM because it does not support Sparse-Mode
- Current mldp features:
  - FRR over TE tunnels
  - Make Before Break
  - MoFRR
  - Recursive FEC

Questions?

Complete Your Online Session Evaluation
- Give us your feedback and you could win fabulous prizes; winners are announced daily
- Receive 20 Cisco Daily Challenge points for each session evaluation you complete
- Complete your session evaluation online now through either the mobile app or the internet kiosk stations
- Maximize your Cisco Live experience with your free Cisco Live 365 account: download session PDFs, view sessions on-demand, and participate in live activities throughout the year
- Click the Enter Cisco Live 365 button in your Cisco Live portal to log in