IP Mobility Design Considerations


CHAPTER 4

The Cisco Locator/ID Separation Protocol (LISP) technology, in extended subnet mode with OTV L2 extension on the Cloud Services Router (CSR 1000V), is utilized in this DRaaS 2.0 System. It provides IP mobility between data centers within the SP cloud for the In-Cloud (VPC-to-VPC) replication use case. The Cisco LISP Virtual Machine Mobility (LISP VM-Mobility) solution allows any host to move anywhere in the network while preserving its IP address. This capability allows members of a subnet to be dispersed across many locations without requiring any changes on the hosts, while maintaining optimal routing and scalability in the network.

LISP is a network architecture and a set of protocols that implements a new semantic for IP addressing. LISP creates two namespaces and uses two IP addresses: Endpoint Identifiers (EIDs), which are assigned to end hosts, and Routing Locators (RLOCs), which are assigned to devices (primarily routers) that make up the global routing system. Performing this separation offers several advantages, including:

- Improved routing system scalability by using topologically aggregated RLOCs
- Provider independence for devices numbered out of the EID space (IP portability)
- Low-OPEX multihoming of end sites with improved traffic engineering
- IPv6 transition functionality
- IP mobility (EIDs can move without changing; only the RLOC changes)

LISP is a simple, incremental, network-based implementation that is deployed primarily in network edge devices. It requires no changes to host stacks, DNS, or local network infrastructure, and few to no major changes to existing network infrastructures.

LISP Overview

To understand LISP, it is important to understand the concept of "location-to-identity separation."

Figure 4-1 Mobility with Location/ID Protocol Technology (the host keeps its EID x.y.z.1 while its RLOC changes from a.b.c.1 to e.f.g.7; only the location changes)
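The location/identity split shown in Figure 4-1 can be illustrated with a toy model (illustrative names only, not protocol code): the host's EID never changes, and a move updates only the RLOC it maps to.

```python
# Toy illustration of the EID/RLOC split (not real LISP code):
# the EID is the host's stable identity; the RLOC is the locator
# of the site router the host currently sits behind.

mapping_db = {"x.y.z.1": "a.b.c.1"}  # EID -> RLOC, host initially at RLOC a.b.c.1

def host_moves(eid, new_rloc):
    """On a VM move, only the EID-to-RLOC mapping is updated."""
    mapping_db[eid] = new_rloc

host_moves("x.y.z.1", "e.f.g.7")  # VM migrates to the site whose xTR is e.f.g.7

# The identity (EID) is unchanged; only the location (RLOC) changed.
print(mapping_db["x.y.z.1"])  # e.f.g.7
```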

In traditional IP routing, edge subnets are advertised all over the network using either an IGP or an EGP. Advertising a host address (/32 subnet mask) is rare; most of the time a prefix of /24 or shorter is used. Because all routes are advertised everywhere and installed in the forwarding plane, limiting the number of entries is important. As a result, IP subnets are strictly confined to a geographical area, and a subnet is managed by only one pair of routers, which act as the default gateway. This implies that if a node moves to a new location, its IP address must be updated to match the local default gateway. This constraint is strong and cumbersome; to escape it, more and more VLAN extension is seen across sites, with all the drawbacks that approach raises. With LISP, this constraint disappears: LISP splits the endpoint ID (EID) from the Routing Locator (RLOC), allowing any host to move from location to location while keeping its identity.

The LISP architecture is composed of several elements:

- ETR (Egress Tunnel Router): Registers the EID address space for which it has authority; is identified by one (or more) RLOCs; receives and de-encapsulates the LISP frames.
- Map Server: The database where all EID/RLOC associations are stored. It can be deployed on a pair of devices for small-scale implementations, or as a hierarchy of devices organized like a DNS system for large-scale implementations (LISP-DDT).
- ITR (Ingress Tunnel Router): Sends requests toward the Map Resolver; populates its local map-cache with the learned associations; is responsible for performing the LISP encapsulation.
- Map Resolver: Receives the request and selects the appropriate Map Server.
- Proxy xTR: The point of interconnection between an IP network and a LISP network, playing the roles of ITR and ETR at this peering point.

An ETR is authoritative for a subnet and registers it with the Map Server using a map-register message. When triggered on the data plane by a packet destined to a remote EID, the ITR sends a map-request toward the Map Resolver, which forwards it to the right Map Server, which in turn forwards it to the authoritative ETR. The ETR replies to the requesting ITR with a map-reply message. The map-reply contains the list of RLOCs capable of reaching the requested EID, along with their characteristics in terms of usage priority and weighted load repartition. Figure 4-2 shows the LISP-ESM deployment used within the VMDC VSA 1.0 architecture.
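The map-register / map-request / map-reply exchange described above can be sketched as a small simulation (all names and values are illustrative; this models only the message sequence, not the wire format, and collapses the Map Resolver and Map Server into one object for brevity):

```python
# Minimal sketch of the LISP control-plane flow described above.

class MapServer:
    """Stores EID-prefix registrations received from authoritative ETRs."""
    def __init__(self):
        self.registry = {}  # EID prefix -> list of (RLOC, priority, weight)

    def map_register(self, eid_prefix, rloc_set):
        # An ETR registers the prefix it is authoritative for.
        self.registry[eid_prefix] = rloc_set

    def map_request(self, eid_prefix):
        # In the real protocol the request is forwarded to the ETR,
        # which sends the map-reply to the ITR directly.
        return self.registry.get(eid_prefix)

class ITR:
    """Encapsulating router; caches mappings learned from map-replies."""
    def __init__(self, map_resolver):
        self.map_resolver = map_resolver
        self.map_cache = {}

    def forward(self, dest_eid_prefix):
        if dest_eid_prefix not in self.map_cache:                    # cache miss
            reply = self.map_resolver.map_request(dest_eid_prefix)   # map-request
            self.map_cache[dest_eid_prefix] = reply                  # map-reply cached
        return self.map_cache[dest_eid_prefix]

ms = MapServer()
ms.map_register("10.1.0.0/16", [("192.0.2.1", 1, 100)])  # ETR map-register

itr = ITR(ms)
print(itr.forward("10.1.0.0/16"))  # [('192.0.2.1', 1, 100)], now in the map-cache
```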

Figure 4-2 LISP within VMDC VSA 1.0 Architecture (non-LISP sites and the Enterprise data center connect via PE routers and eBGP across the MPLS backbone; in each SP cloud data center the CSR acts as default gateway and ZBFW for all hosts, running LISP xTR and MS/MR, with OTV/IPsec between SP Cloud Data Center 1 and Data Center 2)

LISP and OTV roles can be deployed in the network as shown in Figure 4-3. The CSR 1000V within the VPC at the source and destination data centers is used as the OTV edge device and the LISP xTR. The LISP Mapping Server and Map Resolver also reside on the CSR 1000V within the VPC at both data centers, which eliminates the need for additional devices and reduces cost. It also improves scalability, as the MS/MR database is per tenant. Figure 4-3 also shows the options for PxTR deployment; the recommendation is to deploy the PxTR on a CSR 1000V at the customer premises. Traffic to server VLANs that are part of the LISP domain can then be directed to the PxTR within the Enterprise.

Figure 4-3 LISP & OTV Roles and Deployment in the Network (PxTR options: border routers at transit points, customer backbone routers, a customer router at a co-location, or an SP-provided router/service; Mapping Servers/Resolvers: distributed across data centers, SP-managed/owned, provided as a service on a CSR 1000v per VPC; OTV: aggregation routers at the VPC, SP-managed/owned)

As an over-the-top technology, LISP has ingress (ITR) and egress (ETR) tunnel endpoints. Everything in the core between these tunnel endpoints is overlaid transparently. The only strong requirement on the core is the ability to carry a larger PDU that includes the LISP header (Figure 4-4): the transport MTU should be 1536 bytes to ensure transparency for 1500-byte PDUs. If the core cannot accommodate frames larger than the basic size, the LISP ITR supports the Path MTU Discovery (PMTUD) approach, sending an ICMP Destination Unreachable message (type 3, code 4, meaning "fragmentation needed and DF set") back to the source specified in the original IP header, leading the source to adjust its packet size to 1444 bytes. 1444 bytes is the original payload; since LISP also encapsulates the original IP header, the complete LISP packet adds up to 1444 (original payload) + 20 (inner IP header) + 8 (LISP header) + 8 (UDP header) + 20 (outer IP header) = 1500 bytes.
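The encapsulation arithmetic above can be checked directly using the header sizes of Figure 4-4 (outer IPv4, UDP, and LISP shim headers around the original IPv4 packet):

```python
# LISP-over-IPv4 encapsulation overhead, per the breakdown above.
OUTER_IPV4 = 20  # outer IPv4 header (source/destination RLOCs)
UDP = 8          # UDP header, destination port 4341
LISP = 8         # LISP shim header (flags, nonce/map-version, instance ID)
INNER_IPV4 = 20  # original (inner) IPv4 header (source/destination EIDs)

CORE_MTU = 1500
payload = CORE_MTU - (OUTER_IPV4 + UDP + LISP + INNER_IPV4)
print(payload)  # 1444: the value PMTUD signals back to the source

# A 1536-byte transport MTU leaves room to encapsulate a full 1500-byte PDU:
print(CORE_MTU + OUTER_IPV4 + UDP + LISP)  # 1536
```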

Figure 4-4 LISP Header Format (outer IPv4 header with protocol 17 and the source and destination routing locators; UDP header with destination port 4341; LISP header carrying the N, L, E, V, and I flags, the Nonce/Map-Version field, and the Instance ID; inner IPv4 header with the source and destination EIDs)

No other strict requirements apply to the core transport. Other aspects, such as QoS, may well be needed; for this, LISP copies the original DSCP value into its tunnel encapsulation header, allowing end-to-end DiffServ behavior.

The two main LISP benefits to consider are multi-tenancy and mobility. LISP is a pervasive solution, and the xTR (ITR or ETR) can be placed in multiple locations. The most suitable placement for an xTR is on the CSR 1000V within the VMDC VSA 1.0 data centers: the default gateway resides in the data center, and a dedicated CSR 1000V per tenant within the cloud provides sufficient scalability. With this model, the cloud is fully virtualized in multiple instances, with mobility across all data center sites.

Some factors may add complexity to this simple model and must be taken into consideration. One of them, and probably the most complex, is the insertion of a firewall service (Figure 4-5). No firewall currently on the market can read a LISP-encapsulated frame. To be effective, the firewall must therefore be placed "south" of the xTR, so that it applies its rules to clear-text traffic outside of any encapsulation. If the firewall is a virtual appliance, or a VSG running in the VM context, there is no concern, as the firewall service is south of the xTR. Likewise, if the firewall runs in transparent mode at the access layer, there is again no concern. Within the VMDC VSA 1.0 architecture, the CSR 1000V provides zone-based firewall capabilities and is also proposed as the xTR, so the firewall and LISP can coexist, improving the classical VMDC design with virtualization and mobility. For this reason, multi-hop LISP is not required to support the DRaaS use case of failing over workloads from one VPC to another within the cloud data centers.

Figure 4-5 LISP with Localized Firewall Services (a ZBFW deployed south of the xTR in each of DC1 and DC2)

The other consideration is LISP and SLB integration (Figure 4-6). Within the DRaaS System, the SLB virtual IP (VIP) is active at one location at a time and represents the LISP EID. The failover of workloads belonging to the load-balanced server farm does not necessarily trigger a VIP move. Consequently, if only some servers within a server farm fail over to the secondary site, the VIP remains at the original site and traffic continues to flow from that site. If all servers within a server farm fail over to the secondary site, the SLB VIP must also fail over to the secondary site to provide load-balancing services from the new site. The VIP failover can be performed manually, by touching the load balancers deployed at both sites, or by making the load balancer part of the virtual protection group along with the servers of the server farm. The idea is to fail over all the servers, together with the load balancer, contained within a protection group from one site to another. The recommendation in this document is not to run the servers of a load-balanced server farm from both primary and secondary sites in a split fashion. The VIP location is updated in LISP only when the VIP moves between sites along with the load-balanced servers.

Figure 4-6 LISP and SLB Integration (the VIP moves from DC1 to DC2 together with the server farm)
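The VIP-move rule described above (move the VIP only when the entire farm has failed over with it, and never run the farm split across sites) can be captured in a few lines; the helper below is a hypothetical sketch, not a product API:

```python
# Sketch of the recommended SLB VIP placement rule: the VIP (a LISP EID)
# moves to the secondary site only when every server of the load-balanced
# farm has failed over with it, so the farm never runs split across sites.

def vip_site(farm_sites, primary="DC1", secondary="DC2"):
    """farm_sites: the current site of each server in the farm."""
    if all(site == secondary for site in farm_sites):
        return secondary   # whole farm moved: move the VIP too
    return primary         # partial failover: VIP (and traffic) stay put

print(vip_site(["DC1", "DC1", "DC2"]))  # DC1 - split farm, VIP stays at primary
print(vip_site(["DC2", "DC2", "DC2"]))  # DC2 - full failover, VIP moves
```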