How to Achieve True Active Active Data Centre Infrastructures
2 How to Achieve True Active Active Data Centre Infrastructures, John Schaper, Technical Solutions Architect, BRKDCT-2615
3 Agenda
- Introduction
- Mapping Applications to Business Criticality Levels
- Active Active Data Centre: Storage
- Active Active Data Centre Design: Network Considerations
- Interconnecting VXLAN Fabrics for Active Active Data Centre
- LISP Active Active Data Centre Mobility
- ACI Active Active Multi-Site Considerations
- Conclusion
4 Introduction
5 Introduction: Building a True Active Active Datacenter Infrastructure. The question: why do we build multiple Data Centers? It comes down to Applications.
6 The app market transition from traditional to cloud-native: Traditional Monolithic → Multi-tier App → Cloud-Aware App; IaaS → PaaS
7 Global Data Center / Cloud Traffic Forecast
8 Hybrid Cloud and Active Active Data Centre
Enterprise Data Center (Private Cloud) plus Public Cloud (Virtual Private Cloud) form the Hybrid Cloud. Use cases:
- Workload migration, Active Active
- Rapidly provision new applications (Dev/QA)
- Cloud bursting / load balancing
- Disaster recovery
- VDI
What is the most important use case for Hybrid Cloud? Automated on-demand capacity (cloud bursting): 47%; split application architectures: 22%; disaster recovery: 22%; backup and archive: 5%; data center migration: 4%
9 CloudCenter Application Deployment: Model Once, Deploy and Manage Anywhere. One integrated platform for lifecycle management of new and existing applications across Data Center, Private Cloud and Public Cloud.
10 Data Center Network Evolution: Journey to the Fabric
Drivers: agility, workload mobility, increased app communication, higher server port density and bandwidth.
- STP-based tiered design: classic STP limitations; 50% of all links not utilized; complex to harden
- vPC-based tiered design: no STP blocked ports; full link utilization; faster convergence; macros for best practice
- VXLAN-based fabric design (2 or more spines): no STP; higher fabric bandwidth; consistent latency; the spine scales to provide fabric bandwidth, the leaf scales to provide access port density
11 Mapping Applications to Business Criticality Levels
12 Active Active DC and Cloud Terminology
The terminology around workload and business availability/continuity is not always consistent. Some examples of "Availability Zone":
- AWS: Availability Zones are distinct locations within a region that are engineered to be isolated from failures in other Availability Zones.
- OpenStack: an availability zone is commonly used to identify a set of servers that have a common attribute. For instance, if some of the racks in your data center are on a separate power source, you can put the servers in those racks in their own availability zone.
13 Availability Zone & Regions AWS Definitions Regions are large and widely dispersed into separate geographic locations. Availability Zones are distinct locations within a region that are engineered to be isolated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same region
14 Availability Zones and Regions: OpenStack Definitions
Regions: each Region has its own full OpenStack deployment, including its own API endpoints, networks and compute resources. Different Regions share one set of Keystone and Horizon to provide access control and a Web portal. (Newer deployments do not share Keystone.)
Availability Zones: inside a Region, compute nodes can be logically grouped into Availability Zones; when launching a new VM instance, we can specify an AZ, or even a specific node in an AZ, to run the VM instance.
15 In-Region and Out-of-Region Data Centers Business Continuity and Disaster Recovery Active/active Traffic intended for the failed node is either passed onto an existing node or load balanced across the remaining nodes. Active/passive Provides a fully redundant instance of each node, which is only brought online when its associated primary node fails. Out-of-Region Beyond the Blast Radius for any disaster
16 Industry Standard Measurements of Business Continuity: RTO (time to recover) and RPO (data lost)
17 Application Resiliency and Business Criticality Levels
Defining how a service outage impacts the business dictates the redundancy strategy (and cost). Each Data Center should accommodate all levels; cost is an important factor.
Lowest RTO/RPO (0 to 15 mins), ~20% of apps, 100% duplicate resources, 2x cost multiplier (most costly):
- C1 Mission Imperative: any outage results in immediate cessation of a primary function, equivalent to immediate and critical impact to revenue generation, brand name and/or customer satisfaction; no downtime is acceptable under any circumstances
- C2 Mission Critical: any outage results in immediate cessation of a primary function, equivalent to major impact to revenue generation, brand name and/or customer satisfaction
Highest RTO/RPO (15+ mins), many-to-one resource sharing, lower cost multiplier (less costly):
- C3 Business Critical: any outage results in cessation over time or an immediate reduction of a primary function, equivalent to minor impact to revenue generation, brand name and/or customer satisfaction (~20% of apps)
- C4 Business Operational: a sustained outage results in cessation or reduction of a primary function (~40% of apps)
- C5 Business Administrative: a sustained outage has little to no impact on a primary function (~20% of apps)
These correspond to the two most common RTO/RPO targets.
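To make the RTO/RPO distinction above concrete, here is a minimal sketch with a hypothetical incident timeline (the timestamps are illustrative, not from the session): RPO is the window of data lost before the failure, RTO is the downtime until service is restored.

```python
from datetime import datetime

# Hypothetical incident timeline (illustrative values):
last_replica = datetime(2017, 6, 1, 18, 45)  # last consistent copy of the data
failure      = datetime(2017, 6, 1, 19, 0)   # outage begins
service_up   = datetime(2017, 6, 1, 19, 10)  # service restored at backup site

rpo = failure - last_replica  # data lost: everything written in this window
rto = service_up - failure    # downtime experienced by the business

print(f"RPO = {rpo}, RTO = {rto}")
```

Both values here fall inside the 0-15 minute window, i.e. the C1/C2 band of the criticality table.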
18 Data Center Interconnect (DCI) Business Drivers
Data Centers are extending beyond traditional boundaries; virtualized applications are driving DCI across PODs and across Data Centers.
- Business Recovery: disaster recovery plan, GSLB, geo-clusters, HA clusters; constraint: stateless network services
- Operational Cost Containment: data center maintenance / migration / consolidation; solutions: DCI LAN extension, DCI Layer 3
- Business Continuity, Resource Optimization: disaster avoidance, workload elasticity; constraints: stateful services, bandwidth, hair-pinning, latency; solution: Metro Virtual DC with virtualization and VM mobility
- Cloud Services: inter-cloud networking, XaaS; flexibility, VM mobility, virtualization, automation
19 Active Active Data Centre Storage
20 True Active Active Data Centre Infrastructure: please don't forget the storage requirements (Primary Site and Backup Site)
21 Data Center Interconnect: SAN Extension
Synchronous I/O implies strict distance limitations. Localization of the active storage is key → a consistent LUN ID must be maintained for stateful migration (DC 1, ESX-A source → DC 2, ESX-B target).
22 SAN Extension: Synchronous vs. Asynchronous Data Replication
Synchronous data replication: the application receives the acknowledgement for I/O complete only when both the primary and the remote disks are updated. This is also known as zero-data-loss replication (zero RPO). Metro distances only (the maximum in km depends on the application).
Asynchronous data replication: the application receives the acknowledgement for I/O complete as soon as the primary disk is updated, while the copy continues to the remote disk. Unlimited distances.
23 Synchronous Data Replication: Network Latency
The speed of light is about 300,000 km/s in a vacuum; in fibre it is reduced to about 200,000 km/s → 5 μs per km (8 μs per mile). That gives an average of 1 ms for light to cross 200 km of fibre.
Synchronous replication: each SCSI (FC) write command takes two round trips, i.e. four one-way traversals → about 20 μs per km for synchronous data replication.
Example at 50 km (1 ms total): 250 μs Rec_Ready?, 250 μs wait for response, 250 μs send data, 250 μs wait for Ack, between the local and remote storage arrays.
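The latency arithmetic above can be sketched as a small model (the 5 μs/km propagation figure and the two-round-trip SCSI write are taken from the slide; the function name is illustrative):

```python
# Rough latency model for synchronous replication over fibre.
# Light in fibre travels at ~200,000 km/s, i.e. ~5 us of propagation per km.
US_PER_KM_ONE_WAY = 5

def sync_write_penalty_us(distance_km: int, round_trips: int = 2) -> int:
    """Added latency per write: each SCSI (FC) write needs two round trips
    (four one-way traversals), so ~20 us per km of separation."""
    return distance_km * US_PER_KM_ONE_WAY * 2 * round_trips

print(sync_write_penalty_us(50))  # 1000 us = 1 ms at 50 km, as on the slide
```

At 100 km the same model already gives 2 ms per write, which is why synchronous replication is limited to metro distances.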
24 Storage Deployment in DCI: Shared Storage
Improvement using Cisco IOA: improve latency using the Cisco Write Acceleration feature on the MDS fabric. L2 extension for vMotion over the core network between DC 1 (ESX-A source) and DC 2 (ESX-B target), managed by Virtual Center.
25 Active Active Data Centre: Storage on Fabric
IP-based storage: use the same fabric for storage and compute.
- L2 transport: RoCE, FCoE within VXLAN
- NFS, iSCSI, FCIP, iWARP options
Challenges:
- FCoE on VXLAN: modeling and operational
- No-drop behavior: priority pause for storage traffic
26 Active Active Data Center Design Network Considerations
27 LAN Extension for Active Active Data Centre: Technology Selection
Ethernet over dark fiber or protected DWDM (Metro style):
- VSS and vPC: dual-site interconnection
- FabricPath, VXLAN and ACI stretched fabric: multiple-site interconnection
MPLS transport (SP style):
- EoMPLS: transparent point-to-point
- VPLS: large scale and multi-tenant, point-to-multipoint
- PBB-EVPN: large scale and multi-tenant, point-to-multipoint
IP transport (Enterprise style):
- OTV: inter-site MAC routing
- LISP: for subnet extension and path optimization
Emerging:
- VXLAN/EVPN (evolving) and ACI Multi-Site / Multi-POD: A/A site interconnect (Layer 2 only, or with anycast L3 gateway)
28 Traditional vs Fabric Topologies: a traditional DC network (Layer 3 core with separate L2 domains) gives limited mobility domains; an ACI fabric DC gives a single mobility domain.
29 Active Active: Scope of the DCI Requirements
Must have:
- Failure domain containment: control-plane independence (STP domain confined inside the DC, EVPN multi-domains), control-plane MAC learning, reduced flooding, control of BUM*, site independence
- Dual-homing with independent paths
- Reduced hair-pinning: distributed L2 gateway on the ToR, localized E-W traffic (FHRP isolation, anycast L3 default gateway)
- Fast convergence
- Transport agnostic
Additional improvements:
- ARP suppression and ARP caching
- VLAN translation
- IP Multicast or non-IP-Multicast choice
- Multi-homing
- Path diversity (VLAN-based / flow-based / IP-based)
- Load balancing (A/S, VLAN-based, flow-based)
- Localized N-S traffic (for long distances): ingress path optimization (LISP), working in conjunction with egress path optimization (FHRP localization, anycast L3 gateway)
* Broadcast, Unknown Unicast and Multicast
30 DC Fabrics' Impact on DCI
- L2 services: benefit: non-IP applications; multi-DC implication: failure propagation; DCI role: domain separation, failure containment, flood control
- L3 services with mobility: benefit: any workload anywhere, optimized intra-fabric routing; multi-DC implication: host state, ambiguous routing paths; DCI role: multi-domain state reduction (summarization/default), routing optimized for symmetry, propagation of mobility semantics
In short, the DCI role covers failure containment, host state management, and route optimization with mobility.
31 Multi-Data Center Scale and Failure Containment: Active Active DCI Considerations
A single VLAN or EVPN VXLAN domain boundary sits at each DC edge (EVPN-VXLAN inside DC-1 and DC-2, OTV/LISP/EVPN between them).
Scale containment: default routes to remote destinations in the local domains; DCI devices may hold full state or pull conversational state on demand.
Without a well-defined DC edge: all state would have to be populated at all DCs, capacity planning becomes difficult and risky, and failures would propagate freely.
32 Overlay Transport Virtualization (OTV): Simplifying LAN Extensions
- Ethernet LAN extension over any network: works over dark fiber, MPLS, or IP; multi-data center scalability
- Simplified configuration and operation: seamless overlay with no network re-design; single-touch site configuration
- High resiliency: failure domain isolation; seamless multi-homing
- Maximizes available bandwidth: automated multi-pathing; optimal multicast replication
Many physical sites, one logical data center: any workload, anytime, anywhere, unleashing the full potential of compute virtualization.
33 A Quick Look at OTV
A MAC-in-IP based technology with an IS-IS based control plane for MAC learning. L2 over IP; a VLAN maps to an OTV tunnel; multi-tenancy via OTV sessions; site redundancy; gateway isolation.
34 Interconnecting VXLAN Fabrics for Active Active Data Centre
35 VXLAN Frame Format Review: 50 (54) Bytes of Overhead, MAC-in-IP Encapsulation
Underlay portion:
- Outer MAC header, 14 bytes (+ 4 bytes optional 802.1Q tag): destination MAC (next-hop), source MAC (source VTEP), VLAN type 0x8100, VLAN ID tag, EtherType 0x0800
- Outer IP header, 20 bytes: protocol 0x11 (UDP), header checksum, source and destination IP (the addresses of the source and destination VTEPs)
- UDP header, 8 bytes: source port (a hash of the inner L2/L3/L4 headers of the original frame, providing entropy for ECMP load balancing in the network), destination port 4789 (VXLAN), UDP length, checksum
Overlay portion:
- VXLAN header, 8 bytes: flags (RRRRIRRR), reserved bits, 24-bit VNI (allows for 16M possible segments), reserved byte
- Original Layer-2 frame
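As a sanity check on the field layout above, the 8-byte VXLAN header can be packed in a few lines (a sketch; the function name is illustrative, the layout follows the flags/reserved/VNI/reserved structure on the slide):

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned destination UDP port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: 1 byte flags (I bit set),
    3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    assert 0 <= vni < 2**24      # 24-bit VNI -> ~16M possible segments
    flags = 0x08                 # 'RRRRIRRR' with the I (valid-VNI) bit set
    return struct.pack("!B3xI", flags, vni << 8)  # VNI in the upper 24 bits

hdr = vxlan_header(5000)
assert len(hdr) == 8
```

Adding the outer MAC (14), outer IP (20) and UDP (8) headers to these 8 bytes gives the 50-byte overhead quoted on the slide (54 with the optional 802.1Q tag).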
36 VXLAN Inter-Fabric Connectivity Options for Active Active DC
- Multi-POD: end-to-end VXLAN EVPN; a single end-to-end VXLAN tunnel across the inter-POD network (AS#65501 / AS#65502)
- Multi-Fabric: connecting VXLAN EVPN via Ethernet to DCI (two-box); a separate VXLAN tunnel per POD with a DCI segment between them
- Multi-Site: integrating VXLAN EVPN into DCI (single box); per-POD VXLAN tunnels stitched through an inter-site encapsulation
37 What Does Stretched Fabric Mean? Host Reachability is End-to-End
- Host reachability information is distributed across all sites via route reflectors; as a result the data plane is stretched end-to-end (VXLAN tunnels are established from site to site), with traffic encapsulated and de-encapsulated on each far-end side.
- DCI leaf nodes are pure Layer 3 devices interconnecting with the remote site; they do NOT require the VTEP encap/decap function per site, though it is supported (bud node).
Three key components of a VXLAN/EVPN fabric:
- Underlay control plane: establishes L3 connectivity between VTEPs deployed across multiple locations; supports IP Multicast routing for BUM traffic
- Overlay control plane: BGP is used for VTEP and host reachability information and prefix distribution
- Host reachability information: host discovery (locally using HMM, remotely via BGP updates)
38 Simple Active Active: vPC and a Stretched VXLAN Fabric
- A stretched VXLAN fabric doesn't require VTEP-capable DCI nodes (no encap/decap functions). MAC-to-UDP encapsulation requires increasing the MTU in the transport network by at least 50 bytes; it is common practice to deploy jumbo frames intra- and inter-DC.
- When a transit leaf node also becomes a compute or service leaf, it may be part of a vPC domain (vPC configuration with anycast VTEP): it encaps/decaps (VTEP) VXLAN traffic from and to the locally attached end-nodes while acting as an IP transit device for DCI (requires bud-node support).
39 Active Active DC Control-Plane Mode: What is EVPN?
The EVPN family introduces next-generation solutions for Ethernet services:
- BGP control plane for Ethernet-segment and MAC distribution and learning, over MPLS and VXLAN data planes
- Same principles and operational experience as in IP VPNs
- No use of pseudowires; uses MP2P tunnels for unicast
- Multi-destination frame delivery via ingress replication (over MP2P tunnels) or LSM
- Multi-vendor solutions
Variants: EVPN-VPWS (P2P, draft-ietf-bess-evpn-vpws), EVPN multipoint (RFC 7432), PBB-EVPN (RFC 7623). Cisco is a leader in the industry standardization efforts (RFCs/drafts).
40 VXLAN Evolution: BGP EVPN Control Plane
Workload MAC/IP addresses are learnt by a Multi-Protocol BGP (MP-BGP) based control plane using the EVPN NLRI, which advertises Layer-2 and Layer-3 address-to-VTEP associations (iBGP adjacencies via BGP route reflectors).
Data center use cases: multi-tenancy with virtualized hosts; support for VXLAN and other encapsulations; integrated routing and bridging; exchange of IP addresses and IP prefixes.
Standards based: draft-ietf-bess-evpn-overlay, draft-ietf-bess-evpn-inter-subnet-forwarding, draft-ietf-bess-evpn-prefix-advertisement.
41 Multiprotocol BGP (MP-BGP) Primer: VXLAN Active Active DC Control Plane
Multiprotocol BGP (MP-BGP) is an extension to the Border Gateway Protocol (RFC 4760). VPN address families allow different types of routes (e.g. VPNv4, VPNv6, L2VPN EVPN, MVPN) to be transported across a single BGP peering, typically iBGP via BGP route reflectors (eBGP is supported without a route reflector).
42 EVPN DCI: Multi-Tenancy and Seamless Host Mobility at Cloud Scale
An L2 overlay from ToR to PE over an MPLS underlay (MPLS forwarding or OTV over IPv4 forwarding).
- Interoperable: standards-based BGP EVPN VXLAN
- Increased scale: eliminates flooding; policy-based updates
- Automated provisioning: end-to-end provisioning with APIC, WAE, VTS
- Optimized mobility: distributed anycast gateway; multi-homing
- Operational flexibility: L2 and/or L3/IRB; LAN and WAN
43 Overlay Control Plane with MP-BGP
MP-BGP EVPN is able to transport L2 information such as MAC addresses, but also L3 information such as host IP addresses (host routes) or IP subnets. For this purpose it uses two forms of routing advertisements:
- Type 2 (populated by HMM): announces host MAC and host IP information for endpoints directly connected to the VXLAN fabric. Extended communities: Router MAC (for the L3 VNI), sequence number.
- Type 5 (redistributed routing): advertises IP subnet prefixes or host routes (associated, for example, with locally defined loopback interfaces). Extended community: Router MAC address, uniquely identifying each VTEP node.
44 MP-BGP EVPN Control Plane: Host Advertisement
1. A host attaches (Host 1, VLAN 10, behind a leaf VTEP).
2. The attachment VTEP advertises the host's MAC address and IP address to the other VTEPs through the BGP route reflectors.
NLRI: host MAC1, IP1; NVE IP 1; VNI 5000. Extended communities: encapsulation (VXLAN, NVGRE), cost/sequence. The remote VTEPs install the entry: MAC, IP, VNI, next-hop, encap VXLAN, sequence 0.
45 VXLAN BGP-EVPN Control Plane: Host Moves
1. Host 1 (MAC1, IP1, VLAN 10) moves behind switch VTEP-3.
2. VTEP-3 detects Host 1 and advertises H1 with sequence number 1.
3. VTEP-1 sees the more recent route and withdraws its own advertisement.
NLRI: host MAC1, IP1; NVE IP; VNI 5000. Extended communities: encapsulation (VXLAN, NVGRE), cost/sequence. The remote tables now point at VTEP-3's next-hop with the higher sequence number.
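The sequence-number logic in the host-move slide can be sketched as a toy model (names like `advertise` are illustrative; the keep-the-highest-sequence rule is the EVPN MAC-mobility behavior described above):

```python
# Toy model of EVPN MAC mobility: the VTEP that detects a moved host
# re-advertises the route with a higher sequence number, and peers keep
# only the most recent advertisement.
routes = {}  # MAC -> (next_hop_vtep, seq)

def advertise(mac: str, vtep: str, seq: int) -> None:
    """Install the route only if it is newer than what we already hold."""
    if mac not in routes or seq > routes[mac][1]:
        routes[mac] = (vtep, seq)

advertise("MAC1", "VTEP-1", 0)  # host attaches behind VTEP-1
advertise("MAC1", "VTEP-3", 1)  # host moves; VTEP-3 advertises with seq 1
advertise("MAC1", "VTEP-1", 0)  # the stale update is ignored
print(routes["MAC1"])           # ('VTEP-3', 1)
```

This is why a host move converges without flooding: a single higher-sequence BGP update supersedes the old location everywhere.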
46 EVPN VXLAN Fabric: Inter Data Center Connectivity
DC #1 and DC #2 each run EVPN iBGP internally (spine route reflectors, leaf VTEPs); the border leaves interconnect via inter-DC EVPN eBGP. The VXLAN overlay carries the EVPN VRF(s) (global default VRF or user-space VRFs) end to end: one EVPN administrative domain stretched across the two data centers.
47 Data Center Interconnectivity with VXLAN EVPN
DC #1 and DC #2 are separate VXLAN EVPN administrative domains (iBGP with spine route reflectors), interconnected through a VLAN hand-off at the border leaves into an OTV/EVPN domain.
Benefits: administrative delineation; scalable, hierarchical routing; fault and event containment; visibility and telemetry at the DC edge; data-plane normalization.
48 OTV LAN Extensions for VXLAN-EVPN Fabrics: Inter Data Center Connectivity
Two VXLAN EVPN administrative domains (iBGP inside each DC, next-hop-self at the border leaves, inter-DC eBGP), with a VLAN hand-off into an OTV domain.
Benefits: scalable, hierarchical routing; fault and event containment; visibility and telemetry at the DC edge; failure domain containment for unknown unicasts, ARPs and STP.
49 Introducing VXLAN Multi-Site Fabrics: the Current Multi-Pod Solution
In the current Multi-Pod design:
- All Pods are in the same VNI administration domain
- All Pods are in the same VXLAN BUM flooding domain
- Pods are in separate VXLAN encapsulation and forwarding domains
- Each Pod can individually scale up to 256 VTEPs
- A host mobility event is propagated to all Pods
50 Current Verified Multi-Pod Design
One EVPN VNI administration domain; one BUM flooding domain. A VTEP sees all other VTEPs, in the local POD and in remote PODs, as VNI peers; an end-to-end VXLAN tunnel connects Host A in DC #1 to Host B in DC #2.
EVPN routes (e.g. Host-B: MAC-B, IP-B, BGP next-hop = VTEP) are exchanged via iBGP within each DC and multi-hop eBGP between the border VTEPs. The BGP next-hop in an EVPN route needs to be preserved end to end: intermediate MP-BGP devices must preserve the original BGP next-hop and route-targets.
51 VXLAN Multi-Pod Solution
One EVPN VNI administration domain; one BUM flooding domain. With border gateways (BGs) toward the core: each Pod has N VTEP peers; each BG installs the N local-site VTEPs plus all BG peers; each leaf VTEP installs its N local peers plus the BG peers.
52 Enhanced Multi-Pod Solution: Advantages
- More scalable VXLAN EVPN networks (more VTEPs per Pod)
- Reduced next-hop scale
- Better isolation of host-mobility events
- Route changes in one Pod are transparent to other Pods
- Reduced BGP updates and control-plane churn
53 EVPN Multi-Site Solution
Each data center is now its own EVPN VNI administration domain. EVPN routes (e.g. Host-B: MAC-B, IP-B) are re-originated at the border VTEPs, with multi-hop eBGP between sites. Downstream-allocated VNIs signal the VLAN/VNI mapping, and border discovery routes carry site-identifiers.
54 EVPN Multi-Site Solution (continued)
The end-to-end path is split into three stitched segments: VXLAN Tunnel 1 (intra-site, DC #1), VXLAN Tunnel 2 (inter-site, between border VTEPs), and VXLAN Tunnel 3 (intra-site, DC #2). Downstream-allocated VNIs signal the VLAN/VNI mapping; border discovery routes carry site-identifiers.
55 EVPN Multi-Site Solution: Domains
Each site is its own EVPN VNI domain and BUM flood domain, joined by an inter-site BUM flood domain at the border gateways (BGs). Each site has N VTEP peers; each BG installs the N local-site VTEPs plus all BG peers; each leaf VTEP installs its N local peers plus the BG peers.
56 Advantages of the EVPN Multi-Site Solution
1. Eliminates the NxN VTEP full mesh and allows scaling the number of VTEPs across the network; up to 65k (256x256) VTEPs can be supported.
2. Drastically reduces the next-hop requirement in the forwarding plane on VTEPs.
3. Fabric HA/convergence does not deteriorate linearly as the network size increases.
4. Intra-site mobility events don't cause BGP updates in remote sites.
5. Limits the number of replications per VTEP (in the EVPN/IR case) to the number of leaf VTEPs within a site.
6. VXLAN data-packet termination at the BG(s) gives immense capability to characterize traffic between sites: storm control and policies such as QoS and ACLs can be applied based on the inner payload.
7. Allows interconnecting sites with different IGPs and different PIM/VNI group mappings.
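The peer-scale reduction behind advantages 1 and 2 is simple arithmetic; a sketch with illustrative numbers (256 VTEPs per site is the per-pod limit quoted earlier; 2 BGs per site is an assumption for the example):

```python
# Per-VTEP peer count: Multi-Pod full mesh vs Multi-Site with border gateways.
def full_mesh_peers(sites: int, vteps_per_site: int) -> int:
    """Multi-Pod: every VTEP sees every other VTEP in every pod."""
    return sites * vteps_per_site - 1

def multi_site_peers(sites: int, vteps_per_site: int, bgs_per_site: int = 2) -> int:
    """Multi-Site: a leaf VTEP sees only local VTEPs plus all border gateways."""
    return (vteps_per_site - 1) + sites * bgs_per_site

print(full_mesh_peers(4, 256))   # 1023 peers per VTEP without Multi-Site
print(multi_site_peers(4, 256))  # 263 peers per VTEP with Multi-Site
```

The gap widens with every added site: the full-mesh figure grows with the total VTEP count, while the Multi-Site figure grows only with the (small) number of border gateways.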
57 VXLAN EVPN Multi-Pod / Multi-Site: the currently validated EVPN Multi-Pod design, and a new RFC draft for the enhanced EVPN Multi-Pod and EVPN Multi-Site designs.
58 BGP EVPN VXLAN Fabric and OTV Integration
Single-box bridging from L2 tunnel to L2 tunnel (VXLAN VNI to OTV/VXLAN VNI) at the border leaves / edge devices, with vPC toward the fabric (N7K 8.x, N9K F-release; requires M3/N7K, N9200 or N9300-EX; NX-OS 8.0 on N7K, TBD on N9K).
- Redistribute MACs between EVPN and OTV (IS-IS) via the ORIB
- Abstraction of the internal interface; control-plane-triggered learning
- Routing between VNIs in EVPN; downstream VNID assignment
- OTV EDs bundled in the same vPC as the border leaves (this is a function of VXLAN-EVPN)
59 VXLAN-EVPN to OTV Bridging
VXLAN-to-OTV stitching (L2), with OTV and BDI co-existence:
- Data Centers #1 and #2: VXLAN with BGP EVPN, VXLAN-to-OTV tunnel stitching (VNI-to-VNI)
- Data Center #3: legacy site integration, Ethernet with vPC (VNI-to-VLAN)
Result: seamless end-to-end DCI (L2), host mobility, and unicast and multicast across VXLAN, OTV and Ethernet.
60 Advantages of OTV and VXLAN for Active Active DC
- Ultimate operational simplicity: provisioning/CLI, DCNM automation, seamless insertion
- Scalability, failure-scenario handling and optimizations
- Maturity: shipping since 2010, thousands of mission-critical deployments
- Integrated DCI feature set: multi-homing with loop detection, site and flood suppression
- Standards-based data plane: VXLAN encapsulation in OTV
61 Active Active DC: Distributed Anycast Layer 3 Gateway (EVPN), Localized E-W Routing for DCI
1. WEB in DC-1 communicates with the DB in a different subnet in DC-2; WEB uses its local default gateway on Leaf 1 (VTEP 1).
2. While WEB is communicating with the DB (and the end user), it performs a stateful live migration.
3. WEB has moved without interruption and uses its new local default gateway on Leaf 4 (VTEP 4).
The same anycast gateway SVI is configured on every leaf (L2 VNI per VLAN, one L3 VNI per tenant VRF):
interface Vlan100
  no shutdown
  vrf member Tenant-1
  ip address /24
  fabric forwarding mode anycast-gateway
62 Distributed Gateway: ACI Fabric Mode
On an endpoint move, the SVI is removed from the original leaf and programmed on the new leaf/location. The gateway is always one hop away: identity is decoupled from location.
63 LISP Active Active Data Centre Mobility
64 Locator/ID Separation Protocol (LISP)
A routing architecture: separate address spaces for identity and location, Endpoint Identifiers (EIDs) and Routing Locators (RLOCs).
A control-plane protocol: a mapping system that maps endpoint identities (EIDs) to their current location (RLOC).
A data-plane protocol: encapsulates EID-addressed packets inside RLOC-addressed headers.
65 Locator/ID Separation Protocol: What Do We Mean by Location and Identity?
Today's IP behavior (Loc/ID overloaded semantic): a device's IPv4 or IPv6 address represents both identity and location, so when the device moves it gets a new address for its new identity and location.
LISP behavior (Loc/ID split): the device's IPv4 or IPv6 address represents identity only. When the device moves, it keeps its address, it has the same identity, and only the location changes.
66 LISP Terminology
LISP roles:
- Tunnel Routers (xTRs): edge devices in charge of encap/decap; Ingress/Egress Tunnel Routers (ITR/ETR)
- Map-Server / Map-Resolver: the MS contains the EID-to-RLOC mappings
- Proxy Tunnel Routers (PxTRs): coexistence between LISP and non-LISP sites; ingress/egress: PITR, PETR
Address spaces:
- EID (Endpoint Identifier): host IP or prefix
- RLOC (Routing Locator): IP address of routers in the backbone, used as tunnel source/destination
67 LISP Operations: Control Plane Messages
EID registration:
- Map-Register: sent by an ETR to the MS to register its associated EID prefixes; specifies the RLOC(s) to be used by the MS when forwarding Map-Requests to the ETR
Data-triggered mapping service:
- Map-Request: sent by an ITR when it needs an EID-to-RLOC mapping, to test an RLOC for reachability, or to refresh a mapping before TTL expiration
- Map-Reply: sent by an ETR in response to a valid Map-Request, providing the EID-to-RLOC mapping and site ingress policy for the requested EID
- Map-Notify: sent by the Map-Server to an ETR to acknowledge that its requested EID prefixes were registered successfully
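The register/request/reply exchange above can be sketched as a tiny mapping-system model (function and variable names are illustrative; the priority/weight locator-set format mirrors the mapping entry shown later in the packet walk):

```python
# Minimal sketch of the LISP control plane: ETRs register EID prefixes with
# the Map-Server; an ITR sends a Map-Request and receives a Map-Reply with
# the EID-to-RLOC mapping (longest-prefix match).
import ipaddress

mapping_system = {}  # EID prefix -> list of (RLOC, priority, weight)

def map_register(eid_prefix: str, rlocs) -> None:
    """ETR -> MS: register the RLOC set for an EID prefix."""
    mapping_system[ipaddress.ip_network(eid_prefix)] = rlocs

def map_request(eid: str):
    """ITR -> MS/ETR: resolve an EID to its locator set (the Map-Reply)."""
    addr = ipaddress.ip_address(eid)
    matches = [p for p in mapping_system if addr in p]
    best = max(matches, key=lambda p: p.prefixlen)  # longest prefix wins
    return mapping_system[best]

# Two RLOCs with equal priority and weight 50 each, as in the packet walk:
map_register("10.1.0.0/24", [("172.16.1.1", 1, 50), ("172.16.2.1", 1, 50)])
print(map_request("10.1.0.42"))
```

Because resolution is on demand (pull), an ITR only holds state for destinations it is actually talking to, which is the scalability property the next slide emphasizes.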
68 Active Active DC: LISP Properties
- On-demand routing (pull model): BGP and OSPF use a push model; the pull model is massively scalable, with forwarding state proportional to router capacity
- Simple to deploy: incrementally deployable; no host changes; end systems are unaware of LISP
- Address-family agnostic: IPv4 or IPv6 EIDs, IPv4 or IPv6 RLOCs
- IP number portability: never renumber again; no DNS changes; session survivability
- Open standard (RFC)
69 A LISP Packet Walk: How Does LISP Operate?
1. The client resolves the destination D via DNS (entry: D.abc.com).
2. The ITR at the LISP site queries the mapping system for the destination EID prefix.
3. The mapping entry returns the locator set for the EID prefix, e.g. two RLOCs each with priority 1 and weight 50 (D1 in West-DC, D2 in East-DC): the EID-to-RLOC mapping. This policy is controlled by the destination site.
4. The ITR encapsulates toward the chosen RLOC across the IP network and the ETR decapsulates; non-LISP sites reach LISP sites via a PITR.
70 LISP Operations: Registration (Push Mode). The ETRs at West-DC and East-DC register their EID prefixes with the Map-Server.
71 LISP Operations: Resolution (Pull Mode). The ITR at the LISP site sends a Map-Request toward the Map-Server, which forwards it to the registering ETR (West-DC or East-DC) to answer with a Map-Reply.
72 LISP Operations: Data Plane. A connectionless tunnel between ITR and ETR forms the overlay for Active Active DC operations (LISP site to West-DC / East-DC).
73 Secure Hybrid Cloud with LISP on CSR. The Enterprise Data Center (RLOC: local DC) connects over the Internet to a public cloud provider VPC (RLOC: cloud), with branch offices over the WAN (WAAS optional): non-intrusive insertion of a LISP-enabled CSR with VPN.
74 Secure Hybrid Cloud with LISP on CSR (continued): cold migration of the VM into the cloud.
75 Secure Hybrid Cloud with LISP on CSR (continued): local DC traffic is extended, intra- and inter-subnet.
76 LISP Data Center / Host Mobility: Host Move Scenarios (see the LISP Host Mobility Config Guide)
Moves with LAN extension (routing for extended subnets): active-active data centers; distributed data centers; application members distributed, with broadcasts across sites.
Moves without LAN extension (IP mobility across subnets): disaster recovery; cloud bursting; application members in one location (a DR location or a cloud provider DC).
77 Moving vs. Distributing Workloads: Why Do We Really Need LAN Extensions?
Move workloads with IP mobility solutions (LISP host mobility): IP preservation is the real requirement, and LAN extensions are not mandatory, since the control traffic is routable over the IP network.
Distribute workloads with LAN extensions (OTV): application high availability with distributed clusters (GeoClusters) relies on non-IP application traffic such as heartbeats.
78 Active Active DC: Live Moves or Cold Moves
Live (hot) moves preserve existing connections and state (e.g. vMotion, cluster failover). They require synchronous storage and network policy replication → distance limitations.
Cold moves bring machines down and back up elsewhere (e.g. Site Recovery Manager). No state preservation: less constrained by distance or service capabilities.
Mobility can be across PODs within a site or across different locations.
79 Active Active DC Live Moves Extended Firewall Clusters All Active FW cluster extended across locations LAN extensions for heartbeats, state sync and redirection within the cluster FW state is synchronized across all cluster members All members active LISP steers traffic to different locations Flows existing prior to the move will be redirected within the FW cluster (over the LAN extension) New flows will be instantiated on the FWs at the new site LISP LISP Extended cluster LAN Extension DC1 DC2 79
80 Live Moves or Cold Moves Services Live Moves Established before the move Established after the move Services Cold Moves LISP LISP LISP LISP LAN Extension DC1 DC2 Redirection of established flows: - Extended Clusters - Cluster or LISP based re-direction DC1 DC2 IP preservation → Uniform Policies 80
81 LISP and Services Integration Active/Standby Units Deployed in Each Site LISP site FWs in separated sites work independently Layer 3 Core Stateless Active/Active scenario FWs in different sites are not synced Policies have to be replicated between sites VIP ESX VIP Workload/Server Farm Moves ESX No state information maintained between sites VIP on LB must be moved between independent pairs Will drop previously established sessions after live workload move (i.e. vMotion) Data Center 1 Data Center 2 Positioned for cold migration scenarios (like Disaster Recovery for example) 81
82 LISP Active Active DC SLB Virtual-IP (VIP) Failover VIP is active at one location at a time VIP location is advertised in LISP VIP may failover on failure or change active device on machine moves VIP becomes active at a new site LISP VIP LISP VIP VIP activity is detected by the VM-mobility logic LAN Extension LAN Extension VIP location is updated in LISP on failover DC1 DC2 82
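The failover logic above amounts to updating a single EID-to-RLOC mapping. A toy Python sketch of that idea (VIP and RLOC names are illustrative, not from the session):

```python
# Toy model of LISP SLB VIP failover: the mapping system records which
# site's RLOC currently "owns" the VIP; a failover or workload move
# updates that one mapping, so new flows are steered to the new site.
mapping_db = {"vip-203.0.113.10": "rloc-dc1"}  # VIP active in DC1

def resolve(eid: str) -> str:
    """ITR lookup: where should traffic for this EID be encapsulated to?"""
    return mapping_db[eid]

def vip_moved(eid: str, new_rloc: str) -> None:
    """Detected by the VM-mobility logic; update the VIP's location."""
    mapping_db[eid] = new_rloc

print(resolve("vip-203.0.113.10"))         # rloc-dc1
vip_moved("vip-203.0.113.10", "rloc-dc2")  # VIP fails over to DC2
print(resolve("vip-203.0.113.10"))         # rloc-dc2
```

Existing flows still need the LAN extension (redirection within the extended cluster); only new flows follow the updated mapping.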
83 Dissection of LISP mobility functionality Modular functionality: Fixed menu or a-la-carte 1. Overlay Control Plane (Signaling) * 2. Overlay Data Plane (Encap-decap) * 3. Mobility detection ** 4. First hop routing fix-ups ** IP Core * Site Gateway (SG): LISP encap/decap LISP signaling Menu of functionality: Consume some or all May be distributed over multiple hops Mobile Host ** First-Hop Router (FHR) : Move Detection Local Routing Fix-up 83
84 LISP mobility consumption models Modular functionality: Fixed menu or a-la-carte SG+FHR IP SG Advertise Host Routes Redistribute LISP to IGP SG FHR Single-Hop (SH) Multi-Hop (MH) IGP Assist (SH or MH) Fabric Integrated SG: Encap/decap SG: LISP Signaling FHR: Move Detection FHR: Local Routing Fix-up
85 LISP Host-mobility IGP Assist Dynamic Host Routes Installed by LISP and redistributed into the IGP L3 Core R1: FHR roamer (lands in other Active DC) Host routing end to end LISP provides host mobility detection LISP provides signaling to guide IGP convergence The IGP propagates host routes received from LISP No LISP encapsulation involved 85
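The IGP Assist behaviour described above might be configured along these lines on an NX-OS first-hop router; this is a hedged sketch only, and the interface, prefixes, map-notify group and route-map names are illustrative assumptions:

```
! Sketch: LISP detects a roaming host on the dynamic-EID subnet and the
! resulting /32 is redistributed into OSPF (no LISP encapsulation used).
feature lisp

lisp dynamic-eid WEB-SUBNET
  database-mapping 10.1.1.0/24 192.0.2.1 priority 1 weight 50
  map-notify-group 239.1.1.1        ! notifies local & remote FHRs

interface Vlan100
  lisp mobility WEB-SUBNET
  lisp extended-subnet-mode          ! subnet is extended across sites
  ip address 10.1.1.254/24

route-map LISP-HOST-ROUTES permit 10

router ospf 10
  redistribute lisp route-map LISP-HOST-ROUTES
```

The detecting site installs the /32 LISP interface route and advertises it into the IGP; the original site removes it, so the core converges on the more specific host route.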
86 LISP IGP Assist with LAN Extension 5 OSPF Advertisements (withdraw old host route) Map-Notify 4 Map Server LISP Registration/ Notifications 3 Map-Register 2 OSPF Advertisements (New Host Route) Redistribute LISP routes into IGP 5 Remove /32 lisp interface route FH1 LAN Extension FH2 1 Map-Notify roamer (lands other Active DC) 2 Redistribute LISP routes into IGP Install /32 lisp interface route 86
87 LISP Standalone Fabric Integration NXOS 7.2 Benefits: Ingress Routing Optimization Edge Router Scale Branch WAN/Multi-homing Host detection done by the fabric LISP WAN SG LISP mobility triggered by reception of host routes MP-BGP to LISP handoff VXLAN EVPN (Standalone) Fabric Protocol (e.g. BGP) Site Gateway (SG): LISP encap/decap LISP signaling VPNv4 Fabric based Mobility: Move Detection Local Routing Fix-up BGP host advertisement 87
88 LISP VXLAN Fabric Integration EVPN Multi-pod scenario (diagram): a roamer lands in a foreign pod. Host routes propagate as iBGP within each pod and as eBGP with a sequence community between pods (BGP EVPN with VXLAN control plane). The new pod sends a Map-Register to the Mapping System, which sends Map-Notify messages toward the original pod; the original pod withdraws its BGP host route and installs a Null0-LISP entry, and the spines performing LISP encap/decap remap the moved /32 from RLOC set A,B,C,D to RLOC set E,F,G,H.
89 Active Active DC Inbound Path Optimization Design Considerations
- WAN Edge devices must be deployed in a different AS than the one used for the VXLAN EVPN fabric
- Inbound traffic destined to endpoints connected in a given fabric should always come in via the local connection to the WAN Edge devices
  - Leverage a route-map with AS-prepend toward the local WAN Edge devices to steer traffic for local endpoints
- Outbound traffic should always exit via the local connection to the WAN Edge devices
  - Configure a higher local-pref (200) for all prefixes toward the local WAN Edge devices. Use the same local-pref 200 on the L3-DCI connection for all endpoints connected to remote fabrics (default 100 for all the other prefixes learned via the L3-DCI connection)
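The steering described above could be expressed with IOS-style BGP policy roughly as follows; AS numbers, neighbor addresses and prefix-list contents are illustrative assumptions, not values from the session:

```
! Outbound: prefer the local WAN Edge for all prefixes (local-pref 200)
route-map FROM-LOCAL-WAN-EDGE permit 10
 set local-preference 200

! Inbound: prepend the fabric AS on prefixes re-advertised for remote
! fabrics, so the WAN prefers each fabric's own entry point
ip prefix-list REMOTE-ENDPOINTS seq 5 permit 10.2.0.0/16 le 32

route-map TO-LOCAL-WAN-EDGE permit 10
 match ip address prefix-list REMOTE-ENDPOINTS
 set as-path prepend 65001 65001
route-map TO-LOCAL-WAN-EDGE permit 20

router bgp 65001
 neighbor 192.0.2.1 remote-as 65000   ! local WAN Edge, different AS
  address-family ipv4 unicast
   route-map TO-LOCAL-WAN-EDGE out
   route-map FROM-LOCAL-WAN-EDGE in
```

The effect is symmetric paths: traffic for local endpoints enters and exits via the local WAN Edge, while remote-fabric prefixes look worse (prepended) to the WAN and worse (lower default local-pref) to the fabric.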
90 Whitepapers + Design Guidance VXLAN + MPLS BorderPE (Provider Edge) Design: VXLAN + LISP InterNetworking Design: TECDCT
91 ACI Active Active fabric design options
92 ACI Network Security Policy vs Traditional WEB POLICY APP POLICY Application Owner Network Engineer DB Traditional DC Network
93 ACI Host Routing - Inside Inline Hardware Mapping DB - 1,000,000+ hosts Global Station Table contains a local cache of the fabric endpoints Proxy Station Table contains addresses of all hosts attached to the fabric Local Station Table contains addresses of all hosts attached directly to the Leaf The Forwarding Table on the Leaf Switch is divided between local (directly attached) and global entries The Leaf global table is a cached portion of the full global table If an endpoint is not found in the local cache the packet is forwarded to the default forwarding table in the spine switches (1,000,000+ entries in the spine forwarding table)
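The two-level lookup above can be modeled with a toy Python sketch: a leaf checks its local station table, then its cached global entries, and only punts to the spine proxy's full mapping database on a miss. Table contents and names are illustrative, not from the session.

```python
# Full endpoint database held by the spine proxies (1M+ entries at scale).
SPINE_PROXY_DB = {"10.0.0.1": "leaf1", "10.0.0.2": "leaf3", "10.0.0.9": "leaf6"}

class Leaf:
    def __init__(self, name, local_hosts):
        self.name = name
        # Local station table: directly attached endpoints.
        self.local = {ip: "local-port" for ip in local_hosts}
        # Global station table: cached slice of the spine proxy database.
        self.global_cache = {}

    def forward(self, dst_ip):
        if dst_ip in self.local:
            return f"deliver on {self.local[dst_ip]}"
        if dst_ip in self.global_cache:
            return f"encap to {self.global_cache[dst_ip]}"
        # Miss in both tables: forward toward the spine proxy.
        owner = SPINE_PROXY_DB.get(dst_ip)
        if owner:
            self.global_cache[dst_ip] = owner  # learn for subsequent packets
            return f"spine-proxy to {owner}"
        return "drop (unknown endpoint)"

leaf1 = Leaf("leaf1", ["10.0.0.1"])
print(leaf1.forward("10.0.0.1"))  # deliver on local-port
print(leaf1.forward("10.0.0.2"))  # spine-proxy to leaf3
print(leaf1.forward("10.0.0.2"))  # encap to leaf3 (now cached)
```

The point of the design is that leaves only need hardware table space for what they actually talk to, while the spines hold the authoritative full table.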
94 ACI MultiPod/MultiSite Active Active Use Cases Single Site Multi-Fabric Multiple Fabrics connected within the same DC (between halls, buildings, within the same Campus location) Cabling limitations, HA requirements, Scaling requirements Single Region Multi-Fabric (Classic Active/Active scenario) Scoped by the application mobility domain of 10 msec RTT BDs/IP Subnets can be stretched between sites Desire is reducing as much as possible fate sharing across sites, yet maintaining operational simplicity Multi Region Multi-Fabric Disaster Recovery Minimal Cross Site Communication Deployment of applications not requiring Layer 2 adjacency 94
95 ACI Multi-Fabric Design Options Single APIC Cluster/Single Domain Stretched Fabric Multiple APIC Clusters/Multiple Domains Dual-Fabric Connected (L2 and L3 Extension) ACI Fabric Site 1 Site 2 ACI Fabric 1 ACI Fabric 2 L2/L3 Multi-POD Multi-Site POD A IP Network POD n Site A IP Network Site n MP-BGP - EVPN MP-BGP - EVPN APIC Cluster 95
96 Stretched ACI Fabric ACI Stretched Fabric DC Site 1 DC Site 2 vCenter Fabric stretched to two sites → works as a single fabric deployed within a DC One APIC cluster → one management and configuration point Anycast GW on all leaf switches Transit leaf Transit leaf Works with one or more transit leafs per site → any leaf node can be a transit leaf Number of transit leafs and links dictated by redundancy and bandwidth capacity decisions 96
97 Stretched ACI Fabric Support for 3 Interconnected Sites Site 2 Transit leafs in all sites connect to the local and remote spines Site 1 Transit Leaf Site 3 2x40G or 4x40G 97
98 ACI Multi-POD Solution Overview Inter-POD Network POD A POD n MP-BGP - EVPN IS-IS, COOP, MP-BGP Single APIC Cluster IS-IS, COOP, MP-BGP Multiple ACI PODs connected by an IP Inter- POD L3 network, each POD consists of leaf and spine nodes Managed by a single APIC Cluster Single Management and Policy Domain Forwarding control plane (IS-IS, COOP) fault isolation Data Plane VXLAN encapsulation between PODs End-to-end policy enforcement 98
99 ACI Multi-POD Solution Use Cases Handling 3-tier physical cabling layouts Cabling constraints (multiple buildings, campus, metro) require a second tier of spines Preferred option when compared to ToR FEX deployment Leaf Nodes POD Spine Nodes Inter-POD Network Evolution of Stretched Fabric design Metro Area (dark fiber, DWDM), L3 core >2 interconnected sites POD 1 POD 2 APIC Cluster DB Web/App Web/App 99
100 ACI Multi-POD Solution Supported Topologies Intra-DC: PODs connected at 40G/100G. Two DC sites connected back-to-back: 10G/40G/100G dark fiber/DWDM (up to 10 msec RTT). 3 DC sites: PODs interconnected over 10G/40G/100G dark fiber/DWDM (up to 10 msec RTT). Multiple sites interconnected by a generic L3 network (up to 10 msec RTT). 100
101 ACI Multi-POD Solution Inter-POD Network (IPN) Requirements Not managed by APIC, must be pre-configured IPN topology can be arbitrary; not mandatory to connect to all spine nodes Main requirements: 40G/100G interfaces to connect to the spine nodes Multicast BiDir PIM → needed to handle BUM traffic DHCP Relay to enable spine/leaf node discovery across PODs OSPF to peer with the spine nodes and learn VTEP reachability Increased MTU support to handle VXLAN encapsulated traffic QoS (to prioritize intra APIC cluster communication) 101
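The IPN requirements above might translate into an NX-OS-style template along these lines on an IPN node facing a spine; a hedged sketch only, where addresses, the BiDir RP, the OSPF process name and the relay target are illustrative assumptions:

```
! IPN node facing a POD spine: OSPF peering, PIM BiDir for BUM,
! DHCP relay for cross-POD auto-discovery, jumbo MTU for VXLAN.
feature ospf
feature pim
feature dhcp

! Phantom-RP style BiDir RP for the BUM multicast range
ip pim rp-address 192.168.100.1 group-list 225.0.0.0/15 bidir

interface Ethernet1/1
  description to POD-A spine
  mtu 9150                          ! room for VXLAN-encapsulated frames
  ip address 192.168.1.1/31
  ip router ospf IPN area 0.0.0.0
  ip ospf network point-to-point
  ip pim sparse-mode
  ip dhcp relay address 10.0.0.1    ! APIC TEP address in the seed POD
```

QoS to protect intra-APIC-cluster traffic would be layered on top of this; it is omitted here for brevity.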
102 ACI Multi-POD Solution APIC Cluster Deployment APIC cluster is stretched across multiple PODs Central Mgmt for all the PODs (VTEP address, VNIDs, class-ids, GIPo, etc.) Centralized policy definition POD A POD B Recommended not to connect more than two APIC nodes per POD (due to the creation of three replicas per shard) The first APIC node connects to the Seed POD DB Web/App MP-BGP - EVPN APIC Cluster Web/App Drives auto-provisioning for all the remote PODs PODs can be auto-provisioned and managed even without a locally connected APIC node 102
103 ACI Multi-POD Solution Inter-PODs MP-BGP EVPN Control Plane MP-BGP EVPN used to communicate Endpoint (EP) and Multicast Group information between PODs All remote POD entries associated to a Proxy VTEP next-hop address Leaf Leaf Proxy B Proxy B Proxy A MP-BGP - EVPN Proxy A Proxy A Leaf 4 Leaf 6 Proxy B Single BGP AS across all the PODs BGP EVPN on multiple spines in a POD (minimum of two for redundancy) APIC Cluster Some spines may also provide the route reflector functionality (one in each POD) 103
104 Multi-POD and GOLF Project Intra-DC Deployment Control Plane Public BD subnets advertised to GOLF devices with the external spine-proxy TEP as Next-Hop Host routes* for endpoint belonging to public BD subnets MP-BGP EVPN Control Plane GOLF Devices IPN WAN WAN routes received on the POD spines as EVPN routes and translated to VPNv4/VPNv6 routes with the spine proxy TEP as Next-Hop DB Web/App Web/App Single APIC Domain Cluster Multiple PODs... Web/App DB *Not available at FCS 104
105 Multi-POD and GOLF Project Multi-DC Deployment Control Plane Host routes for endpoint belonging to public BD subnets in POD A GOLF devices inject host routes into the WAN or register them in the LISP database Host routes for endpoint belonging to public BD subnets in POD B MP-BGP EVPN Control Plane POD A IPN L3 network MP-BGP EVPN Control Plane POD B DB Web/App Single APIC Cluster Web/App DB 105
106 ACI Multi-POD Solution Summary - ACI Multi-POD solution represents the natural evolution of the Stretched Fabric design - Combines the advantages of a centralized mgmt and policy domain with fault domain isolation (each POD runs independent control planes) - Control and data plane integration with WAN Edge devices (Nexus 7000/7700, ASR 9000 and ASR 1000) completes and enriches the solution - The solution is planned to be available in Q3CY16 and will be released with a companion Design Guide 106
107 ACI Dual-Fabric Solution ACI Fabric 1 ACI Fabric 2 L2/L3 DCI Independent ACI Fabrics interconnected via L2 and L3 DCI technologies Each ACI Fabric is independently managed by a separate APIC cluster Separate Management and Policy Domains Data Plane VXLAN encapsulation terminated at the edge of each Fabric VLAN hand-off to the DCI devices for providing Layer 2 extension service Requires to classify inbound traffic for providing end-to-end policy extensibility 107
108 Multi-Fabric Scenarios Multi-Site Fabric A mbgp - EVPN Fabric B DB Web/App Multiple APIC Clusters Web/App Web1 Web2 Import Web & App from Fabric B Export Web & App to Fabric A Web1 Web2 App1 db1 App2 Export Web, App, DB to Fabric B Import Web, App, DB from Fabric A App1 db1 App2 db2
109 Multi-Site Fabrics Reachability Fabric A mbgp - EVPN Fabric B DB Web/App Multiple APIC Clusters Web/App Host Level Reachability Advertised between Fabrics via BGP Transit Network is IP Based Host Routes do not need to be advertised into transit network Policy Context is carried with packets as they traverse the transit IP Network Forwarding between multiple Fabrics is allowed (not limited to two sites)
110 Multi-Site Fabrics Policy EPG policy is exported by source site to desired peer target site fabrics Fabric A advertises which of its endpoints it allows other sites to see Target site fabrics selectively imports EPG policy from desired source sites Fabric B controls what it wants to allow its endpoints to see in other sites Policy export between multiple Fabrics is allowed (not limited to two sites) Site A Site B Web1 Web2 Import Web & App from Fabric B Export Web & App to Fabric A Web1 Web2 App1 db1 App2 Export Web, App, DB to Fabric B Import Web, App, DB from Fabric A App1 db1 App2 db2
111 Scope of Policy Fabric A mbgp - EVPN Fabric B DB Web/App Multiple APIC Clusters Web/App Web App Web App Policy is applied at provider of the contract (always at fabric where the provider endpoint is connected) Rationale: Scoping of changes No need to propagate all policies to all fabrics Different policy applied based on source EPG (which fabric)
112 Multi-Fabric with End-to-End Policy Fabric A IP Network Fabric B mbgp - EVPN DB Web App VTEP IP Group Policy VNID Tenant Packet Web1 Web2 Web1 Web2 App1 App2 App1 App2 DB1 DB1 DB2
113 Multi-Fabric Solution Fabric A mbgp - EVPN Translation of VTEP, GBP-ID, VNID (scoping of name spaces) Fabric B DB Web/App Multiple APIC Clusters Web/App Leaf to Leaf VTEP, class-id is local to the Fabric Site to Site VTEP traffic (VTEP, VNID and class-id are mapped on the spine) Leaf to Leaf VTEP, class-id is local to the Fabric VTEP IP Group Policy VNID Tenant Packet VTEP IP Group Policy VNID Tenant Packet VTEP IP Group Policy VNID Tenant Packet Extend connectivity, policy enforcement across multiple fabrics Scale extension with VXLAN and BGP-EVPN Maintain separate name spaces with ID translation
114 ACI Infrastructure: Multi-Site Multiple Fabrics, APIC Clusters, with a Multi-Site Controller spanning Site 1 and Site 2. Design questions: Active-Active or Active-Standby deployment? Spread application components across multiple sites? Low or high cross-site traffic? Central policy exchange? Orchestration?
115 Multi-Fabric Scenarios Availability Zone Models Availability Zone A Site A Multi-Fabric Controller Web GUI Rest API Availability Zone B Site B DB Web/App Web/App Each fabric has its own APIC Cluster Each fabric manages its own resources Each fabric is considered a different availability zone The Multi-Fabric controller pushes cross-fabric configuration to multiple APIC clusters. Configuration change control is critical to maintain separation of availability zones.
116 Active Active ACI Fabric - Multi-Fabric Controller APIC (per fabric): fabric discovery and bring-up; central configuration and management for the fabric; integration with third parties; maintains runtime data (VTEP address, VNID, Class_ID, GIPo etc.); does not participate in the fabric data plane and control plane. Multi-Fabric Controller: central configuration point for the multi-fabric solution; propagates configuration to multiple APIC clusters; end-to-end visibility and troubleshooting; no runtime data (configuration repository only); does not participate in the fabric data plane and control plane.
117 LISP ACI Fabric Integration Benefits: Ingress Routing Optimization Edge Router Scale WAN Multi-homing Fabric to WAN handoff MP-BGP/EVPN over VXLAN Spine ↔ WAN router Advertise Local Host Routes LISP mobility triggered by reception of host routes LISP Branch MP-BGP/ VXLAN WAN N7K Hosts Fabric based Mobility: Move Detection Local Routing Fix-up COOP Registration Site Gateway (SG): LISP encap/decap LISP signaling Spine N9500 Leaf N9300
118 LISP ACI Fabric Integration Branch Spine N9500 MP-BGP/ VXLAN Hosts LISP N7K WAN MP-eBGP-EVPN ivxlan LISP N7K Hosts Site Gateway (SG): LISP encap/decap LISP signaling MP-BGP/ VXLAN Spine N9500 Leaf N9300 Multi-Fabric Scenario portrayed, can accommodate single fabric scenarios as well Leaf N9300 Fabric based Mobility: Move Detection Local Routing Fix-up COOP Registration 118
119 Conclusion
120 It's all about the Applications Network Gateway considerations Fate Sharing L2 over L3 Storage Distance and Latency IP based storage FC based storage Mobility and Optimal Routing Application state, policy and DC Services 120
121 Active Active Reference Slides
122 ASR9k MACSEC Phase 1: XR Release Use cases Use-case #1: Link MACSEC in MPLS/IP Topology Use-case #2: MACSEC over LAG members CE PE ASR9k P PE CE MACSEC on LAG Member link Inheritance MACSEC Links Use-case #3 CE Port Mode MACSEC over L2VPN MACSEC Links Use-case #4 VLAN Clear Tags MACSEC over L2VPN ASR9k MKA ASR9k ASR9k MKA ASR9k CE port mode L2VPN CE/WAN port mode CE CE vlan clear-tags L2VPN CE/WAN vlan clear-tags CE MACSEC Links MACSEC Links
123 Network Service placement for Metro Distances A/S stateful devices stretched across 2 locations nominal workflow Historically this has been well accepted for most Metro Virtual DC (Twin-DC) deployments Almost 80% of Twin-DCs follow this model L3 Core Outside VLAN FW FT and session synch Inside VLAN Network Services are usually active on primary DC Distributed pair of Act/Sby FW & SLB on each location Additional VLAN Extended for state synchronization between peers Source NAT for SLB VIP VIP Src-NAT VIP VLAN SLB session synch Note: With traditional pair cluster this scenario is limited to 2 sites Front-end VLAN Subnet A Back-end VLAN Subnet A Primary DC-1 Secondary DC-2
124 Network Service placement for Metro Distances Ping-Pong impact with A/S stateful devices stretched across 2 locations L3 Core Historically limited to Network services and HA clusters offering stateful failover & fast convergences Not optimum, but has been usually accepted to work in degraded mode with predictable mobility of Network Services under short distance VIP Src-NAT Outside VLAN Inside VLAN VIP VLAN Front-end VLAN FW failover to remote site Source NAT for SLB VIP Consider +/- 1 ms for each round trip for 100 km For Secured multi-tier software architecture, it is possible to measure + 10 round-trips from the initial client request up to the result. Interface tracking optionally enabled to maintain active security and network services on the same site Subnet A Back-end VLAN Subnet A Primary DC km +/- 1 ms per round trip Secondary DC-2
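The latency budget cited above can be sanity-checked with a quick back-of-the-envelope sketch, assuming light propagates in fiber at roughly 200,000 km/s (refractive index ~1.5); real paths add serialization and equipment delay, so treat these as lower bounds:

```python
# Rough propagation-delay estimates for metro DCI distances.
FIBER_KM_PER_MS = 200_000 / 1000  # ~200 km of fiber per millisecond, one way

def rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over a fiber path."""
    return 2 * distance_km / FIBER_KM_PER_MS

def multi_tier_penalty_ms(distance_km: float, round_trips: int) -> float:
    """Added latency when a multi-tier transaction crosses sites N times."""
    return round_trips * rtt_ms(distance_km)

# 100 km of fiber ~= 1 ms per round trip, as cited in the slide; a secured
# multi-tier exchange with 10 cross-site round trips pays ~10 ms extra.
print(rtt_ms(100))                     # ~1.0
print(multi_tier_penalty_ms(100, 10))  # ~10.0
```

This is why the slides stress keeping the whole multi-tier framework, and its active services, on the same site: each ping-pong between sites multiplies the per-round-trip penalty.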
125 Network Service placement for Metro Distances Additional Ping-Pong impact with IP mobility between 2 locations Network team is not necessarily aware of the Application/VM mobility Uncontrolled degraded mode with unpredictable mobility of Network Services and Applications VIP Src-NAT L3 Core Outside VLAN Inside VLAN VIP VLAN FW failover to remote site Front-end server farm moves to remote site Source NAT for SLB VIP maintains the return path thru the Active SLB Partial move of a server farm is not optimized Understand and identify the multi-tier frameworks Front-end VLAN Subnet A Back-end VLAN Subnet A Primary DC km +/- 1 ms per round trip Secondary DC-2
126 Network Service placement for Metro Distances Stateful Devices and Trombone effect for IP Mobility between 2 locations Limited relation between server team (VM mobility) and Network Team (HSRP Filtering) and Service Team (FW, SLB, IPS..) Ping-Pong effect with active services placement may impact the performances Src-NAT L3 Core Outside VLAN Inside VLAN VIP VLAN It is preferred to migrate the whole multi-tier framework and enable FHRP filtering to reduce the trombone effect FHRP filtering is ON on the Front-end & Back-end side gateways Source NAT for SLB VIP maintains the return path thru the Active SLB Understand and identify the multi-tier frameworks Front-end VLAN HSRP Filter Subnet A Back-end VLAN Subnet A Primary DC km +/- 1 ms per round trip Secondary DC-2
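The FHRP (HSRP) filtering mentioned above is commonly implemented on the OTV edge devices in two parts: drop HSRP hellos toward the overlay and keep the HSRP virtual MACs out of OTV advertisements. A hedged NX-OS-style sketch (VLAN ranges, list and overlay names are illustrative; the MAC ranges cover the well-known HSRPv1/v2 virtual MACs):

```
! (1) VLAN ACL to drop HSRP hello traffic on the extended VLANs
ip access-list HSRP_UDP
  10 permit udp any 224.0.0.2/32 eq 1985     ! HSRPv1 hellos
  20 permit udp any 224.0.0.102/32 eq 1985   ! HSRPv2 hellos
vlan access-map HSRP_Localize 10
  match ip address HSRP_UDP
  action drop
vlan filter HSRP_Localize vlan-list 100-110

! (2) Keep HSRP virtual MACs out of OTV IS-IS MAC advertisements
mac-list HSRP_VMAC seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
mac-list HSRP_VMAC seq 20 deny 0000.0c9f.f000 ffff.ffff.f000
mac-list HSRP_VMAC seq 30 permit 0000.0000.0000 0000.0000.0000
route-map FHRP_Filter permit 10
  match mac-list HSRP_VMAC
otv-isis default
  vpn Overlay0
    redistribute filter route-map FHRP_Filter
```

With this in place each site keeps a local active default gateway, which is what reduces the trombone effect for traffic sourced by migrated workloads.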
127 Network Service placement for Metro Distances Intelligent placement of Network Services based on IP Mobility localization Improving relations between silo'ed organizations increases workflow efficiency Reduce tromboning with active services placement L3 Core Outside VLAN Inside VLAN Move the FW Context associated to the application of interest Interface Tracking to maintain the stateful devices in same location when possible Return traffic keeps symmetrical via the stateful devices and source-NAT Intra-DC Path Optimization almost achieved, however Ingress Path Optimization may be required VIP Src-NAT Subnet A VIP VLAN Front-end VLAN Back-end VLAN HSRP Filter VIP Src-NAT Subnet A Silo'ed organisations Server/app Network/HSRP filter service & security Storage Primary DC km +/- 1 ms per round trip Secondary DC-2
128 Network Service placement for long distances Active/Standby Network Services per Site with Extended LAN (Hot Live migration) L3 Core Subnet is extended with the VLAN Src NAT on each FW is mandatory No Config nor State Synch Per-box per-site configuration Ingress Path Optimization can be initiated to reduce trombone effect due to active services placement With Ingress Path optimization the current Active TCP sessions are interrupted and re-established on the remote site routed mode Src NAT IP Localization Inside VLAN HSRP Filter routed mode Src NAT Extend the VLAN of interests FW and SLB maintain stateful session per DC. No real limit in term of number of DC Granular migration is possible only using LISP or IGP Assist or RHI (if the Enterprise owns the L3 core) Front-End VLAN Subnet A Subnet A Back-End VLAN DC-1 Move the whole apps framework (Front-End and Back-End) DC-2
129 Single ASA Clustering stretched across Multiple DC Case 1: LISP Extended Subnet Mode with ASA Clustering (Stateful Live migration with LAN extension) M-DB Update your Table L3 Core ITR One-way symmetric establishment is achieved via the CCL Current active sessions are maintained stateful New sessions are optimized dynamically Up to 10 ms max one-way latency; extend the ASA cluster for config and state synch Subnet A HSRP Filter App is located in ETR-DC-2 ETR CCL Extension Owner HSRP Filter Data VLAN Extension App has moved Subnet A HSRP Filter ETR HSRP Filter Director 1 - End-user sends Request to App 2 - ITR intercepts the Req and checks the localization 3 - MS relays the Req to the ETR of interest, which replies that the location for Subnet A is ETR DC-1; ITR encaps the packet and sends it to RLOC ETR-DC-1 4 - LISP Multi-hop notifies ETR on DC-2 about the move of the App 5 - Meanwhile ETR DC-2 informs MS about the new location of App 6 - MR updates ETR DC-1 7 - ETR DC-1 updates its table (App:Null0) 8 - ITR sends traffic to ETR DC-1 9 - ETR DC-1 replies with a Solicit Map Req 10 - ITR sends a Map Req and redirects the Req to ETR DC-2 DC-1 DC-2
130 Single ASA Clustering stretched across Multiple DC Case 2: LISP Across Subnet Mode with ASA Clustering (Cold migration with Layer 3) M-DB Update your Table L3 Core ITR Servers can migrate with the same IP identifiers (LISP ASM takes care of it) Cold migration implies the server has to restart Business continuity implies the user has to re-establish a new session TCP session is re-initiated Up to 10 ms max one-way latency; extend the ASA cluster for config synch ETR Owner Subnet A CCL Extension App has moved Subnet B ETR 1 - End-user sends Request to App 2 - ITR intercepts the Req and checks the localization 3 - MS relays the Req to the ETR of interest, which replies that the location for Subnet A is ETR DC-1; ITR encaps the packet and sends it to RLOC ETR-DC-1 4 - LISP Multi-hop notifies ETR on DC-2 about the move of App 5 - ETR DC-2 informs MS about the new location of App 6 - MR updates ETR DC-1 7 - ETR DC-1 updates its table (App:Null0) 8 - ITR sends traffic to ETR DC-1 9 - ETR DC-1 replies with a Solicit Map Req 10 - ITR sends a Map Req and redirects the flow to ETR DC-2 DC-1 DC-2
131 Single ASA Clustering stretched across Multiple DC Case 3: IGP Assist ESM (Stateful Live migration with LAN extension) LISP Control Plane User Data Plane Subnet A / 25 Redistribute LISP route into IGP Remove /32 lisp interface route Subnet A HSRP Filter DC-1 Owner HSRP Filter L3 Core Active sessions are maintained stateful CCL Extension LISP Map-notify Local & remote FHR OTV VLAN Extension Subnet A HSRP Filter DC-2 Subnet A / 24 Host route /32 HSRP Filter New sessions are optimized Director Redistribute LISP route into IGP Install /32 lisp interface route Selective and Automatic notification IGP Assist is available with the function LISP FHR (N7k) It offers a very granular optimization (per-host /32) It requires the L3 core to support host routes (usually the Enterprise owns the L3 network) With the ASA cluster assist, active sessions are maintained, new sessions are optimized 1 - End-user sends Request to App 2 - Request is attracted by DC-1 (best metric) for subnet A (/25) 3 - App moves from DC-1 to DC-2 and continues sending traffic outside (stateful live migration) 4 - LISP IGP Assist detects the new EID and map-notifies all LISP FHRs (DC-2 and DC-1) via the LAN extension 5 - Meanwhile the LISP First Hop Routers (FHRs) redistribute the LISP routes into the IGP 6 - On DC-2, where detection has been made, it installs a /32 LISP Interface Route for the host (EID) 7 - On DC-1, the original location of the EID, it removes the /32 LISP Interface Route for the host 8 - The host route is propagated to the core attracting the traffic to its destination (more specific route) Active sessions are maintained stateful; new sessions are optimized
132 Single ASA Clustering stretched across Multiple DC Case 5: IGP Assist ASM (Cold Migration) LISP Control Plane User Data Plane Subnet B / 24 Subnet A / 25 Redistribute LISP route into IGP Remove /32 lisp interface route Subnet A HSRP Filter DC-1 Active / Active DC Sessions are distributed CCL Map-Notify HSRP Filter L3 Core M-DB Map-Register LISP Map-notify all local FHR Subnet B HSRP Filter DC-2 Subnet A / 24 Host route /32 HSRP Filter Sessions are redirected to the DC of interest CCL Subnet B / 25 Redistribute LISP route into IGP Install /32 lisp interface route Selective and Automatic notification IGP Assist is available with the function LISP FHR (N7k) It offers a very granular optimization (per-host /32) It requires the L3 core to support host routes (usually the Enterprise owns the L3 network) Current sessions are interrupted and new sessions are optimized 1 - End-user sends Request to App 2 - Request is attracted by DC-1 (best metric) for subnet A (/25) and DC-2 is primary DC for Subnet B 3 - App migrates and restarts on DC-2 (Cold move) 4 - LISP IGP Assist detects the new EID and map-notifies its local LISP XTR (DC-2) 5 - LISP FHR DC-2 redistributes LISP routes into the IGP 6 - Installs a /32 LISP Interface Route for the host (EID) 7 - Sends a Map-Register for the new EID to the MS 8 - MS sends a Map-Notify to LISP FHR (DC-1) 9 - LISP FHR DC-1 redistributes LISP routes into the IGP and removes the /32 LISP Interface Route for the host 10 - The host route is propagated to the core attracting the traffic to its destination (more specific route) New sessions are optimized
133 Active Active Data Center Storage Considerations
134 Storage Deployment in DCI Option 1 - Shared Storage Core Network L2 extension for vmotion Network DC 1 DC 2 ESX-A source Initiator ESX-B target Virtual Center Volumes Target
135 Storage Deployment in DCI Shared Storage Improvement Using Cisco IOA Core Network L2 extension for vmotion Network DC 1 DC 2 ESX-A source ESX-B target Virtual Center Improve Latency using Cisco Write Acceleration feature on MDS Fabric
136 Storage Deployment in DCI Option 2 - NetApp FlexCache (Active/Cache) Core Network L2 extension for vMotion Network DC 1 DC 2 NAS Read Write? ESX-A source data 3 ACK 2 data Temp Cache data 2 4 ACK Write Read 1 data ESX-B target Virtual Center FlexCache does NOT act as a write-back cache FlexCache responds to the Host only if/when the original subsystem has ACKed to it No imperative need to protect a FlexCache from a power failure
137 Storage Deployment in DCI Option 3 - EMC VPLEX Metro (Active/Active) Hosts at both sites instantly access Distributed Virtual Volume Synchronization starts at Distributed Volume creation Synchronous Latency Distributed Virtual Volume Fibre Channel DC A WRITEs are protected on storage at both Site A and B READs are serviced from VPLEX cache or local storage. DC B
138 Storage Deployment in DCI Option 3 - EMC VPLEX Metro (Active/Active) Core Network L2 extension for vMotion Network DC 1 DC 2 ESX-A source Initiator ESX-B target Virtual Center From the Host VPLEX Virtual Layer From the Storage EMC VMAX Initiator Target Target VPLEX Engine LUNv Synchronous latency requirement: 5 ms max VPLEX Engine EMC CLARiiON LUNv
139 Storage Deployment in DCI Option 4 - EMC VPLEX Geo (Active/Active) Active/Active Storage Virtualization Platform for the Private and Hybrid Cloud Enables workload mobility over long, asynchronous distances using Microsoft Hyper-V OTV WAAS L3 Network Infrastructure WAAS OTV Distributed Virtual Volumes SAN DC A VPLEX Geo cluster EMC or non-EMC Storage Asynchronous latency up to 50 ms max Uses existing WAN bandwidth, which can be optimized Windows Failover Cluster is used to provide HA features, Live Migration and Cluster Shared Volume capability VPLEX Geo cluster EMC or non-EMC Storage DC B SAN
140 Q & A
141 Complete Your Online Session Evaluation Give us your feedback and receive a Cisco Live 2017 Cap by completing the overall event evaluation and 5 session evaluations. All evaluations can be completed via the Cisco Live Mobile App. Caps can be collected Friday 10 March at Registration. Learn online with Cisco Live! Visit us online after the conference for full access to session videos and presentations. BRKDCT
142 Thank you
More informationDeploying LISP Host Mobility with an Extended Subnet
CHAPTER 4 Deploying LISP Host Mobility with an Extended Subnet Figure 4-1 shows the Enterprise datacenter deployment topology where the 10.17.1.0/24 subnet in VLAN 1301 is extended between the West and
More informationVXLAN Overview: Cisco Nexus 9000 Series Switches
White Paper VXLAN Overview: Cisco Nexus 9000 Series Switches What You Will Learn Traditional network segmentation has been provided by VLANs that are standardized under the IEEE 802.1Q group. VLANs provide
More informationImplementing VXLAN. Prerequisites for implementing VXLANs. Information about Implementing VXLAN
This module provides conceptual information for VXLAN in general and configuration information for layer 2 VXLAN on Cisco ASR 9000 Series Router. For configuration information of layer 3 VXLAN, see Implementing
More informationMulti-site Datacenter Network Infrastructures
Multi-site Datacenter Network Infrastructures Petr Grygárek rek 2009 Petr Grygarek, Advanced Computer Networks Technologies 1 Why Multisite Datacenters? Resiliency against large-scale site failures (geodiversity)
More informationINTRODUCTION 2 DOCUMENT USE PREREQUISITES 2
Table of Contents INTRODUCTION 2 DOCUMENT USE PREREQUISITES 2 LISP MOBILITY MODES OF OPERATION/CONSUMPTION SCENARIOS 3 LISP SINGLE HOP SCENARIO 3 LISP MULTI- HOP SCENARIO 3 LISP IGP ASSIT MODE 4 LISP INTEGRATION
More informationOptimizing Layer 2 DCI with OTV between Multiple VXLAN EVPN Fabrics (Multifabric)
White Paper Optimizing Layer 2 DCI with OTV between Multiple VXLAN EVPN Fabrics (Multifabric) What You Will Learn This document describes how to achieve a VXLAN EVPN multifabric design by integrating Virtual
More informationIP Mobility Design Considerations
CHAPTER 4 The Cisco Locator/ID Separation Protocol Technology in extended subnet mode with OTV L2 extension on the Cloud Services Router (CSR1000V) will be utilized in this DRaaS 2.0 System. This provides
More informationMulti-Site Use Cases. Cisco ACI Multi-Site Service Integration. Supported Use Cases. East-West Intra-VRF/Non-Shared Service
Cisco ACI Multi-Site Service Integration, on page 1 Cisco ACI Multi-Site Back-to-Back Spine Connectivity Across Sites Without IPN, on page 8 Bridge Domain with Layer 2 Broadcast Extension, on page 9 Bridge
More informationIP Fabric Reference Architecture
IP Fabric Reference Architecture Technical Deep Dive jammon@brocade.com Feng Shui of Data Center Design 1. Follow KISS Principle Keep It Simple 2. Minimal features 3. Minimal configuration 4. Configuration
More informationData Center Configuration. 1. Configuring VXLAN
Data Center Configuration 1. 1 1.1 Overview Virtual Extensible Local Area Network (VXLAN) is a virtual Ethernet based on the physical IP (overlay) network. It is a technology that encapsulates layer 2
More informationWhite Paper ACI Multi-Pod White Paper 2016 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.
White Paper ACI Multi-Pod White Paper 2016 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 43 Contents Introduction... 3 Overview of ACI Multi-Pod...
More informationMobility and Virtualization in the Data Center with LISP and OTV
Mobility and Virtualization in the Data Center with LISP and OTV Victor Moreno, Distinguished Engineer Agenda Mobility and Virtualization in the Data Center Introduction to LISP LISP Data Center Use Cases
More informationConfiguring VXLAN EVPN Multi-Site
This chapter contains the following sections: About VXLAN EVPN Multi-Site, on page 1 Licensing Requirements for VXLAN EVPN Multi-Site, on page 2 Guidelines and Limitations for VXLAN EVPN Multi-Site, on
More informationEthernet VPN (EVPN) and Provider Backbone Bridging-EVPN: Next Generation Solutions for MPLS-based Ethernet Services. Introduction and Application Note
White Paper Ethernet VPN (EVPN) and Provider Backbone Bridging-EVPN: Next Generation Solutions for MPLS-based Ethernet Services Introduction and Application Note Last Updated: 5/2014 Ethernet VPN (EVPN)
More informationVXLAN Design with Cisco Nexus 9300 Platform Switches
Guide VXLAN Design with Cisco Nexus 9300 Platform Switches Guide October 2014 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 39 Contents What
More informationVXLAN Deployment Use Cases and Best Practices
VXLAN Deployment Use Cases and Best Practices Azeem Suleman Solutions Architect Cisco Advanced Services Contributions Thanks to the team: Abhishek Saxena Mehak Mahajan Lilian Quan Bradley Wong Mike Herbert
More informationVXLAN Multipod Design for Intra-Data Center and Geographically Dispersed Data Center Sites
White Paper VXLAN Multipod Design for Intra-Data Center and Geographically Dispersed Data Center Sites May 17, 2016 Authors Max Ardica, Principal Engineer INSBU Patrice Bellagamba, Distinguish System Engineer
More informationBESS work on control planes for DC overlay networks A short overview
BESS work on control planes for DC overlay networks A short overview Jorge Rabadan IETF99, July 2017 Prague 1 Agenda EVPN in a nutshell BESS work on EVPN for NVO3 networks EVPN in the industry today Future
More informationEthernet VPN (EVPN) in Data Center
Ethernet VPN (EVPN) in Data Center Description and Design considerations Vasilis Stavropoulos Sparkle GR EVPN in Data Center The necessity for EVPN (what it is, which problems it solves) EVPN with MPLS
More informationProvisioning Overlay Networks
This chapter has the following sections: Using Cisco Virtual Topology System, page 1 Creating Overlays, page 2 Creating Network using VMware, page 4 Creating Subnetwork using VMware, page 4 Creating Routers
More informationConfiguring VXLAN EVPN Multi-Site
This chapter contains the following sections: About VXLAN EVPN Multi-Site, page 1 Guidelines and Limitations for VXLAN EVPN Multi-Site, page 2 Enabling VXLAN EVPN Multi-Site, page 2 Configuring VNI Dual
More informationBuilding Data Center Networks with VXLAN EVPN Overlays Part I
BRKDCT-2949 Building Data Center Networks with VXLAN EVPN Overlays Part I Lukas Krattiger, Principal Engineer Cisco Spark How Questions? Use Cisco Spark to communicate with the speaker after the session
More informationData Center Interconnect Solution Overview
CHAPTER 2 The term DCI (Data Center Interconnect) is relevant in all scenarios where different levels of connectivity are required between two or more data center locations in order to provide flexibility
More informationConfiguring VXLAN EVPN Multi-Site
This chapter contains the following sections: About VXLAN EVPN Multi-Site, page 1 Licensing Requirements for VXLAN EVPN Multi-Site, page 2 Guidelines and Limitations for VXLAN EVPN Multi-Site, page 2 Enabling
More informationService Graph Design with Cisco Application Centric Infrastructure
White Paper Service Graph Design with Cisco Application Centric Infrastructure 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 101 Contents Introduction...
More informationNetwork Virtualization in IP Fabric with BGP EVPN
EXTREME VALIDATED DESIGN Network Virtualization in IP Fabric with BGP EVPN Network Virtualization in IP Fabric with BGP EVPN Version 2.0 9035383 February 2018 2018, Extreme Networks, Inc. All Rights Reserved.
More informationContents. EVPN overview 1
Contents EVPN overview 1 EVPN network model 1 MP-BGP extension for EVPN 2 Configuration automation 3 Assignment of traffic to VXLANs 3 Traffic from the local site to a remote site 3 Traffic from a remote
More informationVXLAN EVPN Multi-Site Design and Deployment
White Paper VXLAN EVPN Multi-Site Design and Deployment 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 55 Contents What you will learn... 4
More informationVXLAN Cisco and/or its affiliates. All rights reserved. Cisco Public
VXLAN Presentation ID 1 Virtual Overlay Encapsulations and Forwarding Ethernet Frames are encapsulated into an IP frame format New control logic for learning and mapping VM identity (MAC address) to Host
More informationDisclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme
NET1350BUR Deploying NSX on a Cisco Infrastructure Jacob Rapp jrapp@vmware.com Paul A. Mancuso pmancuso@vmware.com #VMworld #NET1350BUR Disclaimer This presentation may contain product features that are
More informationHuawei CloudEngine Series. VXLAN Technology White Paper. Issue 06 Date HUAWEI TECHNOLOGIES CO., LTD.
Issue 06 Date 2016-07-28 HUAWEI TECHNOLOGIES CO., LTD. 2016. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of
More informationHPE FlexFabric 5940 Switch Series
HPE FlexFabric 5940 Switch Series EVPN Configuration Guide Part number: 5200-2002b Software version: Release 25xx Document version: 6W102-20170830 Copyright 2017 Hewlett Packard Enterprise Development
More informationCisco ACI Multi-Site Architecture
White Paper Cisco ACI Multi-Site Architecture 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 53 Contents Introduction... 3 Cisco ACI Multi-Site
More informationEXTREME VALIDATED DESIGN. Network Virtualization in IP Fabric with BGP EVPN
EXTREME VALIDATED DESIGN Network Virtualization in IP Fabric with BGP EVPN 53-1004308-07 April 2018 2018, Extreme Networks, Inc. All Rights Reserved. Extreme Networks and the Extreme Networks logo are
More informationSolution Guide. Infrastructure as a Service: EVPN and VXLAN. Modified: Copyright 2016, Juniper Networks, Inc.
Solution Guide Infrastructure as a Service: EVPN and VXLAN Modified: 2016-10-16 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net All rights reserved.
More informationDesigning Mul+- Tenant Data Centers using EVPN- IRB. Neeraj Malhotra, Principal Engineer, Cisco Ahmed Abeer, Technical Marke<ng Engineer, Cisco
Designing Mul+- Tenant Data Centers using EVPN- IRB Neeraj Malhotra, Principal Engineer, Cisco Ahmed Abeer, Technical Marke
More informationExtending ACI to Multiple Sites: Dual Site Deployment Deep Dive
Extending ACI to Multiple Sites: Dual Site Deployment Deep Dive Patrice Bellagamba (pbellaga@cisco.com), Distinguished Systems Engineer BRKACI-3503 Agenda Multi-Data Center Design Options Stretched Fabric
More informationLTRDCT-2781 Building and operating VXLAN BGP EVPN Fabrics with Data Center Network Manager
LTRDCT-2781 Building and operating VXLAN BGP EVPN Fabrics with Data Center Network Manager Henrique Molina, Technical Marketing Engineer Matthias Wessendorf, Technical Marketing Engineer Cisco Spark How
More informationExtreme Networks How to Build Scalable and Resilient Fabric Networks
Extreme Networks How to Build Scalable and Resilient Fabric Networks Mikael Holmberg Distinguished Systems Engineer Fabrics MLAG IETF TRILL Cisco FabricPath Extreme (Brocade) VCS Juniper QFabric IEEE Fabric
More informationPluribus Data Center Interconnect Validated
Design Guide Pluribus Data Center Interconnect Validated Design Guide www.pluribusnetworks.com Terminology Reference This is a glossary of acronyms and terms used throughout this document. AS BFD BGP L2VPN
More informationCisco ACI Multi-Site Fundamentals Guide
First Published: 2017-08-10 Last Modified: 2017-10-09 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)
More informationACI Transit Routing, Route Peering, and EIGRP Support
ACI Transit Routing, Route Peering, and EIGRP Support ACI Transit Routing This chapter contains the following sections: ACI Transit Routing, on page 1 Transit Routing Use Cases, on page 1 ACI Fabric Route
More informationLARGE SCALE IP ROUTING LECTURE BY SEBASTIAN GRAF
LARGE SCALE IP ROUTING LECTURE BY SEBASTIAN GRAF MODULE 07 - MPLS BASED LAYER 2 SERVICES 1 by Xantaro MPLS BASED LAYER 2 VPNS USING MPLS FOR POINT-TO-POINT LAYER 2 SERVICES 2 by Xantaro Why are Layer-2
More informationVerified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k)
Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k) Overview 2 General Scalability Limits 2 Fabric Topology, SPAN, Tenants, Contexts
More informationIntroduction to Segment Routing
Segment Routing (SR) is a flexible, scalable way of doing source routing. Overview of Segment Routing, page 1 How Segment Routing Works, page 2 Examples for Segment Routing, page 3 Benefits of Segment
More informationDeploy Application Load Balancers with Source Network Address Translation in Cisco DFA
White Paper Deploy Application Load Balancers with Source Network Address Translation in Cisco DFA Last Updated: 1/27/2016 2016 Cisco and/or its affiliates. All rights reserved. This document is Cisco
More informationDNA SA Border Node Support
Digital Network Architecture (DNA) Security Access (SA) is an Enterprise architecture that brings together multiple building blocks needed for a programmable, secure, and highly automated fabric. Secure
More informationCisco HyperFlex Systems
White Paper Cisco HyperFlex Systems Install and Manage Cisco HyperFlex Systems in a Cisco ACI Environment Original Update: January 2017 Updated: March 2018 Note: This document contains material and data
More informationCisco VTS. Enabling the Software Defined Data Center. Jim Triestman CSE Datacenter USSP Cisco Virtual Topology System
Cisco Virtual Topology System Cisco VTS Enabling the Software Defined Data Center Jim Triestman CSE Datacenter USSP jtriestm@cisco.com VXLAN Fabric: Choice of Automation and Programmability Application
More informationVerified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k)
Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k) Overview 2 General Scalability Limits 2 Fabric Topology, SPAN, Tenants, Contexts
More informationData Center InterConnect (DCI) Technologies. Session ID 20PT
Data Center InterConnect (DCI) Technologies Session ID 20PT Session Objectives The main goals of this session are: Highlighting the main business requirements driving Data Center Interconnect (DCI) deployments
More informationSP Datacenter fabric technologies. Brian Kvisgaard System Engineer CCIE SP #41039
SP Datacenter fabric technologies Brian Kvisgaard System Engineer CCIE SP #41039 VMDC 2.1 DC Container Architecture Simplified architecture Services on the stick design modification (Core/Agg handoff)
More informationFeature Information for BGP Control Plane, page 1 BGP Control Plane Setup, page 1. Feature Information for BGP Control Plane
Feature Information for, page 1 Setup, page 1 Feature Information for Table 1: Feature Information for Feature Releases Feature Information PoAP diagnostics 7.2(0)N1(1) Included a new section on POAP Diagnostics.
More informationPrepAwayExam. High-efficient Exam Materials are the best high pass-rate Exam Dumps
PrepAwayExam http://www.prepawayexam.com/ High-efficient Exam Materials are the best high pass-rate Exam Dumps Exam : 642-997 Title : Implementing Cisco Data Center Unified Fabric (DCUFI) Vendor : Cisco
More informationVirtual Extensible LAN and Ethernet Virtual Private Network
Virtual Extensible LAN and Ethernet Virtual Private Network Contents Introduction Prerequisites Requirements Components Used Background Information Why you need a new extension for VLAN? Why do you chose
More informationMigration from Classic DC Network to Application Centric Infrastructure
Migration from Classic DC Network to Application Centric Infrastructure Kannan Ponnuswamy, Solution Architect, Cisco Advanced Services Acronyms IOS vpc VDC AAA VRF STP ISE FTP ToR UCS FEX OTV QoS BGP PIM
More informationARISTA DESIGN GUIDE Data Center Interconnection with VXLAN
ARISTA DESIGN GUIDE Data Center Interconnection with VXLAN Version 1.0 November 2014 The requirement to operate multiple, geographically dispersed data centers is a fact of life for many businesses and
More informationLocation ID Separation Protocol. Gregory Johnson -
Location ID Separation Protocol Gregory Johnson - grjohnso@cisco.com LISP - Agenda LISP Overview LISP Operations LISP Use Cases LISP Status (Standards and in the Community) Summary 2 LISP Overview 2010
More informationBorder Provisioning Use Case in VXLAN BGP EVPN Fabrics - Multi-Site
Border Provisioning Use Case in VXLAN BGP EVPN Fabrics - Multi-Site This chapter explains LAN Fabric border provisioning using EVPN Multi-Site feature. Overview, page 1 Prerequisites, page 1 Limitations,
More informationLecture 7 Advanced Networking Virtual LAN. Antonio Cianfrani DIET Department Networking Group netlab.uniroma1.it
Lecture 7 Advanced Networking Virtual LAN Antonio Cianfrani DIET Department Networking Group netlab.uniroma1.it Advanced Networking Scenario: Data Center Network Single Multiple, interconnected via Internet
More informationInternet Engineering Task Force (IETF) Request for Comments: N. Bitar Nokia R. Shekhar. Juniper. J. Uttaro AT&T W. Henderickx Nokia March 2018
Internet Engineering Task Force (IETF) Request for Comments: 8365 Category: Standards Track ISSN: 2070-1721 A. Sajassi, Ed. Cisco J. Drake, Ed. Juniper N. Bitar Nokia R. Shekhar Juniper J. Uttaro AT&T
More informationConfiguring MPLS and EoMPLS
37 CHAPTER This chapter describes how to configure multiprotocol label switching (MPLS) and Ethernet over MPLS (EoMPLS) on the Catalyst 3750 Metro switch. MPLS is a packet-switching technology that integrates
More informationVXLAN EVPN Multihoming with Cisco Nexus 9000 Series Switches
White Paper VXLAN EVPN Multihoming with Cisco Nexus 9000 Series Switches 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 27 Contents Introduction...
More informationACI Fabric Endpoint Learning
White Paper ACI Fabric Endpoint Learning 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 45 Contents Introduction... 3 Goals of this document...
More informationCisco Dynamic Fabric Automation Architecture. Miroslav Brzek, Systems Engineer
Cisco Dynamic Fabric Automation Architecture Miroslav Brzek, Systems Engineer mibrzek@cisco.com Agenda DFA Overview Optimized Networking Fabric Properties Control Plane Forwarding Plane Virtual Fabrics
More informationCisco Virtualized Workload Mobility Introduction
CHAPTER 1 The ability to move workloads between physical locations within the virtualized Data Center (one or more physical Data Centers used to share IT assets and resources) has been a goal of progressive
More informationModeling an Application with Cisco ACI Multi-Site Policy Manager
Modeling an Application with Cisco ACI Multi-Site Policy Manager Introduction Cisco Application Centric Infrastructure (Cisco ACI ) Multi-Site is the policy manager component used to define intersite policies
More informationConfiguring MPLS, MPLS VPN, MPLS OAM, and EoMPLS
CHAPTER 43 Configuring MPLS, MPLS VPN, MPLS OAM, and EoMPLS This chapter describes how to configure multiprotocol label switching (MPLS) and Ethernet over MPLS (EoMPLS) on the Cisco ME 3800X and ME 3600X
More informationLayer 3 IP Multicast Architecture and Design in Cisco ACI Fabric
White Paper Layer 3 IP Multicast Architecture and Design in Cisco ACI Fabric What You Will Learn Many enterprise data center applications require IP multicast support and rely on multicast packet delivery
More informationCampus Fabric. How To Integrate With Your Existing Networks. Kedar Karmarkar - Technical Leader BRKCRS-2801
Campus Fabric How To Integrate With Your Existing Networks Kedar Karmarkar - Technical Leader Campus Fabric Abstract Is your Campus network facing some, or all, of these challenges? Host Mobility (w/o
More informationMPLS VPN--Inter-AS Option AB
The feature combines the best functionality of an Inter-AS Option (10) A and Inter-AS Option (10) B network to allow a Multiprotocol Label Switching (MPLS) Virtual Private Network (VPN) service provider
More information21CTL Disaster Recovery, Workload Mobility and Infrastructure as a Service Proposal. By Adeyemi Ademola E. Cloud Engineer
21CTL Disaster Recovery, Workload Mobility and Infrastructure as a Service Proposal By Adeyemi Ademola E. Cloud Engineer 1 Contents Introduction... 5 1.2 Document Purpose and Scope...5 Service Definition...
More informationCloud Data Center Architecture Guide
Cloud Data Center Architecture Guide Modified: 2018-08-21 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net Juniper Networks, the Juniper Networks
More informationData Center Interconnection
Dubrovnik, Croatia, South East Europe 20-22 May, 2013 Data Center Interconnection Network Service placements Yves Louis TSA Data Center 2011 2012 Cisco and/or its affiliates. All rights reserved. Cisco
More informationMPLS VPN Inter-AS Option AB
First Published: December 17, 2007 Last Updated: September 21, 2011 The feature combines the best functionality of an Inter-AS Option (10) A and Inter-AS Option (10) B network to allow a Multiprotocol
More informationOpen Compute Network Operating System Version 1.1
Solution Guide Open Compute Network Operating System Version 1.1 Data Center Solution - EVPN with VXLAN 2016 IP Infusion Inc. All Rights Reserved. This documentation is subject to change without notice.
More informationDemand-Based Control Planes for Switching Fabrics
Demand-Based Control Planes for Switching Fabrics Modern switching fabrics use virtual network overlays to support mobility, segmentation, and programmability at very large scale. Overlays are a key enabler
More informationSegmentation. Threat Defense. Visibility
Segmentation Threat Defense Visibility Establish boundaries: network, compute, virtual Enforce policy by functions, devices, organizations, compliance Control and prevent unauthorized access to networks,
More informationCisco Application Centric Infrastructure and Microsoft SCVMM and Azure Pack
White Paper Cisco Application Centric Infrastructure and Microsoft SCVMM and Azure Pack Introduction Cisco Application Centric Infrastructure (ACI) is a next-generation data center fabric infrastructure
More informationEvolving your Campus Network with. Campus Fabric. Shawn Wargo. Technical Marketing Engineer BRKCRS-3800
Evolving your Campus Network with Campus Fabric Shawn Wargo Technical Marketing Engineer BRKCRS-3800 Campus Fabric Abstract Is your Campus network facing some, or all, of these challenges? Host Mobility
More informationReal World ACI Deployment and Migration Kannan Ponnuswamy, Solutions Architect BRKACI-2601
Real World ACI Deployment and Migration Kannan Ponnuswamy, Solutions Architect BRKACI-2601 Icons and Terms APIC Application Policy Infrastructure Controller (APIC) Cisco Nexus 9500 Cisco Nexus 9300 Nexus
More informationDCI. DataCenter Interconnection / Infrastructure. Arnaud Fenioux
DCI DataCenter Interconnection / Infrastructure Arnaud Fenioux What is DCI? DataCenter Interconnection Or DataCenter Infrastructure? 2 From interconnection to infrastructure Interconnection Dark fiber
More informationRouting Design. Transit Routing. About Transit Routing
Transit Routing, page 1 L3Out Ingress Policy Enforcement, page 16 L3Out MTU Considerations, page 20 Shared L3Outs, page 22 L3Out Router IDs, page 27 Multiple External Connectivity, page 30 Transit Routing
More informationBest Practices come from YOU Cisco and/or its affiliates. All rights reserved.
Best Practices come from YOU 2 Apple iphone4 launched in June 2010 3 Antennagate 4 IPHONE4 Best Practices from CUSTOMERS 5 vpc Best Practices and Design on NXOS Nazim Khan, CCIE#39502 (DC/SP) Technical
More informationCisco Configuring Cisco Nexus 7000 Switches v3.1 (DCNX7K)
Course Overview View Course Dates & Register Today This course is designed for systems and field engineers who configure the Cisco Nexus 7000 Switch. This course covers the key components and procedures
More informationLISP Locator/ID Separation Protocol
LISP Locator/ID Separation Protocol Hernán Contreras G. Consulting Systems Engineer hcontrer@cisco.com LISP Next Gen Routing Architecture Locator-ID Separation Protocol (LISP) Elevator Pitch LISP is a
More informationFlexible Data Centre Fabric - FabricPath/TRILL, OTV, LISP and VXLAN
Flexible Data Centre Fabric - FabricPath/TRILL, OTV, LISP and VXLAN Ron Fuller CCIE #5851 (R&S/Storage) Technical Marketing Engineer, Nexus 7000 rfuller@cisco.com Agenda The Evolving Data Centre Fabric
More informationCisco Evolved Programmable Network Implementation Guide for Large Network with End-to-End Segment Routing, Release 5.0
Cisco Evolved Programmable Network Implementation Guide for Large Network with End-to-End Segment Routing, Release 5.0 First Published: 2017-06-22 Americas Headquarters Cisco Systems, Inc. 170 West Tasman
More informationEVPN Multicast. Disha Chopra
EVPN Multicast Disha Chopra Agenda EVPN Multicast Optimizations Introduction to EVPN Multicast (BUM) IGMP Join/Leave Sync Routes Selective Multicast Ethernet Tag Route Use Case 2 EVPN BUM Traffic Basics
More information