EXTREME VALIDATED DESIGN. Network Virtualization in IP Fabric with BGP EVPN

April 2018

© 2018, Extreme Networks, Inc. All Rights Reserved. Extreme Networks and the Extreme Networks logo are trademarks or registered trademarks of Extreme Networks, Inc. in the United States and/or other countries. All other names are the property of their respective owners. For additional information on Extreme Networks trademarks, see the Extreme Networks trademarks page. Specifications and product availability are subject to change without notice.

© 2017, Brocade Communications Systems, Inc. All Rights Reserved. Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other countries. Other brands, product names, or service names that are trademarks of Brocade Communications Systems, Inc. are listed at brocade-legal-intellectual-property/brocade-legal-trademarks.html. Other marks may belong to third parties.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government. The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the accuracy of this document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that accompany it. The product described by this document may contain open source software covered by the GNU General Public License or other open source license agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and obtain a copy of the programming source code, visit the Brocade open source licensing page.

Contents

List of Figures
Preface
    Extreme Validated Designs
    Purpose of This Document
    Target Audience
    Authors
    Document History
    About Extreme Networks
Introduction
Extreme IP Fabric Technology Overview
    Benefits
    Terminology
    Functional Components of the Extreme IP Fabric
        Leaf-Spine Layer 3 Clos Topology (Two-Tier)
        Optimized 5-Stage Layer 3 Clos Topology (Three-Tier)
        Edge Services and Border Leafs
        IP Fabric Underlay Routing
Network Virtualization with BGP EVPN
    VXLAN Layer 2 Extension Using Flood and Learn
    BGP EVPN for VXLAN
        VTEP
        Static Anycast Gateway
        Overlay Gateway
        BGP EVPN Control Plane
        ARP Suppression
        VLAN Scoping
        Conversational Learning
        Integrated Routing and Bridging
        Multitenancy
        Ingress Replication
        vlag Pair
Validated Designs
    Pervasive ebgp
    ibgp Within a PoD and ebgp Between PoDs
    Hardware and Software Matrix
    Fabric Infrastructure Configuration
        Node ID Configuration
        IP Fabric Infrastructure Links
        Loopback Interfaces
        Server-Facing Links
    Network Virtualization with BGP EVPN
        Overlay Gateway Configuration
        ebgp EVPN Configuration for an Optimized 5-Stage Clos Fabric
        ebgp EVPN Configuration for a 3-Stage Clos Fabric

        Tenant Provisioning
        vlag Pair Configuration
Use Cases
    Tenant and L2 Extension Between Racks in a 3-Stage Clos Fabric
        Configuration
        Verification
    Tenant and L2 Extension Between PoDs in an Optimized 5-Stage Clos Fabric
        Configuration
        Verification
    Tenant Extension Outside the Fabric
        Configuration
        Verification
    VLAN Scoping at the ToR Level
        Configuration
        Verification
    VLAN Scoping at the Port Level Within a ToR
        Configuration
        Verification
    Route Leaking for the Service VRF
        Configuration
        Verification
Design Considerations
Appendix 1: Configuration of the Nodes
    vlag Active/Active Pair Leaf
    Individual Non-Redundant Leaf
    Spine Designated to Exchange Only Underlay Routes
    Spine Designated to Exchange Both Underlay and Overlay Routes
    Super-Spine Designated to Exchange Only Underlay Routes
    Super-Spine Designated to Exchange Both Underlay and Overlay Routes
    Edge Leaf
Appendix 2: Unnumbered Fabric Links
References

List of Figures

Figure 1 (page 12): Leaf-Spine L3 Clos Topology
Figure 2 (page 13): Optimized 5-Stage L3 Clos Topology
Figure 3 (page 15): ebgp for Underlay
Figure 4 (page 16): ibgp for Underlay
Figure 5 (page 18): VTEPs and L2 Extension with Flood and Learn
Figure 6 (page 19): Routing Between VXLANs in a Flood-and-Learn Topology
Figure 7 (page 20): VTEPs and L2 Extension with the BGP EVPN Control Plane
Figure 8 (page 21): ARP Suppression
Figure 9 (page 24): VLAN Scoping at the Leaf Level
Figure 10 (page 24): VLAN Scoping at the Port Level Within a ToR
Figure 11 (page 25): Asymmetric IRB
Figure 12 (page 26): Symmetric IRB
Figure 13 (page 27): Multitenancy
Figure 14 (page 28): Active-Active vlag
Figure 15 (page 29): Pervasive ebgp in an Optimized 5-Stage IP Fabric
Figure 16 (page 30): Pervasive ebgp in a 3-Stage IP Fabric
Figure 17 (page 31): ibgp Within a PoD and ebgp Between PoDs in an Optimized 5-Stage IP Fabric
Figure 18 (page 59): Tenant and Layer 2 Extension Between Two Racks
Figure 19 (page 73): Tenant and Layer 2 Extension Between Two PoDs Connected by Super-Spines
Figure 20 (page 88): Tenant Extension Outside the Fabric Through Edge Leafs
Figure 21 (page 97): VLAN Scoping at the ToR Level
Figure 22 (page 105): VLAN Scoping at the Port Level Within a ToR
Figure 23 (page 112): Services Provisioning on the Border Leaf
Figure 24 (page 113): Service VRF with Route Leaking on the Border Leaf
Figure 25 (page 113): Topology of the Service VRF with Route Leaking from Tenants

6 Preface Extreme Validated Designs Purpose of This Document Target Audience About the Authors Document History Extreme Validated Designs Helping customers consider, select, and deploy network solutions for current and planned needs is our mission. Extreme Validated Designs offer a fast track to success by accelerating that process. Validated designs are repeatable reference network architectures that have been engineered and tested to address specific use cases and deployment scenarios. They document systematic steps and best practices that help administrators, architects, and engineers plan, design, and deploy physical and virtual network technologies. Leveraging these validated network architectures accelerates deployment speed, increases reliability and predictability, and reduces risk. Extreme Validated Designs incorporate network and security principles and technologies across the ecosystem of service provider, data center, campus, and wireless networks. Each Extreme Validated Design provides a standardized network architecture for a specific use case, incorporating technologies and feature sets across Extreme products and partner offerings. All Extreme Validated Designs follow best-practice recommendations and allow for customer-specific network architecture variations that deliver additional benefits. The variations are documented and supported to provide ongoing value, and all Extreme Validated Designs are continuously maintained to ensure that every design remains supported as new products and software versions are introduced. By accelerating time-to-value, reducing risk, and offering the freedom to incorporate creative, supported variations, these validated network architectures provide a tremendous value-add for building and growing a flexible network infrastructure. Purpose of This Document This Extreme validated design provides guidance for designing and implementing an IP fabric in a data center network using Extreme hardware and software. It details the reference architecture for deploying an IP fabric and an EVPN-based VXLAN overlay. It should be noted that not all features such as automation practices, zero-touch provisioning, and monitoring of the Extreme IP fabric are included in this document. Future versions of this document are planned to include these aspects of the Extreme IP fabric solution. The design practices documented here follow the best-practice recommendations, but there are variations to the design that are supported as well. Target Audience This document is written for Extreme systems engineers, partners, and customers who design, implement, and support data center networks. This document is intended for experienced data center architects and engineers. This document assumes that the reader has a good understanding of data center switching and routing features and of Multi-Protocol BGP/MPLS VPN [5] for understanding multitenancy in VXLAN EVPN networks. 6

Authors

Krish Padmanabhan, Sr Principal Engineer, System and Solution Engineering
Poorani Arthanari, Software Engineer, System and Solution Engineering

The authors would like to acknowledge the following for their technical guidance in developing this validated design: Abdul Khader, Director, System and Solution Engineering; Venugopal Mundathaya, Principal Engineer.

Document History

Date         | Description
March 23     | Initial release.
March 30     | Minor formatting changes.
August 12    | IP unnumbered interface support for 3-stage fabric. Illustration examples for VLAN scoping at the ToR level and within the ToR, and for route leaking with the service VRF on the edge leaf. Additional design considerations.
November 17  | Updated the document title and the hardware and software matrix tables for the Network OS version.
August 18    | Updated the software matrix.
January      | Updated document to reflect Extreme's acquisition of Brocade's data center networking business.
April        | Format change.

About Extreme Networks

Extreme Networks (NASDAQ: EXTR) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection. Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility. To help ensure a complete solution, Extreme Networks partners with world-class IT companies and provides comprehensive education, support, and professional services offerings.

Introduction

Based on the principles of the New IP, Extreme is building on the proven success of the VDX platform by expanding our cloud-optimized network and network virtualization architectures to meet customer demand for higher levels of scale, agility, and operational efficiency. This document describes cloud-optimized network designs using Extreme IP fabrics for building data-center sites. The configurations and design practices documented here are fully validated and conform to the IP fabric reference architectures. The intention of this validated design document is to provide reference configurations and document best practices for building cloud-scale data-center networks using Extreme VDX switches and IP fabric architectures.

This document describes the following architectures:

IP fabric deployed in 3-stage and optimized 5-stage folded Clos topologies
IP fabric with network virtualization using BGP EVPN deployed in 3-stage and optimized 5-stage folded Clos topologies

Extreme IP Fabric Technology Overview

Extreme IP fabric provides a Layer 3 Clos deployment architecture for data center sites. In IP fabrics, all links in the Clos topology are Layer 3 links. The Extreme IP fabric includes the networking architecture; the protocols used to build the network; turnkey automation features used to provision, manage, and monitor the networking infrastructure; and the hardware differentiation with VDX switches. The following sections describe the validated design for data center sites with IP fabrics. Because the infrastructure is built on IP, advantages like the following are leveraged: loop-free communication using industry-standard routing protocols, ECMP, very high solution scale, and standards-based interoperability.

Benefits

Some of the key benefits of deploying data center sites with Extreme IP fabrics are:

Highly scalable infrastructure: Because the Clos topology is built with IP protocols, the scale of the infrastructure is very high. The port and rack scales are documented with descriptions of the Extreme IP fabric deployment topologies.

Standards-based and interoperable protocols: The Extreme IP fabric is built with industry-standard protocols like Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF). These protocols are well understood and provide a solid foundation for a highly scalable solution. In addition, industry-standard overlay control-plane and data-plane protocols like BGP EVPN and Virtual Extensible Local Area Network (VXLAN) are used to extend the Layer 2 domain and tenancy domains by enabling Layer 2 communications and VM mobility.

Active-active vlag pairs: By supporting vlag pairs on leaf switches, dual-homing of the networking endpoints is supported, which provides higher redundancy. Also, because the links are active-active, vlag pairs provide higher throughput to the endpoints. vlag pairs are supported for all 10-GbE, 40-GbE, and 100-GbE interface speeds, and up to 32 links can participate in a vlag.

Support for unnumbered interfaces: Using Extreme Network OS support for IP unnumbered interfaces, only one IP address per switch is required to configure the routing protocol peering. This support significantly reduces the planning and use of IP addresses, and it simplifies operations.

Programmable automation: Server-based automation provides support for common industry automation tools such as Python, Ansible, Puppet, and YANG model-based REST and NETCONF APIs. The prepackaged PyNOS scripting library and editable automation scripts execute predefined provisioning tasks while allowing customization to address unique requirements to meet technical or business objectives when the enterprise is ready.

Terminology

Term    | Description
ARP     | Address Resolution Protocol
AS      | Autonomous System
ASN     | Autonomous System Number
BFD     | Bidirectional Forwarding Detection
BGP     | Border Gateway Protocol
BUM     | Broadcast, Unknown unicast, and Multicast
DCI     | Data Center Interconnect
ebgp    | External Border Gateway Protocol. Refers to BGP peering between two nodes in two different autonomous systems.
ECMP    | Equal Cost Multi-Path
EVPN    | Ethernet Virtual Private Network
ibgp    | Internal Border Gateway Protocol. Refers to BGP peering between two nodes in the same autonomous system.
IP      | Internet Protocol
IRB     | Integrated Routing and Bridging
MAC     | Media Access Control
MP-BGP  | Multi-Protocol Border Gateway Protocol
MPLS    | Multi-Protocol Label Switching
ND      | Neighbor Discovery
NLRI    | Network Layer Reachability Information
PoD     | Point of Delivery
RD      | Route Distinguisher
RT      | Route Target
ToR     | Top of Rack switch. Also leaf or VTEP in an IP fabric context.
UDP     | User Datagram Protocol
vlag    | Virtual Link Aggregation Group
VLAN    | Virtual Local Area Network
VM      | Virtual Machine
VNI     | VXLAN Network Identifier
VPN     | Virtual Private Network
VRF     | VPN Routing and Forwarding instance. An instance of the routing/forwarding table with a set of networks and hosts in a router. A router may have multiple such instances isolated from each other. Also referred to as a tenant. In an IP fabric, a VRF may be localized to one VTEP/leaf or may be spread across multiple VTEPs across the IP fabric and beyond the border leaf.
VTEP    | VXLAN Tunnel Endpoint. In an IP fabric, leaf and VTEP are used interchangeably.
VXLAN   | Virtual Extensible Local Area Network

Functional Components of the Extreme IP Fabric

Leaf-Spine Layer 3 Clos Topology (Two-Tier)

The leaf-spine topology has become the de facto standard for networking topologies when building medium- to large-scale data center infrastructures. The leaf-spine topology is adapted from Clos telecommunications networks. An IP fabric within a PoD resembles a two-tier or 3-stage folded Clos fabric. The two-tier leaf-spine topology is shown in Figure 1. The bottom layer of the IP fabric has the leaf devices (top-of-rack switches), and the top layer has the spines.

The role of the leaf is to provide connectivity to the endpoints in the data center network. These endpoints include compute servers and storage devices as well as other networking devices like routers, switches, load balancers, firewalls, and any other physical or virtual networking endpoints. Because all endpoints connect only to the leaf, policy enforcement, including security, traffic-path selection, QoS marking, traffic policing, and shaping, is implemented at the leaf. More importantly, the leafs act as the anycast gateways for the server segments to facilitate mobility with the VXLAN overlay.

The role of the spine is to provide connectivity between leafs and to participate in the control-plane and data-plane operations for traffic forwarding between leafs. The spine devices serve two purposes: BGP control plane (route reflectors for the leafs or ebgp peering with the leafs) and IP forwarding based on the outer IP header in the underlay network. Since there are no network endpoints connected to the spine, tenant VRFs or VXLAN segments are not created on spines. Their routing table size requirements are also light, accommodating just the underlay reachability. Note that not all spine devices need to act as BGP route reflectors; only selected spines in the spine layer act as BGP route reflectors in the overlay design. More details are provided in BGP EVPN Control Plane on page 23.

As a design principle, the following requirements apply to the leaf-spine topology:

Each leaf connects to all spines in the network through 40-GbE links.
Spines are not interconnected with each other.
Leafs are not interconnected with each other for data-plane purposes. (The leafs may be interconnected for control-plane operations such as forming a server-facing vlag.)
The network endpoints do not connect to the spines.

This type of topology has predictable latency and also provides ECMP forwarding in the underlay network. The number of hops between two leaf devices is always two within the fabric. This topology also enables easier scale-out in the horizontal direction as the data center expands; scale is limited by the port density and bandwidth supported by the spine devices. This validated design recommends using the same hardware throughout the spine layer; mixing different hardware is not recommended.

IP Fabric Infrastructure Links

All fabric nodes (leafs, spines, and super-spines) are interconnected with Layer 3 interfaces. In the validated design, 40-GbE links are used between the fabric nodes. All these links are configured as Layer 3 interfaces with a /31 IPv4 address. The MTU for these links is set to jumbo MTU; this is a requirement to handle the VXLAN encapsulation of Ethernet frames. Multiple parallel links between two nodes in the fabric must be avoided.

Server-Facing Links

The server-facing or access links are on the leaf nodes. In the validated design, 10-GbE links are used for server-facing VLANs.

12 Functional Components of the Extreme IP Fabric These links are configured as Layer 2 trunks with associated VLANs. The MTU for these links is set to the default: 1500 bytes. Spanning tree is disabled. 1 FIGURE 1 Leaf-Spine L3 Clos Topology Optimized 5-Stage Layer 3 Clos Topology (Three-Tier) Multiple PoDs based on leaf-spine topologies can be connected for higher scale in an optimized 5-stage folded Clos (three-tier) topology. This topology adds a new tier to the network, known as a super-spine. This architecture is recommended for interconnecting several EVPN VXLAN PoDs. Super-spines function similar to spines: BGP control plane and IP forwarding based on the outer IP header in the underlay network. No endpoints are connected to the super-spine. Figure 2 shows four super-spine switches connecting the spine switches across multiple data center PoDs. The connection between the spines and the super-spines follows the Clos principles: Each spine connects to all super-spines in the network. Neither spines nor super-spines are interconnected with each other. 1 Spanning tree must be enabled if there are Layer 2 switches/bridges between a leaf and servers. 12

13 Functional Components of the Extreme IP Fabric FIGURE 2 Optimized 5-Stage L3 Clos Topology Edge Services and Border Leafs For two-tier and three-tier data center topologies, the role of the border leaf in the network is to provide external connectivity to the data center site. In addition, since all traffic enters and exits the data center through the border leaf switches, they present the ideal location in the network to connect network services like firewalls, load balancers, and edge VPN routers. The border leaf switches connect to the WAN edge devices in the network to provide external connectivity to the data center site. As a design principle, two border leaf switches are recommended for redundancy. The WAN edge devices provide the interfaces to the Internet and DCI solutions. For DCI, these devices function as the Provider Edge (PE) routers, enabling connections to other data center sites through WAN technologies like Multiprotocol Label Switching (MPLS) VPN and Virtual Private LAN Services (VPLS). The Extreme validated design for DCI solutions is discussed in a separate validated design document. There are several ways that the border leafs connect to the data center site. In three-tier (super-spine) architectures, the border leafs are typically connected to the super-spines as depicted in Figure 2. In two-tier topologies, the border leafs are connected to the spines as depicted in Figure 1. Certain topologies may use the spine as border leafs (known as a border spine), overloading two functions into one. This topology adds additional forwarding requirements to spines they need to be aware of the tenants, VNIs, and VXLAN tunnel encapsulation and de-encapsulation functions. IP Fabric Underlay Routing An IP fabric collectively refers to the following: IPv4 network address assignments to the links connecting the nodes in the fabric: spines, leafs, super-spines, and border leafs. Control-plane protocol used for reachability between the nodes. A smaller scale topology might benefit from a link-state protocol such as OSPF. Large scale topologies, however, typically use BGP. Extreme validated design recommends BGP as the protocol for underlay network reachability. Resiliency feature such as BFD. 13

There are several underlay deployment options. When using BGP as the only routing protocol in the fabric, there are two models:

ebgp for underlay: ebgp peering between each tier of nodes: between the leaf and the spine, between the spine and the super-spine, and between the super-spine and the border leaf.

ibgp for underlay: ibgp peering between the leaf and the spine within the PoD, with the spines acting as BGP route reflectors, and ebgp peering between the PoDs through the super-spine layer for inter-PoD reachability.

ebgp for Underlay

This deployment model uses ebgp peering between the leaf and the spine in the fabric. In this model, each leaf node is assigned its own autonomous system (AS) number. The other nodes are grouped based on their role in the fabric, and each of these groups is assigned a separate AS number, as shown in Figure 3. Using ebgp in an IP fabric is simple and also provides the ability to apply BGP policies for traffic engineering on a per-leaf or per-rack basis, since each leaf or rack in a PoD is assigned a unique AS number. Private AS numbers should be used in the fabric. One design consideration for the AS number assignment is that a 2-byte AS number provides a maximum of 1023 private AS numbers (ASN 64512 to ASN 65534); if the IP fabric is larger than 1023 devices, we recommend using 4-byte private AS numbers (ASN 4,200,000,000 to 4,294,967,294).

The AS and peering rules for this model are as follows (a configuration sketch follows this list):

Each leaf in a PoD is assigned its own AS number.
All spines inside a PoD belong to one AS.
All super-spines are configured in one AS.
Edge or border leafs belong to a separate AS.
Each leaf peers with all spines using ebgp.
Each spine peers with all super-spines using ebgp.
There is no ebgp peering between leafs.
There is no ebgp peering between spines.
There is no ebgp peering between super-spines.
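The following is a minimal Network OS configuration sketch of the underlay BGP session on one leaf in this model. The AS numbers, peer-group name, interface addresses, and loopback address are illustrative assumptions rather than values from the validated topology, and exact CLI syntax can vary by Network OS release.

  rbridge-id 1
   router bgp
    local-as 65001                              ! this leaf's own private AS
    neighbor spine-group peer-group
    neighbor spine-group remote-as 65100        ! all spines in the PoD share one AS
    neighbor 10.0.1.0 peer-group spine-group    ! /31 link address toward spine 1
    neighbor 10.0.1.2 peer-group spine-group    ! /31 link address toward spine 2
    address-family ipv4 unicast
     network 10.10.10.1/32                      ! advertise the loopback (router ID / VTEP)
     maximum-paths 8                            ! ECMP across all spine uplinks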

FIGURE 3 ebgp for Underlay

ibgp for Underlay

In this deployment model, each PoD and the edge services PoD is configured with a unique AS number, as shown in Figure 4. The spines and leafs in a PoD are configured with the same AS number. The ibgp design is different from the ebgp design because ibgp requires a full mesh of all BGP-enabled devices in an IP fabric. To avoid the complexity of a full mesh, route reflectors must be used in the fabric. ibgp peering is between the spine and the leaf in a PoD, and all spines in a PoD act as BGP route reflectors to the leafs for the underlay. ebgp is used to peer between spines and super-spines. The super-spine layer is configured with a unique AS number; all super-spines use the same AS number.

When an EVPN Address-Family is enabled for the overlay (a route-reflector configuration sketch follows this list):

Two spines in each PoD are enabled with the EVPN AFI, and they act as the RRs to the leafs.
Leafs exchange EVPN routes with the spine RRs.
These spines also exchange EVPN routes with super-spines.
Edge leafs exchange EVPN routes with super-spines.
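As a minimal sketch, a spine acting as a route reflector in this model might be configured roughly as follows. The AS number, peer addresses, peer-group name, and loopback interface are illustrative assumptions, and the exact Network OS syntax can vary by release.

  rbridge-id 1
   router bgp
    local-as 65101                                      ! shared AS for all nodes in this PoD
    neighbor leaf-rr-clients peer-group
    neighbor leaf-rr-clients remote-as 65101
    neighbor leaf-rr-clients update-source loopback 2   ! peer over loopbacks
    neighbor 10.10.10.11 peer-group leaf-rr-clients
    neighbor 10.10.10.12 peer-group leaf-rr-clients
    address-family ipv4 unicast
     neighbor leaf-rr-clients route-reflector-client
    address-family l2vpn evpn
     neighbor leaf-rr-clients route-reflector-client
     retain route-target all                            ! spine is not a VTEP; keep all EVPN routes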

FIGURE 4 ibgp for Underlay

17 Network Virtualization with BGP EVPN VXLAN Layer 2 Extension Using Flood and Learn BGP EVPN for VXLAN Network virtualization is the process of creating virtual, logical networks on physical infrastructures. With network virtualization, multiple physical networks can be consolidated to form a logical network. Conversely, a physical network can be segregated to form multiple virtual networks. Virtual networks are created through a combination of hardware and software elements spanning the networking, storage, and computing infrastructure. Network virtualization solutions leverage the benefits of software in terms of agility and programmability, along with the performance acceleration and scale of application-specific hardware. Virtual Extensible LAN (VXLAN) is an overlay technology that provides Layer 2 connectivity for workloads residing across the data center network. VXLAN creates a logical network overlay on top of physical networks, extending Layer 2 domains across Layer 3 boundaries. VXLAN provides decoupling of the virtual topology provided by the VXLAN tunnels from the physical topology of the network. It leverages Layer 3 benefits in the underlay, such as load balancing on redundant links, which leads to higher network utilization. In addition, VXLAN provides a large number of logical network segments, allowing for large-scale multitenancy in the network. VXLAN is based on the IETF RFC 7348 standard. VXLAN has a 24-bit Virtual Network ID (VNID) space, which allows for 16 million logical networks compared to a traditional VLAN, which supports a maximum of 4096 logical segments. VXLAN eliminates the need for Spanning Tree Protocol (STP) in the data center network, and it provides increased scalability and improved resiliency. VXLAN has become the de facto standard for overlays that are terminated on physical switches or virtual network elements. The traditional Layer 2 extension mechanisms using VXLAN rely on "Flood and Learn" mechanisms. These mechanisms are very inefficient, delaying MAC address convergence and resulting in unnecessary flooding. Also, in a data center environment with VXLANbased Layer 2 extension mechanisms, a Layer 2 domain and an associated subnet might exist across multiple racks and even across all racks in a data center site. With traditional underlay routing mechanisms, routed traffic destined to a VM or a host belonging to the subnet follows an inefficient path in the network, because the network infrastructure is aware only of the existence of the distributed Layer 3 subnet, but it is not aware of the exact location of the hosts behind a leaf switch. With Extreme BGP-EVPN network virtualization, network virtualization is achieved by creating a VXLAN-based overlay network. BGP- EVPN network virtualization leverages BGP EVPN to provide a control plane for the virtual overlay network. BGP EVPN enables control-plane learning for end hosts behind remote VXLAN tunnel endpoints (VTEPs). This learning includes reachability for Layer 2 MAC addresses and Layer 3 host routes. Some key features and benefits of Extreme BGP-EVPN network virtualization are summarized as follows: Active-active vlag pairs vlag pairs for a multiswitch port channel for dual homing of network endpoints are supported at the leaf. Both switches in the vlag pair participate in the BGP-EVPN operations and are capable of actively forwarding traffic. 
Static anycast gateway With static anycast gateway technology, each leaf is assigned the same default gateway IP and MAC addresses for all connected subnets. This ensures that local traffic is terminated and routed at Layer 3 at the leaf. This also eliminates any suboptimal inefficiencies found with centralized gateways. All leafs are simultaneously active forwarders for all default traffic for which they are enabled. Also, because the static anycast gateway does not rely on any control-plane protocol, it can scale to large deployments. Efficient VXLAN routing With the existence of active-active vlag pairs and the static anycast gateway, all traffic is routed and switched at the leaf. Routed traffic from the network endpoints is terminated in the leaf and is then encapsulated in the VXLAN header to be sent to the remote site. Similarly, traffic from the remote leaf node is VXLAN-encapsulated and must be decapsulated and routed to the destination. This VXLAN routing operation in to and out of the tunnel on the leaf switches is enabled in the Extreme VDX 6740 and 6940 platform ASICs. VXLAN routing performed in a single pass is more efficient than competitive ASICs. Data-plane IP and MAC learning With IP host routes and MAC addresses learned from the data plane and advertised with BGP EVPN, the leaf switches are aware of the reachability of the hosts in the network. Any traffic destined to the hosts takes the most efficient route in the network. 17

18 VXLAN Layer 2 Extension Using Flood and Learn Layer 2 and Layer 3 multitenancy BGP EVPN provides the control plane for VRF routing and for Layer 2 VXLAN extension. BGP EVPN enables a multitenant infrastructure and extends it across the data center to enable traffic isolation between the Layer 2 and Layer 3 domains, while providing efficient routing and switching between the tenant endpoints. Dynamic tunnel discovery With BGP EVPN, the remote VTEPs are automatically discovered. The resulting VXLAN tunnels are also automatically created. This significantly reduces operational expense (OpEx) and eliminates errors in configuration. ARP/ND suppression The BGP-EVPN EVI leafs discover remote IP and MAC addresses and use this information to populate their local ARP tables. Using these entries, the leaf switches respond to any local ARP queries. This eliminates the need for flooding ARP requests in the network infrastructure. Conversational ARP/ND learning Conversational ARP/ND reduces the number of cached ARP/ND entries by programming only active flows into the forwarding plane. This helps to optimize utilization of hardware resources. In many scenarios, there are software requirements for ARP and ND entries beyond the hardware capacity. Conversational ARP/ND limits storage-in-hardware to active ARP/ND entries; aged-out entries are deleted automatically. VM mobility support If a VM moves behind a leaf switch, with data-plane learning, the leaf switch discovers the VM and learns its addressing information. It advertises the reachability to its peers, and when the peers receive the updated information for the reachability of the VM, they update their forwarding tables accordingly. BGP-EVPN-assisted VM mobility leads to faster convergence in the network. Open standards and interoperability BGP EVPN is based on the open standard protocol and is interoperable with implementations from other vendors. This allows the BGP-EVPN-based solution to fit seamlessly in a multivendor environment. VXLAN Layer 2 Extension Using Flood and Learn Let's consider the simple topology shown in Figure 5, which represents VXLAN extension, to understand how VXLAN flood and learn works before going into the details of control-based VXLAN using BGP EVPN and the various network functions that the EVPN control plane enables. FIGURE 5 VTEPs and L2 Extension with Flood and Learn VXLAN tunnel end point (VTEP) may be implemented in hardware (leaf or ToR switch) or in virtualized environments. Each VTEP has a unique IP address and MAC address. Each VTEP can reach other VTEPs over the underlay IP network. 18

19 VXLAN Layer 2 Extension Using Flood and Learn Each VTEP has its own end host/server segment connected to it. In this topology, all hosts belong to one Layer 2 broadcast domain or, in simple terms, one VLAN and one IP subnet. The local VLAN numbers may be different in each VTEP, but they are bound to one VNI number, which is common on all VTEPs. So for all practical purposes, the LAN segment is now identified by a VXLAN VNI, and the VLAN numbers are only locally significant. The logical dashed lines shown inside the IP network between the VTEPs represent the head-end or ingress replication paths. This is used to send what is known as the BUM traffic: Broadcast, Unknown Unicast, and Multicast frames on the Layer 2 segment. The VTEP unicasts these packets to all other VTEPs connected to a VXLAN segment. This may require additional configuration or provisioning of tunnels on each VTEP device to all other devices. Let's consider that H1 wants to communicate with H2: H1 sends an ARP request. VTEP-A learns H1 as a local MAC and also maps this host to the VNI, and because the packet is a broadcast packet, it is encapsulated into the VXLAN packet and replicated; it is then unicast to each of the remote VTEPs participating in this VNI segment. The outer-src-ip is set to , and the outer-dst-ip is the remote VTEP IP. This packet is sent to every VTEP. VTEP-B and VTEP-C decapsulate the packet and flood it into their local VXLAN network. They also learn three pieces of information: the source-ip of VTEP-A, the inner-src-mac of H1, and the VNI. This creates an L2-MAC-to-VTEP-IP binding: {mac H1, VTEP-ip , VNI 10}. When H2 responds to the ARP request, the packet is unicast to H1. This packet is encapsulated in a VXLAN packet by VTEP-B and sent as a unicast IP packet based on its routing table: outer-ip header dst: , src VTEP-A decapsulates the packet and sends it to H1. It also creates an L2-MAC-to-VTEP-IP binding: {MAC H2, VTEP-ip , VNI 10}. Now the communication between H1 and H2 will be unicast. VTEP-A and VTEP-B now know sufficient information to encapsulate the packets between them. The multicast tree is not used. When the hosts are in different subnets, we need a Layer 3 gateway in the network to connect to all VNI segments. As seen in Figure 6, VTEP-C is configured with all VNI numbers in the network and acts as the router or gateway between these VNI segments (see the blue and red dotted arrows routing between VLAN10 and VLAN20). When hosts send ARP messages for the gateway in their respective VLANs, VTEP-C will respond. FIGURE 6 Routing Between VXLANs in a Flood-and-Learn Topology For first-hop router redundancy, multiple VTEPs may be configured with all VNIs, and they may run an FHRP protocol between them. 19

20 BGP EVPN for VXLAN BGP EVPN for VXLAN As we have seen in the VXLAN flood and learn case, the MAC learning is data frame-driven and flooding of broadcast or unknown unicast frames depends on ingress replication by VTEPs in the network. With the BGP EVPN control plane, the MAC learning happens via BGP similar to IPv4/IPv6 route learning in a Layer 3 network. This reduces flooding in the underlay network except for remarkably silent hosts. This control-plane-based MAC learning enables several additional functions with BGP as the unified control plane for both Layer 2 and Layer 3 forwarding in the overlay network. In Figure 7, each VTEP, being a BGP speaker, advertises the MAC and IP addresses of its local hosts to other VTEPs using the BGP EVPN control plane. A BGP route reflector may be used for distribution of this information to the VTEPs. Both VTEP discovery and MAC/IP or MAC/IPv6 host learning happen through the control plane. Since IPv4/IPv6 addresses are also exchanged in the control plane, each VTEP may act as a gateway for the VNI subnets configured on it. A centralized Layer 3 gateway is not required. This feature is also referred to as distributed gateway. Also, since each VTEP is aware of MAC/IP or MAC/IPv6 host bindings, ARP requests need not be flooded between the VTEPS. The VTEP may respond to the ARP requests on behalf of the target host, if the host address has already been learned. This is referred to as ARP/ND suppression in the fabric. FIGURE 7 VTEPs and L2 Extension with the BGP EVPN Control Plane BGP EVPN control-plane-based learning allows more flexibility to control the information flow between the VTEPs. It also enables multitenancy using VRFs similar to MPLS-VPN. Each VTEP may host several tenants and each tenant with a set of VXLAN segments. Depending on the interest, other VTEPs may import the tenant-specific information. This way both Layer 2 and Layer 3 extensions can be provisioned on a tenant basis. BUM traffic may be accommodated either with ingress replication or a multicast tree. Since VTEP discovery also happens through the control plane, setting up ingress replication does not require additional provisioning or configuration about remote VTEPs. Extreme EVPN implementation supports ingress replication. 20

21 BGP EVPN for VXLAN VTEP In an IP fabric, the leaf and border leaf act as VTEPs. Note that only one VTEP is allowed per device. Every VTEP has an overlay interface, which identifies the VTEP IP address. The VTEP information is exchanged, and remote VTEPs are discovered over BGP EVPN. Static Anycast Gateway Each leaf or VTEP has a set of server-facing VLANs that are mapped to VXLAN segments by a VNI number. These VLAN segments have an associated VE interface (a Layer 3 interface for the VLAN). Each tenant VLAN has anycast gateway IPv4/IPv6 addresses and associated anycast gateway MAC addresses. These gateway IP/IPv6 addresses and gateway MAC address are consistent for the VLAN segments shared on all leafs in the fabric. Overlay Gateway Each VTEP or leaf is configured with an overlay gateway. This defines the VTEP IP address, which is used as the source IP when encapsulating packets and is used as the next-hop IP in the EVPN NLRIs. In this validated design, we are using an IPv4 underlay; hence the overlay interface is associated with the IPv4 address of a loopback interface on the leaf. BGP EVPN Control Plane The BGP EVPN control plane is used for VTEP discovery to learn MAC/IP routes from other VTEPs. The exchange of this information takes place using EVPN NLRIs. The NLRI uses the existing AFI of 25 (L2VPN). IANA has assigned BGP EVPNs a SAFI value of 70. The NLRI also carries a tunnel encapsulation attribute. For an IP fabric using VXLAN encapsulation, the attribute is set to VXLAN. In the leaf-spine topology (3-stage Clos or 5-stage Clos), all leafs and border leafs should be enabled with the BGP EVPN Address- Family to exchange EVPN routes (NLRI) and participate in VTEP discovery. Spine and super-spines do not participate in the VTEP functionality. However, selected spines in the spine layer should be enabled with the BGP EVPN Address-Family, and all leafs including border leafs must be peered with the spines who have the BGP EVPN Address-Family enabled. In the deployment model where ebgp is used, a minimum of two spines in the PoD should be enabled with the EVPN Address-Family. Note that all spines participate in the ebgp underlay, but only a few designated spines participate in the EVPN. In the deployment model where ibgp is used, two spines are selected as route reflectors for the EVPN Address-Family, and each VTEP leaf has two ibgp neighbors that are the two spine BGP route reflectors. Each spine BGP route reflector has all VTEP leaf nodes as route-reflector clients and reflects EVPN routes for the VTEP leaf nodes. In the 5-stage Clos topology, a minimum of two super-spines should be enabled with the EVPN Address-Family, and only the spines that are enabled with EVPN are peered with these super-spines. More detailed design is discussed in Network Virtualization with BGP EVPN on page 38. EVPN Route Types EVPN uses different route types to carry various network-layer reachability information. The following are the well-known route types defined in BGP EVPN: Route Type-1 Ethernet Auto Discovery route. This route is used in multihoming cases to achieve split-horizon, aliasing, and fast convergence. 21

22 BGP EVPN for VXLAN Route Type-2 MAC/IP advertisement route: MAC-only route that carries {MAC address of the host, L2VNI of the VXLAN segment}. This route carries only the Layer 2 information of a host. Whenever a VTEP learns a MAC from its server-facing subnets, it advertises this route into BGP. MAC/IP route that carries {MAC address of the host, IPv4/IPv6 address of the host, L2VNI of the VXLAN segment, L3VNI of the tenant VRF of the host}. This route carries both the Layer 2 and Layer 3 information of the hosts. This route is advertised by the VTEP when it learns the IPv4/IPv6 host addresses via ARP or ND from the server-facing subnets. This information enables ARP/ND suppression on other VTEPs. Route Type-3 Inclusive Multicast Ethernet Tag route. This route is required for sending BUM traffic to all VTEPs interested for a given bridge domain or VXLAN segment. Route Type-4 Ethernet Segment route. This route is used for multihoming of server VLAN segments. Note that only VLAGbased multihoming is supported. Route Type-5 IPv4/IPv6 prefix advertisement route {IPv4/IPv6 route, L3VNI, Router-MAC}. This route is advertised for every Layer 3 server-facing subnet behind a VTEP or external routes. Tunnel Attribute Extended community type 0x3, sub-type 0x0c, and tunnel encapsulation type 0x8 (VXLAN). This is included with all EVPN routes. Layer 3 VNI or Tenant VRF Each tenant VRF is configured with a unique Layer 3 VNI. This is required for inter-subnet routing. This VNI must be the same for a tenant VRF on all VTEPs including the border leaf. Both Type-2 and Type-5 routes carry this Layer 3 VNI. Router-MAC Extended Community Extended community type EVPN (0x06) and sub-type 0x03. The router-mac is the MAC address of the VTEP advertising a route. This is also required along with the Layer 3 VNI for inter-subnet routing, as explained in Integrated Routing and Bridging on page 27; and it is carried in both Type-2 MAC/IP routes and Type-5 prefix routes. In the data plane, this MAC address is used as the inner destination MAC address when a packet is routed. MAC-Mobility Attribute Extended community type EVPN (0x06) and sub-type 0x00. Carries a 32-bit sequence number. This enables MAC or station moves between the VTEPs. When a MAC moves, for example, from VTEP-1 to VTEP-2, VTEP-2 advertises a MAC (or MAC/IP) route with a higher sequence number. This update triggers a best-path calculation on other VTEPs, thereby detecting the host move to VTEP-2. ARP Suppression Control-plane distribution of MAC/IP addresses enables ARP suppression in the fabric for Layer 2 extensions between racks. A portion of the fabric is shown in Figure 8 to illustrate the ARP suppression functionality in the fabric. When the hosts come up, they typically ARP for the gateway IP that is hosted by leafs. Let's consider the case where H2 ARPs for the gateway address. Note that both leafs have the same anycast gateway address for the host VXLAN segment. Leaf2 learns the MAC/IP (or ARP) binding for H2. Leaf2 will advertise the MAC/IP route into the BGP EVPN Address-Family. 22

23 BGP EVPN for VXLAN Leaf1 will learn this route and populate it in its MAC/IP binding table. H1 sends an ARP request to H2. Leaf1 will respond on behalf of H2. Extending the same information flow for H1, when Leaf2 learns H1's MAC/IP route, it will respond to ARP requests on behalf of H1. Compared to the data-plane-based learning in Layer 2 extension technologies such as VPLS or VXLAN flood and learn, where ARP traffic is also sent over an overlay network, VXLAN EVPN significantly reduces ARP/ND flooding in the fabric. FIGURE 8 ARP Suppression VLAN Scoping As discussed earlier, in VXLAN networks, each VLAN is mapped to a VNI number of a VXLAN segment. This provides an interesting option to break the 4K limit of the 802.1Q VLAN space. The VLAN tag (or c-tag) on the wire or the port VLAN membership may be locally scoped or locally significant at the leaf level or at the port level within a leaf. VLAN Scoping at the Leaf Level In this case, the VLANs are scoped at the leaf or ToR level. Refer to Figure 9. In this example, VLAN 10 is mapped to VNI 10 on Leaf1, and VLAN 20 is mapped to VNI 10 on Leaf2. By mapping to the same VNI, the two VLAN segments (VLAN 10 and VLAN 20) are on the same bridge domain. With this mapping, hosts on these VLANs have Layer 2 extension between them, and they belong to one VXLAN segment identified by the VNI

24 BGP EVPN for VXLAN FIGURE 9 VLAN Scoping at the Leaf Level VLAN Scoping at the Port Level Within a Leaf VLAN scoping at the port level can be accomplished using the Virtual-Fabric feature on Extreme switches. The Virtual-Fabric feature basically abstracts a VLAN or bridge domain and decouples the VLAN tag (or c-tag) on the wire. Refer to Figure 10. In this example, Port1, VLAN tag 10, and Port2, VLAN tag 20, are mapped to a VLAN 5001, and VLAN 5001 is mapped to VNI With this mapping, the hosts H1 (VLAN 10), H2 (VLAN 20), and H3 (VLAN 501) are bound to one VXLAN segment identified by the VNI FIGURE 10 VLAN Scoping at the Port Level Within a ToR 24
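As a rough illustration of the port-level mapping in Figure 10, the Virtual-Fabric configuration on the leaf could look like the following sketch. Service VLAN 5001 and c-tags 10 and 20 follow the example above, but the interface names are assumptions, and the exact syntax may differ by Network OS release.

  ! service VLAN above the 802.1Q range, decoupled from the wire c-tag
  interface Vlan 5001
  !
  interface TenGigabitEthernet 1/0/1
   switchport
   switchport mode trunk
   switchport trunk allowed vlan add 5001 ctag 10   ! H1 arrives tagged with c-tag 10
  !
  interface TenGigabitEthernet 1/0/2
   switchport
   switchport mode trunk
   switchport trunk allowed vlan add 5001 ctag 20   ! H2 arrives tagged with c-tag 20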

25 BGP EVPN for VXLAN Conversational Learning Conversational learning helps conserve the hardware forwarding table by programming only those ARP/ND or MAC entries for which there are active conversations or traffic flows. With this feature, the control plane may hold more host entries than what the hardware table can support. When there is sufficient space in hardware, all host entries are programmed. When there is no space, conversational learning kicks in and starts aging out the inactive entries. Note that the host subnets are inserted into the hardware (LPM table) regardless of the activity. The host entries are inserted in the hardware (/32 IPv4 or /128 IPv6 host route table) based on the traffic. Integrated Routing and Bridging With the anycast gateway function, each VTEP or leaf acts as an Integrated Routing and Bridging (IRB) device providing Layer 2 extension as well Layer 3 routing between the VXLAN segments in a tenant. Note that the tenant may span multiple leafs. There are two variations of IRB implementation in the IP fabric: asymmetric IRB and symmetric IRB. Asymmetric IRB FIGURE 11 Asymmetric IRB In Figure 11, a tenant, SALES, is provisioned in the fabric with two VNI segments, VNI 10 and VNI 20. Leaf1 has servers connected to it on VNI 10 only. Yet it is provisioned with both VXLAN segment VNI 10 and VNI 20. If H1 in VNI 10 needs to communicate with H3 in VNI 20, Leaf1 routes the packet first between the segments and then bridges the packet on VNI 20 and the packet is sent on the overlay. Leaf2 will decapsulate the VXLAN headers and send the packet to H3. Essentially, the ingress VTEP both routes and bridges the packet; this method is referred as asymmetric IRB. This also means that every VTEP must be configured with all VXLAN segments in a given tenant regardless of any local servers connected to the VNI segment. Symmetric IRB Figure 12 depicts symmetric IRB. Here, every tenant is assigned a Layer 3 VNI. This is analogous to a Layer 3 routing interface between two switches. This VNI must be the same for a given tenant on all leafs where it is provisioned. 25

26 BGP EVPN for VXLAN The MAC/IP host routes are advertised by the VTEP with the L2 VNI as well as an L3 VNI and the router-mac address of the VTEP. When a packet is routed over the L3 VNI, the dst-mac of the inner Ethernet payload is set to the router-mac of the remote VTEP. In Figure 12, routing from H1 to H3 always occurs over this L3 VNI. That is, both leaf devices route the packet once: by the ingress leaf from the server VLAN/VNI to the L3 VNI and by the egress leaf from the L3 VNI to the server VLAN/VNI. A significant advantage of this method is that all VNIs of a given tenant need not be created on all leafs. They are created only when there is server connectivity to those VNIs. In Figure 12, Leaf1 is not configured with VNI 20. Also note that on Leaf2, even though VNI 10 is present, a packet from H3 to H1 will be routed directly on to the L3 VNI of the tenant. This adds the additional requirement that the host routes on all VXLAN segments in a given tenant need to be downloaded to the leaf's forwarding table. FIGURE 12 Symmetric IRB Extreme IRB Implementation Both symmetric and asymmetric IRB methods are implemented on Extreme switches. If the target VNI segment is configured on a VTEP, asymmetric IRB is performed. Otherwise, the packet is routed over the L3 VNI or symmetric routing occurs. Every tenant VRF is assigned with an L3 VNI. In the Extreme implementation, we get the best of both schemes: There is no need to create all server VNIs on all leafs for a tenant. If a target VNI segment is not local and is extended behind one or more remote VTEPs, download the host routes on that target segment into hardware based on traffic activity. Traffic to these hosts will be routed over the L3 VNI. Multitenancy Layer 2 multitenancy is achieved by a MAC-VRF construct used for extending a VLAN between multiple VTEPs or ToRs. In BGP EVPN, multiple tenants can co-exist at the Layer 3 level and share a common IP transport network while having their own separate routing domain in the VXLAN overlay network. Every tenant in the EVPN network is identified by a VRF (VPN routing and forwarding instance), and these tenant VRFs can span multiple leafs in a data center. (Similar to Layer 3 MPLS VPNs with tenant VRFs 26

27 on multiple PE devices). Each VRF can have a set of server-facing VLANs and a Layer 3 VLAN interface with a unique VNI used for symmetric routing purposes. This VNI should be the same if the same tenant VRF is provisioned on other leafs including a border leaf. We recommend the separation of the tenant routing domain from the underlay routing domain (or default VRF), which is used for setting up the overlays or tunnels between the VTEPs. Even if Layer 3 multitenancy is not required in a deployment (this is the case with a single tenant), we recommend moving the server subnets to a separate VRF and keeping a clear separation of underlay and overlay routing domains. By using a separate VRF for server subnets, there is a visibility into host routes and we can leverage the host route optimization in the data plane. A tenant VRF also allows provisioning a L3 VNI that enables symmetric IRB. A tenant VRF may not be needed in the case of pure L2 or VLAN extension between the VTEPs. FIGURE 13 Multitenancy Ingress Replication Although host reachability information is exchanged over the control plane to drastically reduce flooding in a VLAN, certain situations require the flooding of frames, as in traditional Ethernet networks such as but not limited to: MAC aging Silent hosts L2 multicast or broadcast Ingress replication is a technique used to accommodate flooding in such cases by the VTEPs in the IP fabric. Each VTEP for a given VXLAN segment (or server VLAN) computes the list of VTEPs having the same segment using the IMR (Inclusive Multicast Route) routes. Whenever the VTEP must flood a frame in a VXLAN segment, it replicates the frame in hardware and unicasts the frame to each of the VTEPs in the IMR list for that segment. vlag Pair vlag is the solution recommended for leaf-level redundancy. Server multihoming is supported only through vlag behind two VTEPs. Multihoming to two separate VTEPs is not supported. In the validated design, we have two pairs of VTEPs in each PoD operating in vlag mode, and servers are dual-homed to these VTEPs with a port channel. 27

28 BGP EVPN for VXLAN When the two leafs are in vlag mode, they act as one logical VTEP or endpoint. As shown in Figure 14, both leafs are configured with the same VTEP IP address. From other VTEPs in the network, this pair appears as a single VTEP. This is very important because having two physical switches in this mode on each rack does not result in an increased number of VTEPs or additional tunneling requirements on other VTEPs in the network. FIGURE 14 Active-Active vlag 28

29 Validated Designs Pervasive ebgp ibgp Within a PoD and ebgp Between PoDs Hardware and Software Matrix Fabric Infrastructure Configuration Network Virtualization with BGP EVPN This section provides the details of key deployment models with the validated configuration templates. Extreme validated design recommends two models for the IP fabric deployment; these deployment models are categorized based on how the underlay is designed for interconnecting leaf, spine, super-spine, and border-leaf nodes. The first deployment model uses pervasive ebgp for the IPv4 underlay and EVPN peering. The second deployment model uses ibgp for the IPv4 underlay and EVPN peering within the PoD with two spines as route reflectors and ebgp for interconnecting the PoDs. Pervasive ebgp The design shown in Figure 15 uses ebgp as the control plane protocol between the layers of nodes, and each leaf is in its own autonomous system. This design using ebgp as a routing protocol within the data center is based on the IETF draft: Use of BGP for routing in large-scale data centers. [2] By adding the VXLAN EVPN control plane, this design is extended to support Layer 2 extension and Layer 3 multitenancy in the fabric. Figure 16 shows the design for a 3-stage IP fabric using ebgp as the control protocol. Note that the border leafs are connected to the spines in this design. FIGURE 15 Pervasive ebgp in an Optimized 5-Stage IP Fabric 29

30 ibgp Within a PoD and ebgp Between PoDs FIGURE 16 Pervasive ebgp in a 3-Stage IP Fabric ibgp Within a PoD and ebgp Between PoDs The design shown in Figure 17 uses ibgp as the control plane protocol within a PoD, and it uses ebgp between PoDs and superspines. This section is provided for informational purposes only. This validated design focuses on ebgp. However, an ibgp-based deployment is also supported. 30

Hardware and Software Matrix

TABLE 1 Platforms Used in This Validated Design

Places in the Network   | Platform            | Software Version
Leaf Nodes              | VDX 6740, VDX S     | Network OS 7.2.0a
Spine Nodes             | VDX Q               | Network OS 7.2.0a
Super-Spine Nodes       | VDX Q               | Network OS 7.2.0a
Edge or Border Leaf     | VDX Q               | Network OS 7.2.0a
WAN Edge Router         | MLXe-8              | NetIron 6.0c

TABLE 2 Supported Platforms in IP Fabric

Places in the Network   | Platform                  | Software Version
Leaf Nodes              | VDX 6740, VDX Q, VDX S    | Network OS 7.2.0a
Spine Nodes             | VDX                       | Network OS 7.2.0a
Super-Spine Nodes       | VDX Q                     | Network OS 7.2.0a
Edge or Border Leaf     | VDX Q, VDX 6740           | Network OS 7.2.0a
WAN Edge Router         | MLXe-8                    | NetIron 6.0c

Fabric Infrastructure Configuration

This section covers provisioning the building blocks of the IP fabric underlay infrastructure. This involves the common configurations on the fabric nodes, the loopback interfaces used as the router ID and VTEP address, and the interfaces or links between the fabric nodes (also referred to as the fabric infrastructure links).

Node ID Configuration

The VDX platforms used as leaf, spine, and super-spine nodes are enabled with VCS ID 1 by default. Because these nodes are independent in the IP fabric, we must ensure that they do not form a VCS fabric between them. This is achieved by configuring a unique VCS ID on each node. In the validated design, each node (spine, leaf, super-spine, and edge leaf) is configured with a unique VCS ID. The RBridge ID may be re-used: we recommend using RBridge ID 1 for individual leafs and RBridge IDs 1 and 2 for the nodes of a vlag pair. Virtual-Fabric is enabled on all leafs and edge leafs. The vlag pair is assigned its own unique VCS ID, and each node in the vlag pair has a separate RBridge ID. For example, in the validated design, Leaf1 is a 2-node vlag pair; the VCS and RBridge IDs are assigned on vlag peer 1 and vlag peer 2, and the configuration is then verified on each node. A configuration sketch is shown below.
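The following is a minimal sketch of the node ID assignment on the two members of the Leaf1 vlag pair and on an individual leaf. The VCS IDs shown are illustrative assumptions; the vcs command causes the switch to reload, and exact prompts and syntax may vary by Network OS release.

  ! vlag peer 1 of Leaf1 (both peers share VCS ID 11)
  Pod1-Leaf1-1# vcs vcsid 11 rbridge-id 1
  ! vlag peer 2 of Leaf1
  Pod1-Leaf1-2# vcs vcsid 11 rbridge-id 2
  ! an individual (non-vlag) leaf uses its own VCS ID and RBridge ID 1
  Pod1-Leaf3# vcs vcsid 13 rbridge-id 1
  ! verify the assignment on each node
  Pod1-Leaf1-1# show vcs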

From the primary node of the vlag pair, enable virtual fabric. For instance, as shown above, RBridge 2 is the primary node in the Leaf1 vlag pair.

IP Fabric Infrastructure Links
All nodes in the IP fabric (leafs, spines, and super-spines) are interconnected with Layer 3 interfaces. In the validated design, 40-G links are used between the nodes. All of these links are configured as Layer 3 interfaces with a /31 IPv4 address. The MTU on these links is set to a jumbo value; this is required to handle the VXLAN encapsulation of Ethernet frames. Disable the fabric ISL and trunk features on these interfaces.

Loopback Interfaces
Each leaf and border leaf needs a loopback interface with a unique IPv4 address to use as the VTEP IP. This is not required on spines and super-spines, and this step may be skipped if a VXLAN EVPN overlay is not used in the IP fabric. Each device in the fabric also needs one loopback interface with a unique IPv4 address for the router ID. Configure the IP router ID using the IP address of the Loopback 2 interface.
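The following is a minimal sketch of one fabric infrastructure link and the two loopback interfaces, assuming Network OS-style syntax; the interface numbers, addresses, and MTU value are illustrative only:

  rbridge-id 1
   interface Loopback 1
    ip address 10.0.0.11/32        ! VTEP IP (leafs and border leafs only)
    no shutdown
   interface Loopback 2
    ip address 10.1.0.11/32        ! unique per node, used as the router ID
    no shutdown
   ip router-id 10.1.0.11
  !
  interface FortyGigabitEthernet 1/0/49
   description to-POD1-Spine1
   no fabric isl enable            ! keep the fabric ISL/trunk features off on IP fabric links
   no fabric trunk enable
   ip address 10.10.1.0/31
   mtu 9216                        ! jumbo MTU to carry VXLAN-encapsulated frames
   no shutdown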

Server-Facing Links

Individual Leaf/ToR
The server-facing or access links are on the leaf nodes. In the validated design:
10-G links are used for server-facing VLANs.
These links are configured as Layer 2 trunks with the required VLANs associated.
The MTU for these links is set to the default of 1500 bytes.
Fabric ISL and trunk features are disabled.
Spanning tree is disabled. (If there are L2 switches or bridges between a leaf and the servers, spanning tree must be enabled. If there is a possibility of bridges being enabled inadvertently below the leaf nodes, we recommend enabling spanning tree and configuring the server ports as edge ports, for example: POD1-Leaf3(conf-if-te-1/0/4)# spanning-tree autoedge.)

vlag Pair/ToR
vlag configuration involves three steps:
Node ID configuration on the pair of devices.
Inter-switch link (ISL) configuration on both devices.
Configuring the server-facing port channels and adding the required VLANs on them.

Node ID Configuration on the vlag Pair
Refer to Node ID Configuration for assigning the node ID to the vlag pair.
Pod1-Leaf1-1, rbridge-id 1
Pod1-Leaf1-2, rbridge-id 2

ISL Configuration
As shown in the illustration below, the vlag pair is interconnected by two 40-G Ethernet ports used as the ISL.

Server Port-Channel Configuration
In the configuration shown below, port channel 113 is configured as a vlag.
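A minimal sketch of the vlag-pair ISL and the server-facing vlag port channel, assuming Network OS-style syntax; the port numbers, port-channel ID, and VLAN are illustrative:

  ! ISL ports between the two vlag peers keep the fabric ISL and trunk features enabled
  interface FortyGigabitEthernet 1/0/53
   fabric isl enable
   fabric trunk enable
   no shutdown
  !
  ! Server-facing vlag: the same port-channel number is configured on both vlag peers
  interface Port-channel 113
   vlag ignore-split
   switchport
   switchport mode trunk
   switchport trunk allowed vlan add 2001
   spanning-tree shutdown
   no shutdown
  interface TenGigabitEthernet 1/0/4
   channel-group 113 mode active type standard
   no shutdown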

Network Virtualization with BGP EVPN

Overlay Gateway Configuration
The following are the steps involved in configuring the overlay gateway, or VTEP, on a leaf and border leaf:
Create an overlay gateway, and assign it a name.
Enable Layer 2 extension.
Associate the loopback interface whose IPv4 address is used as the VTEP IP.
Associate the rbridge-id of the leaf switch.
Map the VLANs to VNI numbers. In this validated design, we use auto mapping of VLAN to VNI; for instance, VLAN 2001 is mapped to VNI 2001. (This simplified mapping option works for most implementations unless there is a specific requirement to map the server VLAN range to a specific VNI range in the VXLAN domain.)

eBGP EVPN Configuration for an Optimized 5-Stage Clos Fabric
This configuration is applicable to the model shown in Figure 15, where eBGP is used as the underlay control protocol.

BGP Underlay Configuration
When network virtualization with the EVPN overlay is enabled, the underlay configuration needs a few changes to accommodate the BGP peers that exchange only IPv4 routes and the BGP peers that exchange both IPv4 and EVPN routes. This is accomplished with BGP peer groups. In the 5-stage fabric:
Two spines in each PoD exchange only IPv4 Address-Family routes.
Two spines in each PoD exchange both IPv4 and EVPN Address-Family routes; these are referred to as EVPN spines.
Two super-spines exchange only IPv4 Address-Family routes.
Two super-spines exchange both IPv4 and EVPN Address-Family routes; these are referred to as EVPN super-spines.
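Before moving to the per-role BGP configuration, here is a minimal sketch of the overlay-gateway steps listed above, assuming Network OS-style syntax; the gateway name, loopback number, and RBridge IDs are illustrative:

  overlay-gateway gw1
   type layer2-extension
   ip interface Loopback 1        ! loopback whose IPv4 address is the VTEP IP
   attach rbridge-id add 1-2      ! both members of a vlag pair, or a single rbridge-id
   map vlan vni auto              ! for example, VLAN 2001 maps to VNI 2001
   activate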

Leaf Configuration
This is applicable to all leafs. With the EVPN control plane, the configuration must accommodate the exchange of EVPN routes with only the two designated spines. Peer groups are used to simplify the configuration and for efficiency in BGP update processing.
Configure the directly connected IP addresses of the spines into two peer groups, spine-evpn-group and spine-ip-group. This is required because only two spines exchange EVPN routes, but all four spines exchange IPv4 routes. (Refer to Network Virtualization with BGP EVPN for the EVPN Address-Family configuration.) For a simple IP fabric implementation without the overlay, this split may be ignored and all spines can be added to one peer group.
Enable MD5 authentication on both peer groups.
Enable BFD on both peer groups.
Enable the IPv4 Address-Family, and advertise the VTEP IP address.
Check the BGP neighbors. The leaf must be peering with all spines within the PoD for IPv4 Address-Family route exchange.

POD1-Leaf1-1# show ip bgp summary
  BGP4 Summary
  Number of Neighbors Configured: 4, UP: 4
  (all four spine neighbors in the ESTAB state)

BFD Neighbors
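A minimal sketch of the leaf underlay BGP configuration described above, assuming Network OS-style syntax; the AS numbers, neighbor addresses, password, and loopback prefix are illustrative:

  rbridge-id 1
   router bgp
    local-as 65001                                  ! one AS per leaf (shared by a vlag pair)
    neighbor spine-evpn-group peer-group
    neighbor spine-evpn-group remote-as 65100
    neighbor spine-evpn-group password fabricpass   ! MD5 authentication
    neighbor spine-evpn-group bfd
    neighbor spine-ip-group peer-group
    neighbor spine-ip-group remote-as 65100
    neighbor spine-ip-group password fabricpass
    neighbor spine-ip-group bfd
    neighbor 10.10.1.1 peer-group spine-evpn-group  ! the two EVPN spines
    neighbor 10.10.1.3 peer-group spine-evpn-group
    neighbor 10.10.1.5 peer-group spine-ip-group    ! the two IPv4-only spines
    neighbor 10.10.1.7 peer-group spine-ip-group
    address-family ipv4 unicast
     network 10.0.0.11/32                           ! advertise the VTEP loopback
     maximum-paths 8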

POD1-Leaf1-1# show bfd neighbors
  Flags: * indicates State is inconsistent across the cluster
  OurAddr  NeighAddr  State  Int  Rbridge-id
  (four sessions toward the spines, all UP, on the FortyGigabitEthernet 45/0/x fabric links)

Check the route table to see the paths to the other VTEP IPs in the fabric. For instance, in the output taken from a leaf, it sees four paths (one per spine) to every other VTEP IP in the fabric, both inside the PoD and in the other PoD.

Spine Configuration
This is applicable to the two spines designated to exchange only IPv4 routes with the leafs and super-spines.
Configure the directly connected leaf IP addresses in one peer group: leaf-group.
Configure the directly connected super-spine IP addresses into another peer group: super-spine-group.

Enable MD5 authentication and BFD for all peers.
Enable the IPv4 Address-Family.
Each spine should establish IPv4 Address-Family peering with all leafs inside the PoD and with the super-spines. (Note that when verifying the peerings, the leafs in a vlag pair share one common AS number, and the super-spines belong to one AS number.)

POD1-Spine1# show ip bgp summary
  BGP4 Summary
  Number of Neighbors Configured: 10, UP: 10
  (all leaf and super-spine neighbors in the ESTAB state)

Check the BFD adjacency with every connected device.

POD1-Spine1# show bfd neighbors
  (ten sessions, one per connected leaf and super-spine link, all UP)

EVPN Spine Configuration
This is applicable only to the two spines designated to exchange both IPv4 and EVPN routes.
Configure the directly connected leaf IP addresses in one peer group: leaf-group.
Configure the directly connected super-spine IP addresses into two peer groups: superspine-ip-group and superspine-evpn-group. The second group contains only the two super-spines designated to exchange IPv4 and EVPN routes.
Enable MD5 authentication for all peers.
Enable BFD for all peers with default timer values.
Enable the IPv4 Address-Family.

Super-Spine Configuration
This is applicable to the two super-spines designated to exchange only IPv4 underlay routes. Peer groups are used to simplify the configuration.
Create a peer group for each PoD:
pod1_spine-group: add the directly connected neighbor addresses of all spines in PoD1 to this group.
pod2_spine-group: add the directly connected neighbor addresses of all spines in PoD2 to this group.
Create a separate peer group, edge-group, for the edge leafs, and add their directly connected neighbor addresses to it.
Enable MD5 authentication for all peer groups.

Enable BFD for all peer groups.
Enable the IPv4 Address-Family.
Verify the BGP neighbors and the BFD adjacency to each BGP neighbor over the connected links. Each super-spine should be peering with four spines per PoD and two edge leafs for IPv4 Address-Family route exchange.

SUPERSPINE-1# show ip bgp summary
  BGP4 Summary
  Number of Neighbors Configured: 10, UP: 10

  (all ten neighbors in the ESTAB state)

Verify the BFD session with each BGP peer.

SUPERSPINE-1# show bfd neighbors
  (ten sessions, one per connected spine and edge-leaf link, all UP)

EVPN Super-Spine Configuration
This is applicable only to the super-spines designated to exchange both IPv4 and EVPN routes. It can be skipped for an IP fabric implementation without the EVPN control plane.
Create two peer groups for each PoD, one to exchange only IPv4 routes and the other to exchange both IPv4 and EVPN routes. For a simple IP fabric implementation, this split may be ignored and all spines in a PoD can be added to one peer group.
pod1_spine-ip-group: the two spines in the PoD that support only IPv4 routes. Add the directly connected neighbor addresses of these two spines to this group.
pod1_spine-evpn-group: the two spines in the PoD designated to support both IPv4 and EVPN routes. Add the directly connected neighbor addresses of these two spines to this group.
Configure PoD2 and any other PoDs similarly.
Create a separate peer group, edge-group, for the edge leafs, and add their directly connected neighbor addresses to it.
Enable MD5 authentication and BFD for all peer groups.
Enable the IPv4 Address-Family.

Border/Edge Leaf Configuration
The configuration of the edge or border leafs is similar to that of the leafs. They peer with the super-spines, exchanging IPv4 routes with all super-spines and EVPN routes with the two designated super-spines.
Configure a peer group, superspine-ip-group, and add the directly connected neighbor addresses of the two super-spines that exchange only IPv4 routes.

Configure another peer group, superspine-evpn-group, and add the two designated super-spine addresses that exchange both IPv4 and EVPN routes. For a simple IP fabric implementation, this step may be skipped and all super-spine neighbors may be added to one peer group.
Enable MD5 authentication and BFD for all peer groups.
Enable the IPv4 Address-Family, and advertise the VTEP IP address.

BGP Overlay Configuration

Leaf Configuration
This configuration is applicable to all leafs in each of the PoDs. They exchange EVPN routes with the two designated spines in their respective PoDs.
Enable the EVPN Address-Family.
Activate the designated EVPN spines under the EVPN Address-Family. (Use the peer group already configured in the underlay configuration.)
Enable the "allowas-in 1" feature on vlag leafs to facilitate learning of routes between the vlag peers. This is required because both members of a vlag pair are in the same AS number in the pervasive eBGP underlay model.
When EVPN routes are advertised into eBGP by a node, the next hop is set to its peering address, following standard BGP behavior. The next hop should instead always point to the IP address of the VTEP that originated the routes, so enable the "next-hop unchanged" configuration toward these peers.
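A minimal sketch of the leaf overlay configuration described above, assuming Network OS-style syntax; the peer-group name follows the underlay example, and the exact keyword forms (for example, next-hop-unchanged) should be verified against the release in use:

  rbridge-id 1
   router bgp
    address-family l2vpn evpn
     neighbor spine-evpn-group activate
     neighbor spine-evpn-group allowas-in 1          ! needed on vlag leafs that share one AS
     neighbor spine-evpn-group next-hop-unchanged    ! keep the originating VTEP as next hop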

All leafs should see two EVPN neighbors, because only the two EVPN spines participate in EVPN route exchange.

POD1-Leaf1-1# show bgp evpn summary
  Number of Neighbors Configured: 2, UP: 2
  (both EVPN spines in the ESTAB state)

EVPN Spine Configuration
This is applicable only to the two spines in each PoD designated to exchange EVPN routes with the leafs and super-spines.
Enable the EVPN Address-Family.
Activate the leaf group already created in the underlay configuration under the EVPN Address-Family.
Activate the superspine-evpn-group under the EVPN Address-Family.
As on the leafs, enable the "next-hop unchanged" configuration toward these peers so that the next hop continues to point to the IP address of the VTEP that originated the routes rather than being rewritten at each eBGP hop.
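A minimal sketch of the corresponding EVPN spine overlay configuration, assuming Network OS-style syntax; because the spines are not VTEPs and have no local VRF imports, retaining all route targets is assumed here, and the exact keywords should be verified for the release in use:

  rbridge-id 1
   router bgp
    address-family l2vpn evpn
     retain route-target all                         ! keep EVPN routes without local VRF imports
     neighbor leaf-group activate
     neighbor leaf-group next-hop-unchanged
     neighbor superspine-evpn-group activate
     neighbor superspine-evpn-group next-hop-unchanged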

Each EVPN spine establishes EVPN Address-Family adjacency with all leafs inside the PoD and with the two designated super-spines. Use the show bgp evpn summary command to verify.

EVPN Super-Spine Configuration
This is applicable to the super-spines designated for EVPN route exchange with the spines and edge leafs.
Enable the EVPN Address-Family.
Activate the spine-evpn-group peer group of each PoD under the EVPN Address-Family.
Activate the edge-leaf peer group under the EVPN Address-Family.
Enable the "next-hop unchanged" configuration toward these peers so that the next hop continues to point to the VTEP that originated the routes.
Each super-spine has two spines in each PoD and the two border leafs as EVPN Address-Family neighbors.

SUPERSPINE-2# show bgp evpn summary
  Number of Neighbors Configured: 6, UP: 6
  (two EVPN spines per PoD and two border leafs, all in the ESTAB state)

Border/Edge Leaf Configuration
This is applicable to all border leafs in the fabric.
Enable the EVPN Address-Family.
Activate the superspine-evpn-group peer group under the EVPN Address-Family.
Enable the "next-hop unchanged" configuration toward these peers so that the next hop continues to point to the VTEP that originated the routes.
Each border leaf establishes EVPN peering with the two EVPN super-spines.

Edge-Leaf1# show bgp evpn summary
  Number of Neighbors Configured: 2, UP: 2

eBGP EVPN Configuration for a 3-Stage Clos Fabric
This configuration is applicable to the deployment model shown in Figure 16, where eBGP is used as the underlay routing protocol in a 3-stage Clos fabric.

BGP Underlay Configuration
When network virtualization with the EVPN overlay is enabled, the underlay configuration needs a few changes to accommodate the BGP peers that exchange only IPv4 routes and the BGP peers that exchange both IPv4 and EVPN routes. This is accomplished with BGP peer groups.
Two spines exchange only IPv4 Address-Family routes.
Two spines exchange both IPv4 and EVPN Address-Family routes.

Leaf Configuration
This is applicable to all leafs. With the EVPN control plane, the configuration must accommodate the exchange of EVPN routes with only the two designated spines. Peer groups are used to simplify the configuration and for efficiency in BGP update processing.
Configure the directly connected IP addresses of the spines into two peer groups: spine-evpn-group and spine-ip-group. This is required because only two spines exchange EVPN routes, but all four spines exchange IPv4 routes.
Enable MD5 authentication on both peer groups.
Enable BFD on both peer groups.
Enable the IPv4 Address-Family, and advertise the VTEP IP address.

Spine Configuration
This is applicable to all spines in the 3-stage fabric.
Configure the directly connected leaf IP addresses in one peer group: leaf-group.
Configure the directly connected edge-leaf IP addresses into another peer group: edge-group.
Enable MD5 authentication for all peer groups.
Enable BFD for all peer groups.
Enable the IPv4 Address-Family.

Border/Edge Leaf Configuration
The configuration of the edge or border leafs is similar to that of the leafs. They peer with the spines, exchanging IPv4 routes with all spines and EVPN routes with only the two designated spines.
Configure a peer group, spine-ip-group, consisting of the neighbor IP addresses of the spines that exchange only IPv4 routes.
Configure another peer group, spine-evpn-group, consisting of the directly connected IP addresses of the two designated spines that exchange both IPv4 and EVPN routes.
Enable MD5 authentication for both peer groups.
Enable BFD for both peer groups.
Enable the IPv4 Address-Family, and advertise the VTEP IP address.

BGP Overlay Configuration

Leaf Configuration
This is applicable to all leafs.
Activate the designated EVPN spines under the EVPN Address-Family (use the peer group already configured in the underlay configuration).

Enable the "allowas-in 1" feature on vlag leafs to facilitate learning of routes between the vlag peers. This is required because both members of a vlag pair are in the same AS number in the pervasive eBGP underlay model; it is not required for individual leaf or ToR switches.
Enable the "next-hop unchanged" configuration toward the peers. When EVPN routes are advertised into eBGP by a node, the next hop is set to its peering address, following standard BGP behavior; the next hop should instead always point to the IP address of the VTEP that originated the routes.

EVPN Spine Configuration
This is applicable only to the two spines in the 3-stage fabric designated to exchange EVPN routes with the leafs and edge leafs.
Enable the EVPN Address-Family.
Activate the leaf-group peer group under the EVPN Address-Family.
Activate the edge-leaf peer group under the EVPN Address-Family.

Border/Edge Leaf Configuration
This is applicable to all edge leafs. Activate EVPN route exchange with the spines designated for EVPN.

Tenant Provisioning
Tenant provisioning refers to the configuration on the leafs that enables server VLANs and network connectivity in tenant VRF contexts and maps these VLANs and VRFs into the overlay control and forwarding planes to establish Layer 2 extension and Layer 3 multitenancy. This is applicable to both 3-stage and 5-stage Clos fabrics.

Enable Conversational Learning of MAC Entries
This is applicable to all leafs in the fabric and conserves L2 forwarding table space.

Anycast Gateway MAC Configuration
The anycast gateway MAC configuration is applied to all leafs (except edge leafs) in the data center. It is used as the gateway MAC, or router MAC, for all server-facing subnets and enables seamless workload moves within and across the PoDs. We recommend setting the U/L bit to 1 in the MAC address to indicate a locally administered MAC address that does not conflict with any real MAC address. The MAC addresses must be different for IPv4 and IPv6, but the OUI portion (the first three bytes) must be the same.

Enable Conversational Learning of ARP/ND Host Entries
This is required on all leafs and edge leafs.
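A minimal sketch of the anycast gateway MAC and conversational-learning settings described above, assuming Network OS-style syntax; the MAC values are illustrative, and the exact command names for conversational MAC and ARP/ND learning are assumptions that should be checked against the command reference:

  ! Same gateway MAC on every leaf; U/L bit set to mark a locally administered address
  rbridge-id 1
   ip anycast-gateway-mac 0201.0101.0101
   ipv6 anycast-gateway-mac 0201.0101.0102      ! different MAC, same OUI as the IPv4 gateway MAC
   host-table aging-mode conversational         ! conversational ARP/ND learning (leafs and edge leafs)
  !
  mac-address-table learning-mode conversational   ! conversational MAC learning (all leafs)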

VRF, Server VLAN, and Subnet Configuration
The underlay routing domain is in the default VRF of a leaf device; it provides reachability and the provisioning of tunnels or overlays to the other VTEPs in the network. For server subnets or workloads, we recommend using a separate VRF. This separates the underlay and overlay routing domains, and it also allows the use of an L3 VNI for symmetric IRB, host-route visibility, and optimization in the forwarding plane. The following are the steps involved in tenant VRF configuration.
1. Assign a unique RD. Every tenant must have a unique RD value per leaf/ToR where it is provisioned. In the validated design, we use the format IPv4_Address:nn, where IPv4_Address is the router ID of the VTEP and nn is a unique number for the tenant VRF. The nn value is reused on the other leafs where the same tenant is provisioned. For example, vrf201 has the following RD values on the leafs where it is provisioned:
On leaf1: :201
On leaf5: :201
On border-leaf1: :201
2. Assign a unique L3 VNI number.
3. Assign import and export route targets for IPv4 and IPv6 tenant routes.
In the configuration templates below, the following tenant profile is enabled on a leaf:
Configure the Tenant VRF Profile:
Name: vrf101
L3 VNI: 7101
IPv4 and IPv6: enabled
Route-target: 101:101
Server-facing VLAN: 2001
Assign a Layer 3 Interface for the L3 VNI of the Tenant VRF:
This is the routing interface for the Integrated Routing and Bridging (IRB) operation on the leaf.
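A minimal sketch of the tenant VRF profile and its IRB interface, assuming Network OS-style syntax and the vrf101 profile above; the router-ID value in the RD is illustrative, and VLAN 7101 is assumed to auto-map to L3 VNI 7101 through the overlay gateway's auto mapping:

  interface Vlan 7101                  ! VLAN for the L3 VNI; auto-maps to VNI 7101
  !
  rbridge-id 1
   vrf vrf101
    rd 10.1.0.11:101                   ! router-ID:nn format, unique per leaf
    address-family ipv4 unicast
     route-target export 101:101 evpn
     route-target import 101:101 evpn
    address-family ipv6 unicast
     route-target export 101:101 evpn
     route-target import 101:101 evpn
   interface Ve 7101                   ! Layer 3 interface for the tenant L3 VNI (IRB)
    vrf forwarding vrf101
    no shutdown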

Assign a Server-Facing VLAN:
Assign a VE (L3) Interface for the Server-Facing VLAN:
Advertise Tenant Layer 3 Routes from the Leaf:
IPv4
IPv6
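A minimal sketch of the server-facing VLAN, its VE interface with the anycast gateway address, and the tenant route advertisement, assuming Network OS-style syntax; the subnet value is illustrative, the VLAN follows the vrf101 profile above, and the IPv6 configuration would mirror the IPv4 lines:

  interface Vlan 2001                          ! server-facing VLAN; auto-maps to VNI 2001
  !
  rbridge-id 1
   interface Ve 2001
    vrf forwarding vrf101
    ip anycast-address 10.20.1.1/24            ! same anycast gateway address on every leaf
    no shutdown
   router bgp
    address-family ipv4 unicast vrf vrf101
     redistribute connected                    ! advertise the tenant subnets (and host routes) into EVPN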

Enable the EVPN Instance for the Tenant VLAN Segments
Once the server-facing VLANs are created and mapped to VNI segments on the leaf, those VNI segments must be enabled in the control plane. As was done for the tenant VRF, the VNI segments also require an RD (route distinguisher) and an RT (route target). This configuration, also referred to as the MAC-VRF, enables learning remote MAC addresses when the same VLAN segment is extended to other leafs or VTEPs in the fabric. The RD and RT configuration is set to auto in this design for simplicity, which is suitable for most deployments; advanced users may define their own RD and RT scheme, but a user-defined RD/RT is not covered in this document.

vlag Pair Configuration
A vlag pair, or redundant ToR, requires a few additional configuration steps:
The same VTEP IP on both nodes.
Separate, unique router IDs.
The configuration of the two leafs in a dual-ToR vlag pair is shown side by side for comparison. (Please note that the configuration for both switches in the vlag pair can be done from the primary node.)
The Loopback 1 interface has the same IP address on both nodes; this is used as the VTEP IP under the overlay gateway.
The Loopback 2 interface has a unique IP address on each node; this is used as the IP router ID for the node.
Attach both RBridge IDs under the overlay gateway.
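A minimal sketch of the EVPN instance and the vlag-pair loopback arrangement described above, assuming Network OS-style syntax; the instance name, VNI range, and addresses are illustrative, and the auto RD/RT sub-commands are assumptions to verify against the release in use:

  rbridge-id 1
   evpn-instance default
    rd auto                            ! auto RD per VNI
    route-target both auto ignore-as   ! auto RT derived from the VNI
    vni add 2001-2100                  ! tenant VNI segments enabled in the control plane
  !
  ! vlag pair: shared VTEP IP on Loopback 1, unique router ID on Loopback 2
  rbridge-id 1
   interface Loopback 1
    ip address 10.0.0.11/32            ! same on both vlag peers, used by the overlay gateway
   interface Loopback 2
    ip address 10.1.0.11/32            ! unique per node
  rbridge-id 2
   interface Loopback 1
    ip address 10.0.0.11/32
   interface Loopback 2
    ip address 10.1.0.12/32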


Use Cases
This section covers the following use cases:
Tenant and L2 Extension Between Racks in a 3-Stage Clos Fabric
Tenant and L2 Extension Between PoDs in an Optimized 5-Stage Clos Fabric
Tenant Extension Outside the Fabric
VLAN Scoping at the ToR Level
VLAN Scoping at the Port Level Within a ToR
Route Leaking for the Service VRF
We illustrate the use cases by using sections of the validated design network topology as appropriate, to help the reader further understand the deployment scenarios.

Tenant and L2 Extension Between Racks in a 3-Stage Clos Fabric
Figure 18 shows a section of the topology used to illustrate the following configuration and verification:
Two racks are shown in the diagram. Rack1 has a redundant vlag ToR (leaf1-1 and leaf1-2, referred to collectively as leaf1). Rack5 has an individual ToR, leaf5.
A tenant VRF, vrf201, is provisioned on both racks. The tenant has two server VLANs mapped to VNIs 3001 and 3801.
Server VLAN 3001 is extended between the two racks: VLAN/VNI 3001 is provisioned on both racks, and there are hosts in it on both racks.
Server VLAN 3801 is provisioned on Rack1 only, but it belongs to the same tenant.
Routing between VNI 3001 and VNI 3801 is required within this tenant, both in the same rack and across the racks. This example also illustrates the symmetric and asymmetric routing operation.
The configuration is identical on each of the leafs except for the VTEP IP, router ID, and RD values. The vlag pair is represented by one VTEP IP address. The use of anycast gateway addresses for the server-facing VLAN interfaces simplifies the configuration considerably. Please note that the configuration for the vlag pair is done from the primary node.

FIGURE 18 Tenant and Layer 2 Extension Between Two Racks

Configuration

Check the Node ID on Each ToR
The RBridge ID is required for the Layer 3 and EVPN configuration on each node. For the vlag pair, Leaf1-2 is the primary node, and the configuration for both devices in the pair is done from Leaf1-2. The RBridge IDs are 45 and 46 for Leaf1-1 and Leaf1-2, respectively; these IDs are used for the ports and for the Layer 3 configuration. Leaf5 is an individual ToR with its own RBridge ID.

Configuration on the Leaf1 vlag Pair
The configuration is shown in three parts for clarity. Common configuration, such as the port channel and VLANs, is shown in one block. The tenant, Layer 3 interface, and BGP EVPN configuration is shown in the second block under each RBridge ID. The common overlay-gateway configuration is shown in the third block. Please note that the entire configuration is applied from the primary node of this two-node vlag pair. The configuration on the two nodes is largely identical except for the router ID and the RD of the tenant VRF, which makes it easier to automate the provisioning across nodes.


Configuration on Leaf5


Verification

Verify VLAN Extension Between the Racks
Check the L2-extended VLAN on each node. The output should show the local L2 trunk ports and also the tunnels to all remote VTEPs where the same VLAN segment is extended. In the following output from the Leaf1 vlag pair, there are five tunnels for VLAN 3001, which indicates that the same VLAN/VNI segment is provisioned on five other VTEPs or ToRs. Note that one of the tunnels, Tu 61442, is destined to Leaf5, and that there are four underlay next hops in the fabric to reach this tunnel destination.

In the following output from Leaf5, the tunnel is destined to the Leaf1 vlag pair's VTEP IP.

VLAN Layer 3 Interface State on the vlag Pair

VLAN Layer 3 Interface State on the Leaf5 ToR

Local Host Entries on Each Leaf
Depending on the port-channel hashing on the server-facing links, the ARP entries may be learned on either node of the vlag pair. Make sure that all host entries are learned collectively across the vlag pair.

Remote Host Entries in the Extended VLAN

The following output from Leaf5 shows the BGP and ARP entries of the remote hosts behind the Leaf1 pair. Note that the next hop is set to the common VTEP IP of the vlag pair; this causes the redundant leaf to appear as one VTEP in the underlay network, and load balancing is accomplished. In the ARP table, the local and remote entries are indicated with different types: remote entries are of type BGP EVPN, signifying that they were learned over BGP EVPN, and local entries are shown as Dynamic.

Verify Tenant Extension Between the Racks
Tenant extension ensures routing between the VXLAN segments within the same tenant. As shown in Figure 18, VNI segment 3802 is provisioned only on the vlag ToR but is part of the tenant on both ToRs. The following verification steps ensure communication between the hosts in VNI 3001 on Leaf5 and the hosts in VNI 3802 on the vlag Leaf1 pair.

RMAC of Each Node
There is one RMAC assigned to every VTEP. This information can be obtained by looking at any of the L3 interfaces or at the VLAN interface associated with the L3 VNI. Within the vlag pair, even though the two nodes share the same VTEP IP, each is assigned a unique router MAC.

L3 VNI State on the Nodes
L3 VNI 7201 is assigned to the tenant VRF. Make sure that the vlag ToR and Leaf5 have tunnels established to each other and that this VNI is activated on them. As seen in the following output from Leaf1, the tunnel source is the VTEP IP of the vlag pair, and the destination IP is the VTEP IP of Leaf5. (Notice the additional tunnels in the list; these are destined to other VTEPs where the same tenant is provisioned.)

L3 VNI state from Leaf5:

Verify the Route to the Remote Subnet of the Same Tenant
The following output shows the BGP entry on Leaf5 for the subnet of the remote VNI. (Note that the host entries are also advertised over BGP, but they are ignored by Leaf5 because this VNI is not locally provisioned and only routing is desired.) There are four entries in the BGP table: the two originators in the vlag pair, with each of those two entries learned from the two spines exchanging EVPN routes. Again, the next hop is the same for all entries due to the common VTEP IP used by the vlag pair.

Tenant and L2 Extension Between PoDs in an Optimized 5-Stage Clos Fabric
In this example, we illustrate the extension of a tenant and a Layer 2 segment between racks in two different PoDs. As shown in Figure 19, tenant VRF vrf101 is extended between two racks: the POD1-Leaf1 and POD2-Leaf1 dual-ToR vlag pairs. VXLAN segment 2001 is extended across the PoDs, while VLAN 3901 is provisioned only on the Leaf1 pair in POD1.

FIGURE 19 Tenant and Layer 2 Extension Between Two PoDs Connected by Super-Spines

Configuration

Check the Node ID on Each ToR
The RBridge ID is required for the Layer 3 and EVPN configuration on each node. For the POD1 vlag pair, Leaf1-2 is the primary node, and the configuration for both devices in the pair is done from Leaf1-2; the RBridge IDs are 45 and 46 for Leaf1-1 and Leaf1-2, respectively, and these IDs are used for the ports and for the Layer 3 configuration. For the POD2 vlag pair, Leaf1-2 is likewise the primary node, the configuration for both devices is done from Leaf1-2, and the RBridge IDs are 45 and 46 for Leaf1-1 and Leaf1-2, respectively.

Configuration on the POD1-Leaf1 vlag Pair
The configuration is shown in three parts for clarity. Common configuration, such as the port channel and VLANs, is shown in one block. The tenant, Layer 3 interface, and BGP EVPN configuration is shown in the second block under each RBridge ID. The common overlay-gateway configuration is shown in the third block. Please note that the entire configuration is applied from the primary node of this two-node vlag pair.


Configuration on the POD2-Leaf1 vlag Pair


Verification

Verify VLAN Extension Between the Nodes
Check the L2-extended VLAN on each node. The output should show the local L2 trunk ports and also the tunnels to all remote VTEPs where the same VLAN segment is extended. In the output below from the POD1-Leaf1 vlag ToR, there are six tunnels for VLAN 2001, which indicates that the same VLAN/VNI segment is provisioned on six other VTEPs or ToRs. Note that one of the tunnels, Tu 61448, is destined to the POD2-Leaf1 vlag ToR, and that there are four underlay next hops to reach this tunnel destination because there are four spines.
The output below from the POD2-Leaf1 vlag pair shows the state of VLAN 2001.

VLAN Layer 3 Interface State on the POD1-Leaf1 vlag Pair

VLAN Layer 3 Interface State on the POD2-Leaf1 vlag Pair

Local Host Entries on Each Leaf/ToR
Depending on the port-channel hashing on the server-facing links, the ARP entries may be learned on either node of the vlag pair. Make sure that all host entries are learned collectively across the vlag pair.

Remote Host Entries in the Extended VLAN: BGP and ARP Table on POD1-Leaf1
The following output shows a BGP entry and the ARP entries of the remote hosts behind the POD2-Leaf1 pair. Note that the next hop is set to the common VTEP IP of the vlag pair; this causes the redundant leaf to appear as one VTEP in the underlay network, and load balancing is accomplished. In the ARP table, local and remote entries are indicated with different types: Dynamic for local entries, and BGP EVPN for remote entries, signifying that they were learned over BGP EVPN. (Even though one entry is shown as remote, the MAC entry lookup identifies it as a local host within the vlag pair.) The remaining remote entries are the hosts attached to the POD2-Leaf1 pair.

Remote Host Entries in the Extended VLAN: BGP and ARP Table on POD2-Leaf1

Verify Tenant Extension Between the Racks
Tenant extension ensures routing between the VXLAN segments within the same tenant. As shown in Figure 19, VNI segment 3901 is provisioned only on the POD1-Leaf1 vlag pair, but it is part of the tenant on both leafs. The following verification steps ensure communication between the hosts in VNI 2001 on POD2-Leaf1 and the hosts in VNI 3901 on POD1-Leaf1.

RMAC of Each Node
There is one RMAC assigned to every VTEP. This information can be obtained by looking at any L3 interface or at the VLAN interface associated with the Layer 3 VNI. Within the vlag pair, even though the two nodes share the same VTEP IP, each is assigned a unique router MAC address.

L3 VNI State on the Nodes
L3 VNI 7101 is assigned to the tenant VRF. Make sure that the two vlag pairs have tunnels established to each other and that this VNI is activated on them. As seen in the following output taken from POD1-Leaf1, the tunnel source is the VTEP IP of the local vlag pair, and the destination IP is the vlag VTEP IP of POD2-Leaf1. (Notice the additional tunnels in the list; these are destined to other VTEPs where the same tenant is provisioned.)

The L3 VNI state from POD2-Leaf1 is shown below.

Verify the Route to the Remote Subnet of the Same Tenant
The following output shows the BGP entry on POD2-Leaf1 for the remote subnet of VNI 3901. (Note that the host entries are also advertised over BGP, but they are ignored by this leaf because the VNI is not locally provisioned and only routing is desired.) There are four entries in the BGP table: the two originators in the vlag pair, with each of those entries learned from the two spines exchanging EVPN routes. The next hop is the same for all entries due to the common VTEP IP used by the vlag pair.

Tenant Extension Outside the Fabric
In Tenant and L2 Extension Between PoDs in an Optimized 5-Stage Clos Fabric, we illustrated extending a tenant VRF across racks in two PoDs. This section covers the steps involved in extending the same tenant outside the fabric through the border or edge leafs. Figure 20 shows a section of the validated design: tenant VRF vrf101 is extended outside the fabric through the edge leaf, which is connected to a WAN edge router, and the tenant VRF is extended to the WAN edge.

FIGURE 20 Tenant Extension Outside the Fabric Through Edge Leafs

Configuration
We skip the configurations of the POD1-Leaf1 and POD2-Leaf1 vlag pairs, since they were covered earlier, and focus on the configurations of the edge leafs.

Edge-Leaf1 Configuration
On the edge leaf, we do not recommend provisioning any server VLAN segments. On the fabric side, we need only one VNI segment, used as the L3 routing VNI for the tenant VRF; this VNI must be consistent with the other leafs for a given tenant. In this example, we use VNI 7101 as the L3 VNI for tenant vrf101. On the external-facing side, we need another VLAN for peering with the external routers.
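A minimal sketch of the edge-leaf tenant handoff described above, assuming Network OS-style syntax; the peering VLAN, addresses, and external AS number are illustrative and not taken from the validated topology:

  interface Vlan 7101                          ! L3 VNI for tenant vrf101 on the fabric side
  interface Vlan 3999                          ! external-facing peering VLAN toward the WAN edge
  !
  rbridge-id 1
   vrf vrf101
    rd 10.1.0.51:101
    address-family ipv4 unicast
     route-target export 101:101 evpn
     route-target import 101:101 evpn
   interface Ve 7101
    vrf forwarding vrf101
    no shutdown
   interface Ve 3999
    vrf forwarding vrf101
    ip address 192.0.2.1/30                    ! link subnet toward the WAN edge router
    no shutdown
   router bgp
    address-family ipv4 unicast vrf vrf101
     neighbor 192.0.2.2 remote-as 65500        ! WAN edge router peering inside the tenant VRF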


Edge-Leaf2 Configuration
The Edge-Leaf2 configuration mirrors Edge-Leaf1: no server VLAN segments, VNI 7101 on the fabric side as the L3 VNI for tenant vrf101, and another VLAN on the external-facing side for peering with the external routers.


Verification

RMAC of Each Node
There is one RMAC assigned to every VTEP. This information can be obtained by looking at any L3 interface or at the VLAN interface associated with the Layer 3 VNI. Within a vlag pair, even though the nodes share the same VTEP IP, each is assigned a unique router MAC.
POD1-Leaf1 Pair
POD2-Leaf1 Pair

Verify the L3 VNI State on the Nodes
Here we need to make sure that the Layer 3 VNI is associated with tunnels to every other node that has been provisioned with the same tenant. For instance, the output from POD1-Leaf1-1 shows three tunnels. Looking at the destination IPs, we can confirm that POD2-Leaf1, Edge-Leaf1, and Edge-Leaf2 are associated with the Layer 3 VNI 7101 of tenant vrf101. (The source IP is the VTEP IP of the POD1-Leaf1 vlag pair.)

The following shows the VNI state from Edge-Leaf1. It is associated with tunnels destined to POD1-Leaf1 and POD2-Leaf1. On Edge-Leaf2, likewise, ensure that the tunnels to POD1-Leaf1 and POD2-Leaf1 are associated with Layer 3 VNI 7101.

Verify the Route to a Fabric Segment on the Edge Leaf
Look at the route entry for the subnet of VLAN/VNI 2001. It is advertised by the vlag pairs in the two PoDs, so effectively we should see two equal paths. Because the RMACs differ between the vlag peers within each pair, we actually see four paths, as shown below. Also note that the route is advertised by the edge leaf to its external BGP peer. (The "show ip bgp routes <prefix> vrf <vrf-name>" command lists the routes sent to the route-table manager after the best-path computations are complete. If this output is not correct, check the "show bgp evpn routes type ipv4-prefix <prefix> tag 0" command.) The same check applies to the route to the VNI 3901 subnet learned from the POD1-Leaf1 vlag pair.

Verify the Route to an External Network on the Internal Leafs
As shown in Figure 20, the external network (a /26 in this example) must be reachable from the tenant VRF of the internal leafs. The route verification proceeds step by step, starting from the edge leaf.
First, verify the route on Edge-Leaf1. As shown, the route is installed in the correct VRF and points to the external next hop of the WAN edge router.
The next step is to verify that this route is advertised by the edge leafs into the fabric in the EVPN Address-Family. The important fields in this output are the L3 VNI, Router MAC, RD, RT, and Next Hop, as highlighted below.
Now look at the BGP entry on one of the internal leafs, say POD1-Leaf1. It should see two paths to the external network, since both edge leafs advertise that network into the fabric. As the output below shows, there are four entries because they are learned from two spines; essentially, there are two unique entries.

Verify that the routes are sent to the route table by BGP.

VLAN Scoping at the ToR Level
VLAN scoping is briefly discussed in VLAN Scoping. Refer to Figure 21 for the topology used to illustrate VLAN scoping at the leaf or ToR level. For the purpose of illustration, we have chosen a vlag pair and an individual leaf; either ToR could be a vlag pair or an individual leaf. As seen in the figure, each leaf has a server VLAN that requires Layer 2 extension to the other rack, and the VLAN numbers are different. By mapping these VLANs to the same VNI number (8000 in this case), we achieve bridging, or L2 extension, between them. The servers now have L2 adjacency; in other words, they are in the same bridge domain or broadcast domain.

In essence, the VLAN tag on the wire between the servers and the leaf is decoupled from the bridge domain. This VLAN tag need not be identical on both sides to provide Layer 2 adjacency or extension; in other words, the VLAN number is relevant only at the ToR level.

FIGURE 21 VLAN Scoping at the ToR Level

Configuration
The configuration steps are similar to the L2 extension illustrated in Tenant and L2 Extension Between Racks in a 3-Stage Clos Fabric. The difference is in the VLAN-to-VNI mapping under the overlay-gateway configuration, where a server VLAN is manually mapped to a VNI number. The table below summarizes the provisioning of L2 extension on the two leafs.

Leaf 1:
Server traffic is tagged with VLAN 100.
Create VLAN 100.
Create the VE 100 Layer 3 interface for first-hop routing.
Assign the anycast GW address to VE 100.
Map VLAN 100 to VNI 8000 under the overlay gateway.

Leaf 5:
Server traffic is tagged with VLAN 20.
Create VLAN 20.
Create the VE 20 Layer 3 interface for first-hop routing.
Assign the anycast GW address to VE 20.
Map VLAN 20 to VNI 8000 under the overlay gateway.

Complete configurations and verification steps for the leafs in the Figure 21 topology are given in the sections that follow.
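A minimal sketch of the manual VLAN-to-VNI mapping that differs between the two leafs, assuming Network OS-style overlay-gateway syntax; the gateway name is illustrative:

  ! Leaf1 vlag pair
  overlay-gateway gw1
   map vlan 100 vni 8000     ! server VLAN 100 mapped to VNI 8000
  !
  ! Leaf5
  overlay-gateway gw1
   map vlan 20 vni 8000      ! a different local VLAN mapped to the same VNI 8000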

Configuration on the Leaf1 vlag Pair
The configuration is shown in three parts for clarity:
Common configurations, such as the port channel and VLANs, are shown in one block.
The tenant, Layer 3 interface, and BGP EVPN configurations are shown in the second block under each RBridge ID.
The common overlay-gateway configuration is shown in the third block.
Please note that the entire configuration is applied from the primary node of this two-node vlag pair.


Configuration on Leaf5


Verification

Verify VLAN Extension Between the Racks
Check the L2-extended VLAN on each node. The output should show the local L2 trunk ports and also the tunnels to all remote VTEPs where the same VLAN segment is extended. In the output below from the Leaf1 vlag pair, there is one tunnel for VLAN 100, which indicates that the same VLAN/VNI segment is provisioned on one other VTEP or ToR. The tunnel, Tu 61445, is destined to Leaf5, and there are four underlay next hops in the fabric to reach this tunnel destination.

In the output below from Leaf5, the tunnel is destined to the Leaf1 vlag pair's VTEP IP.

VLAN Layer 3 Interface State on the vlag Pair

VLAN Layer 3 Interface State on the Leaf5 ToR

Local Host Entries on Each Leaf
Depending on the port-channel hashing on the server-facing links, the ARP entries may be learned on either node of the vlag pair. Make sure that all host entries are learned collectively across the vlag pair.

Remote Host Entries in the Extended VLAN
The output below from Leaf5 shows the BGP and ARP entries of a remote host behind the Leaf1 pair. Note that the next hop is set to the common VTEP IP of the vlag pair. There are two BGP entries, since there are two spines exchanging the EVPN routes. In the hardware ARP table, the local and remote entries are indicated with different types: local host entries are of type Dynamic, and remote host entries are of type BGP-EVPN. Note that the remote host entries are shown under the virtual interface of the local VLAN 20 on Leaf5 (not VLAN 100 as on the remote ToR).

VLAN Scoping at the Port Level Within a ToR
VLAN scoping is briefly discussed in VLAN Scoping. Port VLAN scoping enables complete abstraction of a bridge domain: the VLAN tags on the server-side data frames on two ports can be different, and traffic can still be bridged between the ports. The VLAN tag is localized at the port level rather than at the ToR level.
Refer to the topology shown in Figure 22. On the vlag leaf, there are two port channels, or LAG bundles: po111 and po112. Each carries server traffic tagged with an 802.1Q VLAN tag of 10 and 30, respectively. From the port VLAN scoping perspective, these tags are referred to as c-tags. Each {port, vlan} pair is added as a member of a virtual-fabric VLAN; in this case, the fabric VLAN ID is 6000. (Note that this number is above the 802.1Q VLAN range of 4096.) In summary, VLAN 6000 comprises two (port, vlan) members, unlike a traditional VLAN, whose members are ports alone:
(po111, vlan tag 10)
(po112, vlan tag 30)
On Leaf5, VLAN 40 is mapped to a VNI. On the Leaf1 pair, VLAN 6000 is mapped to the same VNI. Thus we provide Layer 2 extension within and between the leafs for server-side traffic carrying different dot1q VLAN tags.

FIGURE 22 VLAN Scoping at the Port Level Within a ToR

Configuration
The configuration steps are similar to the L2 extension illustrated in VLAN Scoping at the ToR Level. The difference is the virtual-fabric port-VLAN scoping on the vlag pair. In the sample port-VLAN scoping configuration, {po111, c-tag 10} and {Te 1/0/3, c-tag 20} are mapped to the same virtual-fabric VLAN. With this configuration, it is possible to bridge traffic between these ports with the specified dot1q tags. (Note that multiple c-tags on the same L2 port cannot be mapped to a VLAN.)
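A minimal sketch of the port-level c-tag classification on the vlag pair, assuming Network OS virtual-fabric syntax; the port, c-tag, and VNI values are illustrative and the exact command forms should be verified against the command reference:

  interface Vlan 6000                                 ! virtual-fabric VLAN, above the 802.1Q range
  !
  interface Port-channel 111
   switchport
   switchport mode trunk
   switchport trunk allowed vlan add 6000 ctag 10     ! frames with c-tag 10 join VLAN 6000
  interface Port-channel 112
   switchport
   switchport mode trunk
   switchport trunk allowed vlan add 6000 ctag 30     ! a different c-tag on another port, same VLAN
  !
  overlay-gateway gw1
   map vlan 6000 vni 8000                             ! the virtual-fabric VLAN maps to a VNI as usual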

Configuration on the Leaf1 vlag Pair
The configuration is shown in three parts for clarity:
Common configurations, such as the port channel and VLANs, are shown in one block.
The tenant, Layer 3 interface, and BGP EVPN configurations are shown in the second block under each RBridge ID.
The common overlay-gateway configuration is shown in the third block.
Please note that the entire configuration is applied from the primary node of this two-node vlag pair.


Configuration on Leaf5


Verification

Verify VLAN Extension Between the Racks
Check the L2-extended VLAN on each node. The output should show the local L2 trunk ports and also the tunnels to all remote VTEPs where the same VLAN segment is extended. In the output below from the Leaf1 vlag pair, there is one tunnel for VLAN 6000, which indicates that the same VLAN/VNI segment is provisioned on one other VTEP or ToR. The tunnel, Tu 61445, is destined to Leaf5, and there are four underlay next hops in the fabric to reach this tunnel destination.

In the output below from Leaf5, the tunnel is destined to the Leaf1 vlag pair's VTEP IP.

Local Host Entries on Each Leaf
Depending on the port-channel hashing on the server-facing links, the ARP entries may be learned on either node of the vlag pair. Make sure that all host entries are learned collectively across the vlag pair.

Remote Host Entries in the Extended VLAN

The output below from Leaf5 shows the BGP and ARP entries of the remote hosts behind the Leaf1 pair. Note that the next hop is set to the common VTEP IP of the vlag pair. There are two BGP entries, since there are two spines exchanging the EVPN routes. In the ARP table, local and remote entries are indicated with different types: BGP-EVPN for remote entries, signifying that they were learned over BGP EVPN, and Dynamic for local entries. Note that the remote host entries are imported under the virtual interface of the local VLAN 40 on Leaf5.

Route Leaking for the Service VRF
With network virtualization for multitenant environments, the tenant VRFs are typically extended to the border leaf, where they are connected through a firewall/NAT/LB appliance to a service VRF. This poses a VRF and interface scalability challenge on the border leaf. In such cases, we recommend provisioning multiple border leafs and distributing the tenants across them.

FIGURE 23 Services Provisioning on the Border Leaf

A service VRF with route leaking addresses the scalability requirements on the border leaf for certain controlled deployments. The routes to the services are leaked to the tenants in the fabric, and vice versa, without the need to extend the tenant VRFs to the border leaf. As shown in Figure 24, the edge leaf does not have the tenant VRFs provisioned on it. The routes from the tenants are imported into the service VRF, and the service VRF typically advertises a default route toward the tenants in the fabric. There are other possible variations of this approach: storage may be connected directly to the service VRF, and it is also possible to connect to the Internet directly from the service VRF if the tenants have globally scoped addresses or if address translation occurs elsewhere.

FIGURE 24 Service VRF with Route Leaking on the Border Leaf

Since the routes are leaked between the tenants and the service VRF, consider the following points:
Unique IP addressing is needed for the tenants.
Provisioning a per-tenant stateful firewall would be a challenge; one device must be able to handle all the transactions, so carefully consider the scale requirements of the firewall.
Intertenant traffic is possible through the service VRF because all routes are imported there. To prevent this, we recommend having the necessary safeguards inside the tenants.

FIGURE 25 Topology of the Service VRF with Route Leaking from Tenants

Figure 25 shows a part of the validated topology to illustrate route leaking between the tenant VRFs and the service VRF. There are two tenant VRFs in the fabric, vrf202 and vrf203, and vrf202 is also extended to Leaf5 (in other words, that tenant is provisioned on two racks). These tenants are expected to have access to a common service attached to the border leaf. The border leafs have been configured with a service VRF, and each VRF has its own L3 VNI for symmetric routing.

The routes from the tenants are leaked into the service VRF, and the routes from the service VRF are leaked into all tenant VRFs, using export and import route targets as summarized below.

Tenant vrf202 (L3VNI 7202) on the Leaf1 vlag pair and Leaf5: export RT 202:202; import RT 202:202; import RT 8190:8190.
Tenant vrf203 (L3VNI 7203) on the Leaf1 vlag pair (not provisioned on Leaf5): export RT 203:203; import RT 203:203; import RT 8190:8190.
Service VRF (L3VNI 8190) on Edge-Leaf1 and Edge-Leaf2: export RT 8190:8190; import RT 8190:8190; import RT 202:202; import RT 203:203.

As explained in the earlier sections on routing and in the tenant extension illustrations, when routes are exported or advertised from a VRF, the L3VNI associated with that VRF is carried with the route. This creates an asymmetry in the L3VNI numbers. For tenant vrf202:

The Leaf1 pair (VRF vrf202) advertises the EVPN type-5 prefix route for the tenant subnet and the type-2 host routes, with export RT 202:202, its own VTEP as the next hop, and L3VNI 7202. The routes received from the service VRF match import RT 8190:8190, but their L3VNI is 8190 and not 7202 (the L3VNI of vrf202), so the leaf must create a VE interface associated with VNI 8190.
Edge-Leaf1 and Edge-Leaf2 (VRF service) advertise the EVPN prefix routes 0/0 and the service subnet, with RT 8190:8190, their own VTEPs as the next hop, and L3VNI 8190. The tenant routes they receive match import RT 202:202, but their L3VNI is 7202 and not 8190 (the L3VNI of the service VRF), so the border leafs must create a VE interface associated with VNI 7202.

Similarly, for tenant vrf203:

The Leaf1 pair (VRF vrf203) advertises the EVPN type-5 prefix route for the tenant subnet and the type-2 host routes, with RT 203:203 and L3VNI 7203. The routes received from the service VRF match import RT 8190:8190, but their L3VNI is 8190 and not 7203, so the leaf creates a VE interface associated with VNI 8190.
Edge-Leaf1 and Edge-Leaf2 (VRF service) receive routes matching import RT 203:203 whose L3VNI is 7203 and not 8190, so they create a VE interface associated with VNI 7203, and they advertise the EVPN prefix routes 0/0 and the service subnet with export RT 8190:8190 and L3VNI 8190.

In summary:
- On the leafs, we must create one additional VE interface in the default VRF and associate it with a VNI number equal to the L3VNI of the service VRF.
- On the border leaf, for every tenant that is leaked into the service VRF, create a VE interface in the default VRF and associate it with a VNI number equal to the L3VNI of that tenant.
- These additional VNIs must be activated in the EVPN instance by the leafs and the border leafs.

The resulting additional VE/VNI provisioning is:
- Leaf1 pair: VNI 8190, VLAN/VE 8190 in the default VRF
- Leaf5: VNI 8190, VLAN/VE 8190 in the default VRF
- Border leafs: VNI 7202, VLAN/VE 7202 in the default VRF; VNI 7203, VLAN/VE 7203 in the default VRF

Configuration

The following sections provide the incremental configuration relevant to the route leaking between the service and the tenant VRFs. A default route and a subnet route are injected from the service VRF of the edge leaf into the fabric, and the tenants import them. The tenants' VLAN subnets and host routes are similarly imported by the service VRF.

Configuration on the Leaf1 vlag Pair

The Leaf1 vlag pair has both the vrf202 and vrf203 tenant VRFs.
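As a rough, hypothetical outline of the incremental pieces described for the leafs (the extra VLAN/VE 8190 in the default VRF and its activation in the EVPN instance), something along the following lines would apply on each RBridge of the vlag pair. The interface numbers follow the summary above, but the EVPN instance name and the exact keywords are assumptions and may differ from the Network OS release used in the validation.

! Additional VLAN/VE in the default VRF, numbered after the service VRF L3VNI
interface Vlan 8190
!
rbridge-id 1
 interface Ve 8190
  no shutdown
!
! Activate the additional VNI in the EVPN instance
rbridge-id 1
 evpn-instance evi1
  vni add 8190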


Configuration on Leaf5

Leaf5 has been provisioned with just the vrf202 tenant VRF.

Configuration on the Edge Leaf

The edge leaf is provisioned with only the service VRF. In this illustration, the edge leaf advertises two routes: a default route (say, to a service appliance) and a subnet route (say, of a VLAN connecting to a storage network).
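As a hedged illustration of the edge-leaf side, the outline below shows a static default route in the service VRF being redistributed into BGP together with the storage subnet, the per-tenant VE interfaces in the default VRF, and the activation of the tenant L3VNIs in the EVPN instance. The addresses are placeholders and the keywords approximate Network OS syntax; this is not the validated configuration itself.

! Static default route in the service VRF (toward the service appliance)
rbridge-id 3
 vrf Service
  address-family ipv4 unicast
   ip route 0.0.0.0/0 <appliance-next-hop>
!
! Advertise the default route and the storage subnet into BGP for the VRF
rbridge-id 3
 router bgp
  address-family ipv4 unicast vrf Service
   redistribute static
   network <storage-subnet>/24
!
! Per-tenant VLAN/VE interfaces in the default VRF, numbered after the
! tenant L3VNIs, and activation of those VNIs in the EVPN instance
interface Vlan 7202
interface Vlan 7203
!
rbridge-id 3
 interface Ve 7202
  no shutdown
 interface Ve 7203
  no shutdown
 evpn-instance evi1
  vni add 7202-7203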


Verification

Route Learning from the Service VRF into Tenants

In the topology used in this illustration, the service VRF advertises a default route and a subnet route toward the tenants in the fabric as EVPN type-5 prefix routes. The tenants (VRFs) on the leafs import these routes.

Route Origination from the Service VRF of the Edge Leaf:

Service VRF Routing Table

Service VRF BGP Entries Advertising the Routes into EVPN
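The verification uses the platform show commands; the short list below is indicative of the kinds of commands involved, and the exact command names and options are assumptions that vary by Network OS release.

! On the edge leaf: routes in the service VRF and their advertisement into EVPN
show ip route vrf Service
show bgp evpn routes
! On a leaf: routes imported into the tenant VRF and the state of the additional VE
show ip route vrf vrf202
show interface ve 8190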

Routes Received by the Leaf1 vlag Pair from the Service VRF:

EVPN Routes Received from Edge Leafs

There are two entries for the default route from each edge leaf, because there are two EVPN spines in the fabric. Also note that the Leaf1 vlag pair has both the vrf202 and vrf203 tenants, so the routes received from the edge leafs are imported into both VRFs. The following output is taken from one of the nodes in the vlag pair; the verification steps are the same on the second node.

VE Interface States

Tenant VRF vrf202

Tenant VRF vrf203

Routes Received by Leaf5 from the Service VRF

Leaf5 receives the routes advertised by the two edge leafs from its two EVPN spine neighbors. The CLI output shows the BGP entry for the default route.

Leaf5 Tenant VRF vrf202

Leaf5 imports the routes received from the service VRF into tenant VRF vrf202.


Route Learning into the Service VRF from Tenants

The service VRF on the edge leaf learns host and subnet routes from the tenants as EVPN type-2 and type-5 routes, respectively. Leaf1 advertises the subnet and host routes of tenants vrf202 and vrf203. Leaf5 advertises the subnet and host routes of tenant vrf202. Tenant vrf202 has the same subnet extended (L2 extension) between Leaf1 and Leaf5, so verification should also include the host entries to ensure that they point to the correct VTEP IP of the ToR to which the hosts are connected.

In summary:
- Leaf1 (tenants vrf202 and vrf203): advertises each tenant's subnet and the host routes of the locally attached hosts.
- Leaf5 (tenant vrf202 only; vrf203 is not provisioned): advertises the vrf202 subnet and its host routes.
- Edge leaf (VRF Service): installs the tenant subnets as trap routes and the host routes behind the corresponding VTEP next hop (the Leaf1 or Leaf5 VTEP IP), each associated with the VE/VNI of the originating tenant (7202 or 7203).

Edge-Leaf1

Note that the subnet routes in the route table point to the VTEP next hops, but in hardware they are programmed as trap entries to facilitate conversational host-route download into the hardware.

The EVPN entry for one of the subnets (/24) is shown below. This route is advertised by both the Leaf1 vlag pair (two nodes) and Leaf5 (an individual ToR). In the vlag pair, both nodes advertise the routes into BGP EVPN, so we see three BGP entries received from each of the two EVPN spines, for a total of six entries.

VE Interface States

Routes Received from Tenant vrf202

Routes Received from Tenant vrf203

vlag Active/Active Pair Leaf

Design Considerations

Scale

The following table gives the various scale parameters and the platforms used in this validated test topology. Note that this is not a measure of the maximum scale that can be supported with Extreme switches in an IP fabric.

Parameter | PoD1 | PoD2 | Border Leaf
Platform used as leaf | VDX S | VDX 6740 | VDX Q
Platform used as spine | VDX Q | VDX Q | N/A
Number of server racks/leafs | 8 | 8 | N/A
Number of spines | 4 | 4 | N/A
Number of tenant VRFs per rack | | |
Number of tenants local to the leaf (not extended to other racks) | 4 | 4 | N/A
Number of tenants extended within the PoD to all racks | | | N/A
Number of server VLAN segments per rack | | | N/A
Number of VLANs used for L3 VNI of tenant VRFs per rack | | |
Number of L2 VNIs per rack | | | N/A
Number of L2 VNIs (server VLAN segments) extended within the PoD to all leafs/racks | | | N/A
ARP-suppressed VLANs per leaf/rack | | | N/A
ND-suppressed VLANs per leaf/rack | | | N/A

Fabric-wide parameters:
- Platform used as super-spine: VDX Q
- Number of super-spines: 4
- Number of tenants extended between the PoDs: 16

ARP/ND Suppression Guidelines
- This feature is enabled on a per-VLAN basis. Enabling it consumes entries in the hardware ACL table, a resource that is shared with other ACL features.
- ARP/ND suppression is needed only on server-facing VLANs.
- Enable ARP/ND suppression on both nodes of a vlag pair. On individual non-redundant leafs, suppression is required only if the VLAN is L2-extended to other leafs.
- Use the DAI TCAM profile. With this profile, the validated scale is 64 VLANs for IPv4 and 12 VLANs for IPv6 per leaf/rack.

- In the case of a vlag pair, the profile configuration must be set on each RBridge in the pair.

Recommendations for ISL Ports in a vlag Pair Leaf
- We recommend picking ISL ports from the same port group on the switch. Port-group information for the leaf platforms is given in the Extreme VDX hardware installation guides.
- For redundancy, we recommend a minimum of two ISL ports between the switches in the vlag pair.
- The bandwidth requirement for the ISL links depends on the number of fabric links and the traffic pattern. The ISL links are primarily used for routed traffic received over the L3 VNI, depending on the router MAC used in the data packet. A good rule of thumb is to provision ISL links with half the bandwidth of the fabric links. For example, if there are four 40G fabric links on each switch, provision two 40G links as the ISL between the switches.

Fabric Link Tracking on a vlag Pair

With BGP EVPN network virtualization, two spines are designated to exchange EVPN AFI routes. Loss of both links connecting to these EVPN spines would result in a traffic black hole for the tenants. In a vlag ToR, we can prevent this by tracking the links to the EVPN spines and isolating the node from the fabric if it loses those links, by shutting down the remaining fabric links and the server port-channel member ports.

On each node of the vlag pair, identify the links connected to the spines that exchange EVPN routes. Track these links under the other fabric links and the server-facing port-channel member ports. The steps are shown in the following captures from one of the nodes in a vlag leaf; repeat the steps on the other node as well.

Track these two links under the remaining fabric ports.

Track them under the server-facing port-channel member ports.

L2 Loop Detection and Prevention

Extreme leaf platforms provide two options for L2 loop detection and prevention:
- Detect a MAC move and shut down the L2 port.
- Use the BGP EVPN dampening mechanism for L2 (MAC) routes.

We recommend the following configuration to make the L2 port-shutdown take precedence. With this configuration, the L2 port is shut down if a MAC address moves 5 times within an interval of 10 seconds.

BGP TTL Security

This is applicable to eBGP peering only. The configuration can be applied to a specific neighbor or to a peer group.
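As a purely illustrative sketch of BGP TTL security (GTSM, RFC 5082), something along the following lines would be applied per neighbor or per peer group. The neighbor address is a placeholder, and the exact keyword on the validated Network OS release may differ.

rbridge-id 1
 router bgp
  neighbor 10.0.0.1 ebgp-btsh    ! accept only packets from the directly connected eBGP peer

With TTL security enabled, the session only accepts eBGP packets whose TTL shows they originated from the directly connected peer, which protects the peering from spoofed packets injected from farther away.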

EXTREME VALIDATED DESIGN

Network Virtualization in IP Fabric with BGP EVPN
Version 2.0
9035383
February 2018
