Network Virtualization in IP Fabric with BGP EVPN


EXTREME VALIDATED DESIGN

Network Virtualization in IP Fabric with BGP EVPN

Version: February 2018

2018, Extreme Networks, Inc. All Rights Reserved. Extreme Networks and the Extreme Networks logo are trademarks or registered trademarks of Extreme Networks, Inc. in the United States and/or other countries. All other names are the property of their respective owners. For additional information on Extreme Networks Trademarks please see Specifications and product availability are subject to change without notice.

2017, Brocade Communications Systems, Inc. All Rights Reserved. Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other countries. Other brands, product names, or service names mentioned of Brocade Communications Systems, Inc. are listed at Other marks may belong to third parties. Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government. The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the accuracy of this document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that accompany it. The product described by this document may contain open source software covered by the GNU General Public License or other open source license agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and obtain a copy of the programming source code, please visit

Contents

- Contents
- List of Figures
- Preface
  - Extreme Validated Designs
  - Purpose of This Document
  - Target Audience
  - Authors
  - Document History
- Introduction
- Technology Overview
  - Terminology
  - Functional Components of IP Fabric
    - Leaf-Spine Layer 3 Clos Topology (Two-Tier)
    - Optimized 5-Stage Layer 3 Clos Topology (Three-Tier)
    - Edge Services and Border Leafs
  - Extreme IP Fabric Underlay Routing
  - Network Virtualization with BGP VxLAN Based EVPN
- Validated Designs
  - ebgp Deployment Model
  - Hardware and Software Matrix
  - 3-Stage Fabric
    - Fabric Infrastructure Configuration
    - BGP Underlay Configuration
    - BGP EVPN and VxLAN Overlay
  - 5-Stage Fabric
    - Spine Configuration
    - Super-Spine Configuration
    - Edge-Leaf Configuration
- Use Cases
  - Simple 3-stage BGP VxLAN Based EVPN Fabric Illustration (Configuration, Verification)
  - L2 and L3 Extension between Racks (Configuration, Verification)
  - VLAN Scoping at the ToR Level (Configuration, Verification)
  - VLAN Scoping at the Port Level within a ToR (Configuration, Verification)
  - Layer-2 Handoff with VPLS (Configuration, Verification)
  - Layer-3 Handoff with MPLS/L3VPN (Configuration, Verification)

- Design Considerations
  - BGP Route scale considerations
  - BGP TTL Security
- Appendix 1: VDX Leaf configuration
  - Node ID Configuration
  - IP Fabric Infrastructure Links
  - Loopback Interfaces
  - vlag Pair/ToR
    - Node ID Configuration on vlag Pair
    - ISL Configuration
    - Server Port-Channel Configuration
    - Loopback interfaces
    - BGP Configuration
  - Tenant Provisioning
    - Anycast Gateway MAC Configuration
    - Enable Conversational Learning of MAC and ARP/ND Host Entries
    - VRFs, Server VLANs, and Subnets Configuration
    - Advertise Tenant Layer 3 Routes from the Leaf
    - Enable EVPN Instance for the Tenant VLAN Segments
    - vlag Pair Configuration
- References

List of Figures

- Figure 1  Leaf-Spine L3 Clos Topology
- Figure 2  Optimized 5-Stage L3 Clos Topology
- Figure 3  ebgp for Underlay
- Figure 4  ibgp for Underlay
- Figure 5  VTEPs and L2 Extension with Flood and Learn
- Figure 6  Routing Between VxLAN networks in Flood and Learn topology
- Figure 7  VTEPs and L2 Extension with BGP EVPN Control-plane
- Figure 8  ARP Suppression
- Figure 9  VLAN Scoping at the Leaf/ToR Level
- Figure 10 VLAN Scoping at the Port Level within a ToR
- Figure 11 Asymmetric IRB
- Figure 12 Symmetric IRB
- Figure 13 Multitenancy
- Figure 14 MCT Pair for Dual-homing and Leaf Redundancy
- Figure 15 EBGP based 3-stage IP Fabric
- Figure 16 EBGP based Optimized 5-stage IP Fabric
- Figure 17 Illustration topology for a simple 3-stage fabric
- Figure 18 L2/L3 Extension between Racks
- Figure 19 VLAN Scoping at the ToR Level
- Figure 20 Port VLAN scoping within the ToR
- Figure 21 Layer-2 Handoff with VPLS
- Figure 22 Layer-3 Handoff with MPLS/L3VPN


Preface

Extreme Validated Designs

Helping customers consider, select, and deploy network solutions for current and planned needs is our mission. Extreme Validated Designs offer a fast track to success by accelerating that process. Validated designs are repeatable reference network architectures that have been engineered and tested to address specific use cases and deployment scenarios. They document systematic steps and best practices that help administrators, architects, and engineers plan, design, and deploy physical and virtual network technologies. Leveraging these validated network architectures accelerates deployment speed, increases reliability and predictability, and reduces risk.

Extreme Validated Designs incorporate network and security principles and technologies across the ecosystem of service provider, data center, campus, and wireless networks. Each Extreme Validated Design provides a standardized network architecture for a specific use case, incorporating technologies and feature sets across Extreme products and partner offerings. All Extreme Validated Designs follow best-practice recommendations and allow for customer-specific network architecture variations that deliver additional benefits. The variations are documented and supported to provide ongoing value, and all Extreme Validated Designs are continuously maintained to ensure that every design remains supported as new products and software versions are introduced. By accelerating time-to-value, reducing risk, and offering the freedom to incorporate creative, supported variations, these validated network architectures provide a tremendous value-add for building and growing a flexible network infrastructure.

Purpose of This Document

This Extreme validated design provides guidance for designing and implementing an IP fabric in a data center network using Extreme hardware and software. It details the Extreme reference architecture for enabling network virtualization using VXLAN-based BGP EVPN. Note that not all features, such as automation practices, zero-touch provisioning, and monitoring of the Extreme IP Fabric, are covered in this document; future versions of this document are planned to include these aspects of the Extreme IP Fabric solution. The design practices documented here follow best-practice recommendations, but there are variations to the design that are supported as well.

Target Audience

This document is written for Extreme systems engineers, partners, and customers who design, implement, and support data center networks. It is intended for experienced data center architects and engineers. It assumes that the reader has a good understanding of data center switching and routing features and of Multi-Protocol BGP/MPLS VPN [5], which underpins multitenancy in VXLAN EVPN networks.

Authors

Krish Padmanabhan, Sr. Principal Engineer, System and Solution Engineering
Eldho Jacob, Principal Engineer, System and Solution Engineering

The authors would like to acknowledge the following at Extreme Networks for their technical guidance in developing this validated design:

Abdul Khader, Director, System and Solution Engineering
Vivek Baveja, Director, Product Management

The authors would also like to acknowledge the following for their meticulous review of the document:

Wim van Laarhoven
Lavanya Venkatesan

Document History

Date            Part Number   Description
February 2018                 Network Virtualization with BGP EVPN in IP Fabric (SLX platforms)

Introduction

Extreme has expanded its product portfolio with SLX platforms and SLX-OS positioned for network virtualization architectures to meet the growing customer demand for higher levels of scale, agility, and operational efficiency. This document describes cloud-optimized network designs using Extreme IP Fabrics for building data-center sites. The configurations and design practices documented here are fully validated and conform to the Extreme IP Fabric reference architectures. The intention of this Extreme Validated Design document is to provide reference configurations and document best practices for building cloud-scale data-center networks using Extreme SLX switches and Extreme IP Fabric architectures.

This document describes the following architectures:

- Extreme IP Fabric deployed in 3-stage and optimized 5-stage folded Clos topologies
- Network virtualization and multi-tenancy using BGP EVPN in these 3-stage and 5-stage fabrics


Technology Overview

Extreme IP Fabric provides a Layer 3 Clos deployment architecture for data center sites. In an Extreme IP Fabric, all links in the Clos topology are Layer 3 links. It includes the networking architecture; the protocols used to build the network; turnkey automation features used to provision, manage, and monitor the networking infrastructure; and the hardware differentiation with Extreme SLX and VDX switches. The following sections describe the validated design for data center sites with Extreme IP Fabrics. Because the infrastructure is built on IP, advantages like the following are leveraged: loop-free communication using industry-standard routing protocols, ECMP, very high solution scale, and standards-based interoperability.

Terminology

Term     Description
ARP      Address Resolution Protocol
AS       Autonomous System
ASN      Autonomous System Number
BD       Bridge Domain
BFD      Bidirectional Forwarding Detection
BGP      Border Gateway Protocol
BUM      Broadcast, Unknown unicast, and Multicast
CE       Classical Ethernet
DCI      Data Center Interconnect
ebgp     External Border Gateway Protocol
ECMP     Equal Cost Multi-Path
EVPN     Ethernet Virtual Private Network
GVLAN    Global VLAN
ibgp     Internal Border Gateway Protocol
IP       Internet Protocol
IRB      Integrated Routing and Bridging
MAC      Media Access Control
MP-BGP   Multi-Protocol Border Gateway Protocol
MPLS     Multi-Protocol Label Switching
ND       Neighbor Discovery
NLRI     Network Layer Reachability Information
OpEx     Operational Expenses
PE       Provider Edge Router
PoD      Point of Delivery
QoS      Quality of Service
RD       Route Distinguisher
RIOT     Routing In/Out of Tunnels (in a single pass in the ASIC or SoC)
RT       Route Target
ToR      Top of Rack switch
UDP      User Datagram Protocol
vlag     Virtual Link Aggregation Group
VLAN     Virtual Local Area Network
VM       Virtual Machine
VNI      VXLAN Network Identifier
VPLS     Virtual Private LAN Service
VPN      Virtual Private Network
VRF      VPN Routing and Forwarding instance
VTEP     VXLAN Tunnel Endpoint
VXLAN    Virtual Extensible Local Area Network

Functional Components of IP Fabric

Leaf-Spine Layer 3 Clos Topology (Two-Tier)

The leaf-spine topology has become the de facto standard for networking topologies when building medium- to large-scale data center infrastructures. The leaf-spine topology is adapted from Clos telecommunications networks. The Extreme IP Fabric within a PoD resembles a two-tier or 3-stage folded Clos fabric. The two-tier leaf-spine topology is shown in Figure 1. The bottom layer of the IP fabric has the leaf devices (top-of-rack switches), and the top layer has the spines.

The role of the leaf is to provide connectivity to the endpoints in the data center network. These endpoints include tenant workloads such as compute and storage devices, as well as other networking devices like routers, switches, load balancers, firewalls, and any other physical or virtual networking endpoints. Because all endpoints connect only to the leaf, policy enforcement, including security, traffic-path selection, QoS marking, traffic policing, and shaping, is implemented at the leaf. More importantly, leafs act as the first-hop gateways with anycast gateway addresses for the server segments to facilitate mobility with the VXLAN overlay. A set of leafs act as border-leafs or edge-leafs that provide connectivity to services such as firewalls, load balancers, and storage, as well as external connectivity to the PoD.

The role of the spine is to provide connectivity between leafs. The major role of the spine is to participate in the control-plane and data-plane operations for traffic forwarding between leafs. The spine devices serve two purposes: the BGP control plane (route distribution using the BGP protocol and its extensions), and data-plane IP forwarding based on the outer IP header in the underlay network. Since there are no network endpoints connected to the spine, tenant VRFs or VXLAN segments are not created on spines. Their routing table size requirements are also very light, accommodating just the underlay reachability. Note that not all spine devices need to act as BGP route reflectors; only selected spines in the spine layer can act as BGP route reflectors in the overlay design. More details are provided in BGP EVPN Control Plane.

As a design principle, the following requirements apply to the leaf-spine topology:

- Each compute/storage rack has a leaf or ToR (Top of Rack) switch. The rack may have a pair of redundant switches in an MCT/vLAG pair, referred to as a dual ToR, an MCT pair ToR, or a vLAG pair ToR. A dual ToR provides node and link redundancy to the workloads in the rack.
- Leafs and border-leafs connect to all spines in the PoD. These links are referred to as Fabric Infrastructure Links.
- Spines are not interconnected with each other.
- Leafs are not interconnected with each other for data-plane purposes. (Leafs in an MCT or vLAG pair are interconnected for control-plane operations such as forming a server-facing LAG.)
- The network endpoints do not connect to the spines.

This type of topology provides predictable latency as well as ECMP forwarding in the underlay network. The number of hops between two leaf devices is always two within the fabric. This topology also enables easier scale-out in the horizontal direction as the data center expands and is limited by the port density and bandwidth supported by the spine devices; a simple sizing sketch follows below. This validated design recommends the same hardware in the spine layer. Mixing different hardware is not recommended.
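Because fabric capacity in a two-tier Clos follows directly from these wiring rules (every leaf connects to every spine, and spines are never interconnected), the maximum size can be estimated with simple arithmetic. The following Python sketch is purely illustrative; the port counts used are hypothetical examples and do not describe any specific SLX platform.

```python
# Illustrative back-of-the-envelope sizing for a two-tier (3-stage) leaf-spine Clos.
# Port counts below are hypothetical examples, not a specific SLX platform.

def clos_capacity(spine_ports: int, leaf_uplinks: int, leaf_server_ports: int) -> dict:
    """Return the maximum fabric size implied by the Clos wiring rules:
    every leaf connects to every spine, so the number of spines equals the
    number of uplinks per leaf, and the number of leafs is capped by the
    port count of a single spine."""
    max_spines = leaf_uplinks              # one uplink per spine from each leaf
    max_leafs = spine_ports                # each spine needs one port per leaf
    return {
        "spines": max_spines,
        "leafs": max_leafs,
        "server_ports": max_leafs * leaf_server_ports,
        "ecmp_paths_between_leafs": max_spines,  # one two-hop path via each spine
    }

if __name__ == "__main__":
    # Example: 32-port spines, leafs with 4 uplinks and 48 server-facing ports.
    print(clos_capacity(spine_ports=32, leaf_uplinks=4, leaf_server_ports=48))
```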
IP Fabric Infrastructure Links

All fabric nodes (leafs, spines, and border-leafs) are interconnected with Layer 3 IPv4 interfaces. In the validated design, 40-GbE links are used between the fabric nodes. All of these links are configured as Layer 3 interfaces with a /31 IPv4 address.

The MTU for these links is set to jumbo MTU. This is a requirement to handle the VXLAN encapsulation of Ethernet frames. Multiple parallel links between two nodes in the fabric must be avoided.

Server-Facing Links

The server-facing or access links are on the leaf nodes connecting the workloads. These links are either individual links or a LAG in the case of a dual-ToR. In the validated design, 10/25-GbE links are used as trunk ports with associated VLANs. Spanning tree is typically disabled for server connectivity unless there are downstream L2 switches.

Figure 1 Leaf-Spine L3 Clos Topology

Optimized 5-Stage Layer 3 Clos Topology (Three-Tier)

Multiple PoDs based on leaf-spine topologies can be connected for higher scale in an optimized 5-stage folded Clos (three-tier) topology. This topology adds a new tier to the network, known as a super-spine. This architecture is recommended for interconnecting several EVPN VXLAN PoDs. Super-spines function similarly to spines: BGP control plane and IP forwarding based on the outer IP header in the underlay network. No endpoints are connected to the super-spine. Figure 2 shows four super-spine switches connecting the spine switches across multiple data center PoDs. The connection between the spines and the super-spines follows the Clos principles:

- Each spine connects to all super-spines in the network. In the validated design, both 40-GbE and 100-GbE links were tested independently. It is not recommended to mix links of different bandwidths between two layers of the IP fabric.
- Border-leafs connect to the super-spines.
- Neither spines nor super-spines are interconnected with each other.
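As a sketch of how the /31 point-to-point addressing of the fabric infrastructure links can be planned, the snippet below carves consecutive /31 subnets out of an underlay block. The 10.10.0.0/24 block, the device names, and the address ordering are illustrative assumptions, not values prescribed by this design.

```python
# Sketch: allocate /31 subnets for the leaf-spine fabric infrastructure links.
# The 10.10.0.0/24 underlay block and device names are illustrative assumptions.
import ipaddress
from itertools import product

underlay = ipaddress.ip_network("10.10.0.0/24")
p2p_blocks = underlay.subnets(new_prefix=31)     # generator of /31 link subnets

leafs = ["leaf1", "leaf2", "leaf3", "leaf4"]
spines = ["spine1", "spine2"]

link_plan = {}
for (leaf, spine), block in zip(product(leafs, spines), p2p_blocks):
    spine_ip, leaf_ip = list(block)              # a /31 has exactly two usable addresses
    link_plan[(leaf, spine)] = {"leaf": f"{leaf_ip}/31", "spine": f"{spine_ip}/31"}

for link, addrs in sorted(link_plan.items()):
    print(link, addrs)
```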

Figure 2 Optimized 5-Stage L3 Clos Topology

Edge Services and Border Leafs

For two-tier and three-tier data center topologies, the role of the border leaf in the network is to provide external connectivity to the data center site. In addition, since all traffic enters and exits the data center through the border leaf switches, they present the ideal location in the network to connect network services like firewalls, load balancers, and edge VPN routers. The border leaf switches connect to the WAN edge devices in the network to provide external connectivity to the data center site. As a design principle, two border leaf switches are recommended for redundancy.

The WAN edge devices provide the interfaces to the Internet and DCI solutions. For DCI, these devices function as the Provider Edge (PE) routers, enabling connections to other data center sites through WAN technologies like Multiprotocol Label Switching (MPLS) VPN and Virtual Private LAN Service (VPLS). The Extreme validated design for DCI solutions is discussed in a separate validated design document.

There are several ways that the border leafs connect to the data center site. In three-tier (super-spine) architectures, the border leafs are connected to the super-spines, as depicted in Figure 2. In two-tier topologies, the border leafs are connected to the spines, as depicted in Figure 1. Certain topologies may use the spines as border leafs (known as border spines), overloading two functions into one. This topology adds additional forwarding requirements to the spines: they need to be aware of the tenants, VNIs, and VXLAN tunnel encapsulation and de-encapsulation functions.

Extreme IP Fabric Underlay Routing

IP fabric collectively refers to the following:

- IPv4 network address assignments to the links connecting the nodes in the fabric: spines, leafs, super-spines, and border leafs.
- The control-plane protocol used for reachability between the nodes. A smaller-scale topology might benefit from a link-state protocol such as OSPF. (Note that this is not validated in this design, though it is supported.) Large-scale topologies, however, typically use BGP. The Extreme validated design recommends BGP as the protocol for underlay network reachability.
- Resiliency features such as BFD.

There are several underlay deployment options. When using BGP as the only routing protocol in the fabric, there are two models:

- ebgp for Underlay: ebgp peering between each tier of nodes: between the leaf and the spine; between the spine and the super-spine; and between the super-spine and the border leaf.
- ibgp for Underlay: ibgp peering between the leaf and the spine within the PoD, with spines acting as BGP route reflectors. ebgp peering between the PoDs through the super-spine layer for inter-PoD reachability. (Note that this is not validated in this design, though it is supported.)

ebgp for Underlay

This deployment model refers to the usage of ebgp peering between the leaf and the spine in the fabric. In this model, each leaf node is assigned its own autonomous system (AS) number. The other nodes are grouped based on their role in the fabric, and each of these groups is assigned a separate AS number, as shown in Figure 3. Using ebgp in an IP fabric is simple and also provides the ability to apply BGP policies for traffic engineering on a per-leaf or per-rack basis, since each leaf or rack in a PoD is assigned a unique AS number.

Private AS numbers are used in this validated design. One design consideration for the AS number assignment is that the 2-byte AS space provides a maximum of 1023 private AS numbers (ASN 64512 to ASN 65534). We recommend using 4-byte private AS numbers (ASN 4,200,000,000 to 4,294,967,294) for scalability when there are multiple PoDs. The AS number assignment follows these rules (a simple allocation sketch follows this list):

- Each leaf in a PoD is assigned its own AS number. (Note that a dual-ToR is considered a single leaf, and both nodes in this pair will have the same AS number.)
- All spines inside a PoD belong to one AS.
- All super-spines are configured in one AS.
- Edge or border leafs belong to a separate AS.
- Each leaf peers with all spines using ebgp.
- Each spine peers with all super-spines using ebgp.
- There is no ebgp peering between leafs.
- There is no ebgp peering between spines.
- There is no ebgp peering between super-spines.
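The following Python sketch shows one way such an allocation could be laid out programmatically. The base values and the per-PoD numbering scheme are arbitrary assumptions for illustration; they are not values prescribed by this design.

```python
# Sketch: assign private AS numbers per the ebgp underlay model described above.
# One ASN per leaf (a vLAG/MCT pair shares one), one ASN for all spines, one for
# super-spines, one for border leafs. The base values chosen here are arbitrary
# examples from the 4-byte private range (4200000000-4294967294).

SPINE_ASN = 4200000000
SUPER_SPINE_ASN = 4200000001
BORDER_LEAF_ASN = 4200000002
LEAF_ASN_BASE = 4200001000

def leaf_asn(pod: int, leaf_index: int, leafs_per_pod: int = 100) -> int:
    """Each leaf (or vLAG pair) gets its own ASN, grouped per PoD."""
    return LEAF_ASN_BASE + pod * leafs_per_pod + leaf_index

# Example: ASN for the 3rd leaf in PoD 2.
print(leaf_asn(pod=2, leaf_index=3))   # -> 4200001203
```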

Figure 3 ebgp for Underlay

ibgp for Underlay

In this deployment model, each PoD and the edge services PoD is configured with a unique AS number, as shown in Figure 4. The spines and leafs in a PoD are configured with the same AS number. The ibgp design is different from the ebgp design because ibgp must be fully meshed with all BGP-enabled devices in an IP fabric. In order to avoid the full mesh of BGP peering, route reflectors must be used in the fabric. Each spine acts as a route reflector for underlay IPv4 routes to the leaf nodes inside the PoD. ibgp peering is between the spine and the leaf in a PoD, and all spines in a PoD act as BGP route reflectors to the leafs for the underlay. Note that this model is given for informational purposes only and is not part of this validated design. However, the ibgp model is supported.
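To illustrate why route reflectors are needed in the ibgp model, the short sketch below compares the number of iBGP sessions in a full mesh with the number needed when leafs peer only with spine route reflectors. The PoD size used is an arbitrary example.

```python
# Sketch: iBGP session count with a full mesh versus spine route reflectors.
# Shows why route reflectors are used in the ibgp underlay model.

def full_mesh_sessions(n_speakers: int) -> int:
    # Every iBGP speaker peers with every other speaker.
    return n_speakers * (n_speakers - 1) // 2

def rr_sessions(n_leafs: int, n_route_reflectors: int) -> int:
    # Each leaf peers only with the route reflectors (spines).
    return n_leafs * n_route_reflectors

# Example PoD: 40 leafs + 4 spines, with all 4 spines acting as route reflectors.
print(full_mesh_sessions(44))      # 946 sessions in a full mesh
print(rr_sessions(40, 4))          # 160 sessions with spine route reflectors
```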

Figure 4 ibgp for Underlay

Network Virtualization with BGP VxLAN Based EVPN

Network virtualization is the process of creating virtual, logical networks on physical infrastructures. With network virtualization, multiple physical networks can be consolidated to form a logical network. Conversely, a physical network can be segregated to form multiple virtual networks. Virtual networks are created through a combination of hardware and software elements spanning the networking, storage, and computing infrastructure. Network virtualization solutions leverage the benefits of software in terms of agility and programmability, along with the performance acceleration and scale of application-specific hardware.

Virtual Extensible LAN (VXLAN) is an overlay technology that provides Layer 2 connectivity for workloads residing across the data center network. VXLAN creates a logical network overlay on top of physical networks, extending Layer 2 domains across Layer 3 boundaries. VXLAN decouples the virtual topology provided by the VXLAN tunnels from the physical topology of the network. It leverages Layer 3 benefits in the underlay, such as load balancing on redundant links, which leads to higher network utilization. In addition, VXLAN provides a large number of logical network segments, allowing for large-scale multitenancy in the network. VXLAN is based on the IETF RFC 7348 standard. VXLAN has a 24-bit Virtual Network ID (VNI) space, which allows for 16 million logical networks, compared to a traditional VLAN, which supports a maximum of 4096 logical segments. VXLAN eliminates the need for Spanning Tree Protocol (STP) in the data center network, and it provides increased scalability and improved resiliency. VXLAN has become the de facto standard for overlays that are terminated on physical switches or virtual network elements.

The traditional Layer 2 extension mechanisms using VXLAN rely on "Flood and Learn" mechanisms. These mechanisms are very inefficient, delaying MAC address convergence and resulting in unnecessary flooding. Also, in a data center environment with VXLAN-based Layer 2 extension mechanisms, a Layer 2 domain and an associated subnet might exist across multiple racks and even across all racks in a data center site. With traditional underlay routing mechanisms, routed traffic destined to a VM or a host belonging to the subnet follows an inefficient path in the network, because the network infrastructure is aware only of the existence of the distributed Layer 3 subnet; it is not aware of the exact location of the hosts behind a leaf switch.

With Extreme BGP-EVPN, network virtualization is achieved by creating a VXLAN-based overlay network. It leverages BGP EVPN to provide a control plane for the virtual overlay network. BGP EVPN enables control-plane learning for end hosts behind remote VXLAN tunnel endpoints (VTEPs). This learning includes reachability for Layer 2 MAC addresses and Layer 3 host routes. Some key features and benefits of Extreme BGP-EVPN network virtualization are summarized as follows:

- Active-active MCT/vLAG pairs: Multi-chassis port channels for dual homing of network endpoints are supported at the leaf. Both switches in an MCT/vLAG pair participate in the BGP-EVPN operations and are capable of actively forwarding traffic.
- Static anycast gateway: With static anycast gateway technology, each leaf is assigned the same default gateway IP and MAC addresses for all connected subnets. This ensures that local traffic is terminated and routed at Layer 3 at the leaf.
  This also eliminates any suboptimal inefficiencies found with centralized gateways. All leafs are simultaneously active forwarders for all default traffic for which they are enabled. Also, because the static anycast gateway does not rely on any control-plane protocol, it can scale to large deployments.
- Efficient VXLAN routing: With the gateway moved to the leaf, routing of packets between VXLAN networks occurs at the leaf. Routed traffic from the network endpoints is terminated at the leaf and is then encapsulated in the VXLAN header to be sent to the remote site. Similarly, traffic from the remote leaf node is VXLAN-encapsulated and gets decapsulated and routed to the destination. This VXLAN routing operation into and out of the tunnel (RIOT) on the leaf switches is enabled in the Extreme SLX and VDX platform ASICs. VXLAN routing performed in a single pass is more efficient than in competitive ASICs.
- Data-plane IP and MAC learning with control-plane distribution: With IP host routes and MAC addresses learned from the data plane and advertised with BGP EVPN, the leaf switches are aware of the reachability of the hosts in the network. Any traffic destined to the hosts takes the most efficient route in the network.
- Layer 2 and Layer 3 multitenancy: BGP EVPN provides the control plane for VRF routing and for Layer 2 VXLAN extension. BGP EVPN enables a multitenant infrastructure and extends it across the data center to enable traffic isolation between the Layer 2 and Layer 3 domains, while providing efficient routing and switching between the tenant endpoints.
- Dynamic tunnel discovery: With BGP EVPN, the remote VTEPs are automatically discovered. The resulting VXLAN tunnels are also automatically created. This significantly reduces operational expense (OpEx) and eliminates errors in configuration.

- ARP/ND suppression: The BGP-EVPN EVI leafs discover remote IP and MAC addresses and use this information to populate their local ARP tables. Using these entries, the leaf switches respond to any local ARP queries. This eliminates the need for flooding ARP requests in the network infrastructure.
- Conversational ARP/ND learning: Conversational ARP/ND reduces the number of cached ARP/ND entries by programming only active flows into the forwarding plane. This helps to optimize utilization of hardware resources. In many scenarios, the software requirements for ARP and ND entries exceed the hardware capacity. Conversational ARP/ND limits storage-in-hardware to active ARP/ND entries; aged-out entries are deleted automatically.
- VM mobility support: If a VM moves behind a leaf switch, with data-plane learning, the leaf switch discovers the VM and learns its addressing information. It advertises the reachability to its peers, and when the peers receive the updated information for the reachability of the VM, they update their forwarding tables accordingly. BGP-EVPN-assisted VM mobility leads to faster convergence in the network.
- Open standards and interoperability: BGP EVPN is based on open standard protocols and is interoperable with implementations from other vendors. This allows the BGP-EVPN-based solution to fit seamlessly into a multivendor environment.

VXLAN Layer 2 Extension Using Flood and Learn

Let's consider the simple topology shown in Figure 5, which represents VXLAN extension, to understand how VXLAN flood and learn works before going into the details of control-plane-based VXLAN using BGP EVPN and the various network functions that the EVPN control plane enables.

Figure 5 VTEPs and L2 Extension with Flood and Learn

A VXLAN tunnel endpoint (VTEP) may be implemented in hardware (a leaf or ToR switch) or in virtualized environments. Each VTEP has a unique IP address and MAC address. Each VTEP can reach other VTEPs over the underlay IP network. Each VTEP has its own end host/server segment connected to it. In this topology, all hosts belong to one Layer 2 broadcast domain or, in simple terms, one VLAN and one IP subnet. The local VLAN numbers may be different on each VTEP, but they are bound to one VNI number, which is common on all VTEPs. So for all practical purposes, the LAN segment is now identified by a VXLAN VNI, and the VLAN numbers are only locally significant.

The logical dashed lines shown inside the IP network between the VTEPs represent the head-end or ingress replication paths. These are used to send what is known as BUM traffic: Broadcast, Unknown unicast, and Multicast frames on the Layer 2 segment. The VTEP unicasts these packets to all other VTEPs connected to a VXLAN segment. This may require additional configuration or provisioning of tunnels on each VTEP device to all other devices.
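Before walking through the learning example, it can help to see what the VXLAN encapsulation itself looks like. The following Python sketch builds the 8-byte RFC 7348 VXLAN header that carries the 24-bit VNI; it is a conceptual illustration only, and the payload bytes are placeholders.

```python
# Sketch: build an RFC 7348 VXLAN header carrying a 24-bit VNI.
# The VXLAN header is 8 bytes: flags (I bit set), reserved, VNI, reserved.
import struct

VXLAN_UDP_PORT = 4789   # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08          # 'I' flag: VNI field is valid
    # Byte layout: flags(1) + reserved(3) + vni(3) + reserved(1)
    return struct.pack("!B3s3sB", flags, b"\x00" * 3, vni.to_bytes(3, "big"), 0)

def encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    # Outer Ethernet/IP/UDP headers are added by the VTEP's IP stack; only the
    # VXLAN shim and payload are shown here.
    return vxlan_header(vni) + inner_ethernet_frame

print(encapsulate(b"\x00" * 64, vni=10).hex()[:16])  # first bytes: 08 00 00 00 00 00 0a 00
```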

Let's consider that H1 wants to communicate with H2:

- H1 sends an ARP request. VTEP-A learns H1 as a local MAC and also maps this host to the VNI. Because the packet is a broadcast packet, it is encapsulated into a VXLAN packet and replicated; it is then unicast to each of the remote VTEPs participating in this VNI segment. The outer source IP is set to the IP address of VTEP-A, and the outer destination IP is the remote VTEP IP. This packet is sent to every VTEP.
- VTEP-B and VTEP-C decapsulate the packet and flood it into their local VXLAN network. They also learn three pieces of information: the source IP of VTEP-A, the inner source MAC of H1, and the VNI. This creates an L2-MAC-to-VTEP-IP binding: {MAC H1, VTEP-IP of VTEP-A, VNI 10}.
- When H2 responds to the ARP request, the packet is unicast to H1. This packet is encapsulated in a VXLAN packet by VTEP-B and sent as a unicast IP packet based on its routing table, with the outer IP header destination set to the IP address of VTEP-A and the source set to the IP address of VTEP-B.
- VTEP-A decapsulates the packet and sends it to H1. It also creates an L2-MAC-to-VTEP-IP binding: {MAC H2, VTEP-IP of VTEP-B, VNI 10}.
- Now the communication between H1 and H2 will be unicast. VTEP-A and VTEP-B now know sufficient information to encapsulate the packets directly between them.

When the hosts are in different subnets, we need a Layer 3 gateway in the network to connect to all VNI segments. As seen in Figure 6, VTEP-C is configured with all VNI numbers in the network and acts as the router or gateway between these VNI segments (see the blue and red dotted arrows routing between VLAN 10 and VLAN 20). When hosts send ARP messages for the gateway in their respective VLANs, VTEP-C will respond. For first-hop router redundancy, multiple VTEPs may be configured with all VNIs, and they may run an FHRP protocol between them.

Figure 6 Routing Between VxLAN networks in Flood and Learn topology
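The data-plane learning in this walkthrough boils down to a table keyed by (VNI, MAC) that binds remote hosts to the VTEP that sent them. The Python sketch below models only that behavior; the names mirror the example above, and everything else is an illustrative abstraction rather than an actual switch implementation.

```python
# Sketch: VTEP remote-MAC learning in VXLAN flood-and-learn, as in the H1/H2 example.
# A VTEP that decapsulates a frame records {inner source MAC -> (outer source IP, VNI)}.

class FloodAndLearnVtep:
    def __init__(self, name: str):
        self.name = name
        self.l2_table = {}   # (vni, mac) -> remote VTEP IP

    def receive_encapsulated(self, outer_src_ip: str, vni: int, inner_src_mac: str):
        # Data-plane learning: bind the inner source MAC to the sending VTEP.
        self.l2_table[(vni, inner_src_mac)] = outer_src_ip

    def lookup(self, vni: int, dst_mac: str):
        # Known MAC -> unicast to that VTEP; unknown -> flood via ingress replication.
        return self.l2_table.get((vni, dst_mac), "flood-to-all-VTEPs")

vtep_b = FloodAndLearnVtep("VTEP-B")
vtep_b.receive_encapsulated(outer_src_ip="VTEP-A-loopback", vni=10, inner_src_mac="H1")
print(vtep_b.lookup(10, "H1"))   # learned: forward to VTEP-A
print(vtep_b.lookup(10, "H9"))   # unknown: flood
```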

BGP EVPN for VXLAN

As we have seen in the VXLAN flood-and-learn case, MAC learning is driven by data frames, and flooding of broadcast or unknown unicast frames depends on ingress replication by VTEPs in the network. With the BGP EVPN control plane, MAC learning happens via BGP, similar to IPv4/IPv6 route learning in a Layer 3 network. This reduces flooding in the underlay network, except in the case of silent hosts that have not yet been learned. This control-plane-based MAC learning enables several additional functions, with BGP as the unified control plane for both Layer 2 and Layer 3 forwarding in the overlay network.

In Figure 7, each VTEP, being a BGP speaker, advertises the MAC and IP addresses of its local hosts to other VTEPs using the BGP EVPN control plane. A BGP route reflector may be used for distribution of this information to the VTEPs. Both VTEP discovery and MAC/IP or MAC/IPv6 host learning happen through the control plane. Since IPv4/IPv6 addresses are also exchanged in the control plane, each VTEP may act as a gateway for the VNI subnets configured on it. A centralized Layer 3 gateway is not required. This feature is also referred to as a distributed gateway. Also, since each VTEP is aware of MAC/IP or MAC/IPv6 host bindings, ARP requests need not be flooded between the VTEPs. The VTEP may respond to ARP requests on behalf of the target host if the host address has already been learned. This is referred to as ARP/ND suppression in the fabric.

Figure 7 VTEPs and L2 Extension with BGP EVPN Control-plane

BGP EVPN control-plane-based learning allows more flexibility to control the information flow between the VTEPs. It enables Layer 2 multi-tenancy using MAC-VRF constructs. In simple terms, each VLAN or bridge domain can be considered a MAC-VRF, and MAC addresses from the remote VTEPs get downloaded into it. BGP-EVPN also enables Layer 3 multitenancy using VRFs, similar to MPLS-VPN. Each VTEP may host several tenants, each with a set of VXLAN segments. Depending on the interest, other VTEPs may import the tenant-specific information. This way, both Layer 2 and Layer 3 extensions can be provisioned on a per-tenant basis. BUM traffic is accommodated using ingress replication at the VTEP. Since VTEP discovery also happens through the control plane, setting up ingress replication does not require additional provisioning or configuration about remote VTEPs.

Let's look at the functional components of the BGP EVPN implementation of a data center IP Fabric.

VTEP

In an IP fabric, the leaf and border leaf act as VTEPs. Note that only one VTEP is allowed per device. Every VTEP has an overlay interface, which identifies the VTEP IP address. The VTEP information is exchanged, and remote VTEPs are discovered over BGP EVPN.

Static Anycast Gateway

Each leaf or VTEP has a set of server-facing VLANs that are mapped to VXLAN segments by a VNI number. These VLAN segments have an associated VE interface (a Layer 3 interface for the VLAN). Each tenant VLAN has anycast gateway IPv4/IPv6 addresses and an associated anycast gateway MAC address. These gateway IP/IPv6 addresses and the gateway MAC address are consistent for the VLAN segments shared on all leafs in the fabric.

Overlay Gateway

Each VTEP or leaf is configured with an overlay gateway. This defines the VTEP IP address, which is used as the source IP when encapsulating packets and as the next-hop IP in the EVPN NLRIs. In this validated design, we are using an IPv4 underlay; hence the overlay interface is associated with the IPv4 address of a loopback interface on the leaf.

BGP EVPN Control Plane

The BGP EVPN control plane is used for VTEP discovery and to learn MAC/IP routes from other VTEPs. The exchange of this information takes place using EVPN NLRIs. The NLRI uses the existing AFI of 25 (L2VPN), and IANA has assigned BGP EVPN a SAFI value of 70. The NLRI also carries a tunnel encapsulation attribute. For an IP fabric using VXLAN encapsulation, the attribute is set to VXLAN.

In the leaf-spine topology (3-stage Clos or 5-stage Clos), all leafs and border leafs should be enabled with the BGP EVPN Address-Family to exchange EVPN routes (NLRI) and participate in VTEP discovery. Spines and super-spines do not participate in the VTEP functionality. However, selected spines in the spine layer are enabled with the BGP EVPN Address-Family for distribution of routes, and all leafs, including border leafs, must be peered with these spines that have the BGP EVPN Address-Family enabled. In the deployment model where ebgp is used, a minimum of two spines in a 3-stage PoD should be enabled with the EVPN Address-Family. Note that all spines participate in the ebgp underlay, but only a few designated spines participate in the EVPN. In the deployment model where ibgp is used, two spines are selected as route reflectors for the EVPN Address-Family, and each VTEP leaf has two ibgp neighbors that are the two spine BGP route reflectors. Each spine BGP route reflector has all VTEP leaf nodes as route-reflector clients and reflects EVPN routes for the VTEP leaf nodes. In the 5-stage Clos topology, a minimum of two super-spines should be enabled with the EVPN Address-Family.

EVPN Route Types

EVPN uses different route types to carry various network-layer reachability information. The following are the well-known route types defined in BGP EVPN:

- Route Type-1: Ethernet Auto-Discovery route. This route is used in multi-homing cases to achieve split-horizon, aliasing, and fast convergence.
- Route Type-2: MAC/IP advertisement route.
  - MAC-only route that carries {MAC address of the host, L2 VNI of the VXLAN segment}. This route carries only the Layer 2 information of a host. Whenever a VTEP learns a MAC from its server-facing subnets, it advertises this route into BGP.
  - MAC/IP route that carries {MAC address of the host, IPv4/IPv6 address of the host, L2 VNI of the VXLAN segment, L3 VNI of the tenant VRF of the host}. This route carries both the Layer 2 and Layer 3 information of the host. This route is advertised by the VTEP when it learns the IPv4/IPv6 host addresses via ARP or ND from the server-facing subnets. This information enables ARP/ND suppression on other VTEPs.
- Route Type-3: Inclusive Multicast Ethernet Tag route.
  This route is required for sending BUM traffic to all VTEPs interested in a given bridge domain or VXLAN segment.
- Route Type-4: Ethernet Segment route. This route is used for multi-homing of server VLAN segments. Note that only MCT- or vLAG-based multi-homing is supported.
- Route Type-5: IPv4/IPv6 prefix advertisement route {IPv4/IPv6 route, L3 VNI, Router-MAC}. This route is advertised for every Layer 3 server-facing subnet behind a VTEP, as well as for external routes.
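To make the contents of the most frequently used route type concrete, the following Python sketch models the key fields carried in a Type-2 MAC/IP advertisement as described above. It models the information conceptually, not the BGP wire encoding, and all addresses, RDs, and VNI values are invented for illustration.

```python
# Sketch: the key fields carried in an EVPN Type-2 (MAC/IP advertisement) route,
# as described above. Field values are illustrative; this models the NLRI content
# conceptually, not the BGP wire encoding.
from dataclasses import dataclass
from typing import Optional

EVPN_AFI, EVPN_SAFI = 25, 70   # L2VPN / EVPN

@dataclass
class Type2MacIpRoute:
    rd: str                    # route distinguisher of the advertising VTEP's MAC-VRF
    mac: str                   # host MAC learned on a server-facing port
    ip: Optional[str]          # host IPv4/IPv6, present once learned via ARP/ND
    l2_vni: int                # VNI of the VXLAN segment (Layer 2 label)
    l3_vni: Optional[int]      # tenant VRF VNI, present on MAC/IP routes for routing
    next_hop_vtep: str         # VTEP IP used as the VXLAN tunnel destination
    router_mac: Optional[str]  # router-MAC extended community for symmetric IRB
    seq: int = 0               # MAC-mobility sequence number; bumps on host moves

# Example: a leaf advertising host H2 (MAC-only first, then MAC/IP after ARP learning).
mac_only = Type2MacIpRoute("10.2.2.2:10", "0000.0000.00b2", None, 10, None, "10.2.2.2", None)
mac_ip   = Type2MacIpRoute("10.2.2.2:10", "0000.0000.00b2", "192.168.10.2", 10, 2000,
                           "10.2.2.2", "0000.0000.0f02")
print(mac_only, mac_ip, sep="\n")
```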

Tunnel Attribute

Extended community type 0x3, sub-type 0x0c, and tunnel encapsulation type 0x8 (VXLAN). This is included with all EVPN routes.

Layer 3 VNI or Tenant VRF

Each tenant VRF is configured with a unique Layer 3 VNI. This is required for inter-subnet routing. This VNI must be the same for a tenant VRF on all VTEPs, including the border leaf. Both Type-2 and Type-5 routes carry this Layer 3 VNI.

Router-MAC Extended Community

Extended community type EVPN (0x06) and sub-type 0x03. The router-MAC is the MAC address of the VTEP advertising a route. This is also required, along with the Layer 3 VNI, for inter-subnet routing, as explained in the Integrated Routing and Bridging section; it is carried in both Type-2 MAC/IP routes and Type-5 prefix routes. In the data plane, this MAC address is used as the inner destination MAC address when a packet is routed.

MAC-Mobility Attribute

Extended community type EVPN (0x06) and sub-type 0x00. It carries a 32-bit sequence number. This enables MAC or station moves between the VTEPs. When a MAC moves, for example, from VTEP-1 to VTEP-2, VTEP-2 advertises a MAC (or MAC/IP) route with a higher sequence number. This update triggers a best-path calculation on other VTEPs, thereby detecting the host move to VTEP-2.

ARP Suppression

Control-plane distribution of MAC/IP addresses enables ARP suppression in the fabric for Layer 2 extensions between racks. A portion of the fabric is shown in Figure 8 to illustrate the ARP suppression functionality in the fabric. When the hosts come up, they typically ARP for the gateway IP that is hosted by the leafs. Let's consider the case where H2 ARPs for the gateway address. Note that both leafs have the same anycast gateway address for the host VXLAN segment.

- Leaf2 learns the MAC/IP (or ARP) binding for H2.
- Leaf2 advertises the MAC/IP route into the BGP EVPN Address-Family. Leaf1 learns this route and populates it in its MAC/IP binding table.
- H1 sends an ARP request for H2. Leaf1 responds on behalf of H2.

Extending the same information flow for H1, when Leaf2 learns H1's MAC/IP route, it will respond to ARP requests on behalf of H1. Compared to the data-plane-based learning in Layer 2 extension technologies such as VPLS or VXLAN flood and learn, where ARP traffic is also sent over the overlay network, VXLAN EVPN significantly reduces ARP/ND flooding in the fabric.
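A minimal Python sketch of this behavior follows: the leaf keeps a host binding table populated both locally and from EVPN Type-2 updates, and answers ARP requests from it instead of flooding. The IP and MAC values are illustrative placeholders.

```python
# Sketch: ARP suppression at a leaf, following the steps described above.
# The leaf answers local ARP requests from its EVPN-learned MAC/IP binding table
# instead of flooding them into the fabric.

class ArpSuppressingLeaf:
    def __init__(self, name: str):
        self.name = name
        self.host_table = {}   # IP -> MAC, populated locally and via EVPN Type-2 routes

    def evpn_type2_update(self, ip: str, mac: str):
        # MAC/IP route received from a remote VTEP over BGP EVPN.
        self.host_table[ip] = mac

    def handle_arp_request(self, target_ip: str):
        # Respond locally if the binding is known; otherwise flood into the segment.
        mac = self.host_table.get(target_ip)
        return f"proxy ARP reply: {target_ip} is-at {mac}" if mac else "flood ARP into segment"

leaf1 = ArpSuppressingLeaf("Leaf1")
leaf1.evpn_type2_update("192.168.10.2", "M2")        # learned from Leaf2 via EVPN
print(leaf1.handle_arp_request("192.168.10.2"))      # answered locally, no flooding
print(leaf1.handle_arp_request("192.168.10.99"))     # unknown host, flooded
```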

Figure 8 ARP Suppression

VLAN Scoping

As discussed earlier, in VXLAN networks, each VLAN is mapped to a VNI number of a VXLAN segment. This provides an interesting option to break the 4K limit of the 802.1Q VLAN space. The VLAN tag (or c-tag) on the wire, or the port VLAN membership, may be locally scoped or locally significant at the leaf level or at the port level within a leaf.

VLAN Scoping at the Leaf Level

In this case, the VLANs are scoped at the leaf or ToR level. Refer to Figure 9. In this example:

- VLAN 10 is mapped to VNI 10 on Leaf1
- VLAN 20 is mapped to VNI 10 on Leaf2

By mapping to the same VNI, the two VLAN segments (VLAN 10 and VLAN 20) are on the same bridge domain. With this mapping, hosts on these VLANs have Layer 2 extension between them, and they belong to one VXLAN segment identified by VNI 10.

Figure 9 VLAN Scoping at the Leaf/ToR Level

VLAN Scoping at the Port Level within a Leaf

VLAN scoping at the port level can be accomplished using the bridge-domain feature of SLX devices. (VDX platforms support the same functionality with the virtual fabric GVLAN feature.) This basically abstracts a VLAN as a bridge domain and decouples it from the VLAN tag (or c-tag) on the wire. Refer to Figure 10. In this example, Port1 with VLAN tag 10 and Port2 with VLAN tag 20 on Leaf1, and Port1 with VLAN tag 501 on Leaf2, are mapped to bridge-domain BD 100, and BD 100 is mapped to VNI 4196. With this mapping, the hosts H1 (VLAN tag 10), H2 (VLAN tag 20), and H3 (VLAN tag 501) are bound to one VXLAN segment identified by VNI 4196.

Figure 10 VLAN Scoping at the Port Level within a ToR
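The difference between the two scoping models is simply where the VNI mapping key lives: per switch for leaf-level scoping, per (port, c-tag) for port-level scoping. The following Python sketch captures only that mapping logic, reusing the values from Figures 9 and 10; the leaf and port names come from the figures, and the rest is illustrative.

```python
# Sketch: the two VLAN-scoping models described above, using the values from
# Figures 9 and 10. Only the mapping logic is modeled; names are examples.

# Leaf-level scoping: the whole switch maps a local VLAN to a VNI.
leaf_level = {
    "Leaf1": {10: 10},    # VLAN 10 -> VNI 10
    "Leaf2": {20: 10},    # VLAN 20 -> VNI 10 (same bridge domain as Leaf1 VLAN 10)
}

# Port-level scoping: (port, c-tag) maps to a bridge domain, which maps to a VNI.
port_level = {
    "Leaf1": {("Port1", 10): "BD100", ("Port2", 20): "BD100"},
    "Leaf2": {("Port1", 501): "BD100"},
}
bd_to_vni = {"BD100": 4196}

def vni_for(leaf: str, port: str, ctag: int) -> int:
    bd = port_level[leaf][(port, ctag)]
    return bd_to_vni[bd]

print(leaf_level["Leaf1"][10] == leaf_level["Leaf2"][20])             # True: same segment
print(vni_for("Leaf1", "Port2", 20), vni_for("Leaf2", "Port1", 501))  # 4196 4196
```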

Conversational Learning

Conversational learning helps conserve the hardware forwarding table by programming only those ARP/ND or MAC entries for which there are active conversations or traffic flows. With this feature, the control plane may hold more host entries than the hardware table can support. When there is sufficient space in hardware, all host entries are programmed. When there is no space, conversational learning kicks in and starts aging out the inactive entries. Note that the host subnets are inserted into the hardware (LPM table) regardless of activity. The host entries are inserted into the hardware (/32 IPv4 or /128 IPv6 host route table) based on traffic.

Integrated Routing and Bridging

With the anycast gateway function, each VTEP or leaf acts as an Integrated Routing and Bridging (IRB) device, providing Layer 2 extension as well as Layer 3 routing between the VXLAN segments in a tenant. Note that a tenant may span multiple leafs. There are two variations of IRB implementation in the IP fabric: asymmetric IRB and symmetric IRB.

Asymmetric IRB

Figure 11 Asymmetric IRB

In Figure 11, a tenant, SALES, is provisioned in the fabric with two VNI segments, VNI 10 and VNI 20. Leaf1 has servers connected to it on VNI 10 only. However, it is provisioned with both VLAN 10 and VLAN 20, mapped to VNI 10 and VNI 20 respectively. A similar configuration is done on Leaf2. Both leafs act as first-hop gateways for these VLANs with an anycast gateway address. If H1 in VNI 10 needs to communicate with H3 in VNI 20, Leaf1 first routes the packet between the segments and then bridges the packet on VNI 20, and the packet is sent over the overlay. Leaf2 decapsulates the VXLAN headers and sends the packet to H3. Essentially, the ingress VTEP both routes and bridges the packet; this method is referred to as asymmetric IRB. This also means that every VTEP must be configured with all VLANs, irrespective of the existence of local workloads on those VLANs.

Symmetric IRB

Figure 12 depicts symmetric IRB. Here, every tenant is assigned a Layer 3 VNI. This is analogous to a Layer 3 routing interface between two switches. This VNI must be the same for a given tenant on all leafs where it is provisioned.

The MAC/IP host routes are advertised by the VTEP with the L2 VNI as well as an L3 VNI and the router-MAC address of the VTEP. When a packet is routed over the L3 VNI, the destination MAC of the inner Ethernet payload is set to the router-MAC of the remote VTEP. In Figure 12, routing from H1 to H3 always occurs over this L3 VNI. That is, both leaf devices route the packet once: the ingress leaf routes from the server VLAN/VNI to the L3 VNI, and the egress leaf routes from the L3 VNI to the server VLAN/VNI. A significant advantage of this method is that all VNIs of a given tenant need not be created on all leafs. They are created only when there is server connectivity to those VNIs. In Figure 12, Leaf1 is not configured with VNI 20. Also note that on Leaf2, even though VNI 10 is present, a packet from H3 to H1 will be routed directly onto the L3 VNI of the tenant. This adds the additional requirement that the host routes on all VXLAN segments in a given tenant need to be downloaded to the leaf's forwarding table.

Figure 12 Symmetric IRB

Extreme IRB Implementation

Both symmetric and asymmetric IRB methods are implemented on Extreme switches. If the target VNI segment is configured on a VTEP, asymmetric IRB is performed. Otherwise, the packet is routed over the L3 VNI, and symmetric routing occurs. Every tenant VRF is assigned an L3 VNI. In the Extreme BGP EVPN implementation, we get the best of both schemes (a short sketch of this decision follows at the end of this section):

- There is no need to create all server VNIs on all leafs for a tenant.
- If a target VNI segment is not local and is extended behind one or more remote VTEPs, the host routes on that target segment are downloaded into hardware based on traffic activity. Traffic to these hosts is routed over the L3 VNI.

Multitenancy

Layer 2 multitenancy is achieved by a MAC-VRF construct used for extending a VLAN between multiple VTEPs or ToRs. In BGP EVPN, multiple tenants can co-exist at the Layer 3 level and share a common IP transport network while having their own separate routing domains in the VXLAN overlay network. Every tenant in the EVPN network is identified by a VRF (VPN routing and forwarding instance), and these tenant VRFs can span multiple leafs in a data center (similar to Layer 3 MPLS VPNs with tenant VRFs on multiple PE devices). Each VRF can have a set of server-facing VLANs, routing interfaces for those VLANs with anycast gateways, and a Layer 3 VNI used for symmetric routing purposes. This VNI should be the same if the same tenant VRF is provisioned on other leafs, including a border leaf. We recommend separating the tenant routing domain from the underlay routing domain (or default VRF), which is used for setting up the overlays or tunnels between the VTEPs.
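As a recap of the Extreme IRB behavior described above, the short Python sketch below expresses the per-packet decision: use asymmetric IRB when the destination VNI segment exists locally, and fall back to symmetric routing over the tenant's L3 VNI otherwise. The VNI values follow the Figure 11/12 examples; the function itself is a conceptual illustration, not switch code.

```python
# Sketch: the IRB forwarding decision described above. If the destination VNI
# segment exists locally, route-and-bridge on that VNI (asymmetric IRB);
# otherwise route the packet over the tenant's L3 VNI (symmetric IRB).

def irb_forward(local_vnis: set, tenant_l3_vni: int, dst_vni: int) -> str:
    if dst_vni in local_vnis:
        return f"asymmetric IRB: route locally, then bridge onto VNI {dst_vni}"
    return f"symmetric IRB: route onto L3 VNI {tenant_l3_vni} toward the egress VTEP"

# Leaf1 from Figure 12: only VNI 10 is configured locally; tenant SALES uses L3 VNI 2000.
print(irb_forward(local_vnis={10}, tenant_l3_vni=2000, dst_vni=20))
# A leaf with both VNI 10 and VNI 20 handles traffic to VNI 20 hosts asymmetrically.
print(irb_forward(local_vnis={10, 20}, tenant_l3_vni=2000, dst_vni=20))
```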


More information

Implementing VXLAN in DataCenter

Implementing VXLAN in DataCenter Implementing VXLAN in DataCenter LTRDCT-1223 Lilian Quan Technical Marketing Engineering, INSBU Erum Frahim Technical Leader, ecats John Weston Technical Leader, ecats Why Overlays? Robust Underlay/Fabric

More information

Building Data Center Networks with VXLAN EVPN Overlays Part I

Building Data Center Networks with VXLAN EVPN Overlays Part I BRKDCT-2949 Building Data Center Networks with VXLAN EVPN Overlays Part I Lukas Krattiger, Principal Engineer Cisco Spark How Questions? Use Cisco Spark to communicate with the speaker after the session

More information

VXLAN EVPN Multihoming with Cisco Nexus 9000 Series Switches

VXLAN EVPN Multihoming with Cisco Nexus 9000 Series Switches White Paper VXLAN EVPN Multihoming with Cisco Nexus 9000 Series Switches 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 27 Contents Introduction...

More information

Border Provisioning Use Case in VXLAN BGP EVPN Fabrics - Multi-Site

Border Provisioning Use Case in VXLAN BGP EVPN Fabrics - Multi-Site Border Provisioning Use Case in VXLAN BGP EVPN Fabrics - Multi-Site This chapter explains LAN Fabric border provisioning using EVPN Multi-Site feature. Overview, page 1 Prerequisites, page 1 Limitations,

More information

MPLS VPN--Inter-AS Option AB

MPLS VPN--Inter-AS Option AB The feature combines the best functionality of an Inter-AS Option (10) A and Inter-AS Option (10) B network to allow a Multiprotocol Label Switching (MPLS) Virtual Private Network (VPN) service provider

More information

Higher scalability to address more Layer 2 segments: up to 16 million VXLAN segments.

Higher scalability to address more Layer 2 segments: up to 16 million VXLAN segments. This chapter tells how to configure Virtual extensible LAN (VXLAN) interfaces. VXLANs act as Layer 2 virtual networks over Layer 3 physical networks to stretch Layer 2 networks. About VXLAN Encapsulation

More information

Cisco ACI Multi-Pod/Multi-Site Deployment Options Max Ardica Principal Engineer BRKACI-2003

Cisco ACI Multi-Pod/Multi-Site Deployment Options Max Ardica Principal Engineer BRKACI-2003 Cisco ACI Multi-Pod/Multi-Site Deployment Options Max Ardica Principal Engineer BRKACI-2003 Agenda ACI Introduction and Multi-Fabric Use Cases ACI Multi-Fabric Design Options ACI Stretched Fabric Overview

More information

VXLAN Multipod Design for Intra-Data Center and Geographically Dispersed Data Center Sites

VXLAN Multipod Design for Intra-Data Center and Geographically Dispersed Data Center Sites White Paper VXLAN Multipod Design for Intra-Data Center and Geographically Dispersed Data Center Sites May 17, 2016 Authors Max Ardica, Principal Engineer INSBU Patrice Bellagamba, Distinguish System Engineer

More information

Spirent TestCenter EVPN and PBB-EVPN AppNote

Spirent TestCenter EVPN and PBB-EVPN AppNote Spirent TestCenter EVPN and PBB-EVPN AppNote Executive summary 2 Overview of EVPN 2 Relevant standards 3 Test case: Single Home Test Scenario for EVPN 4 Overview 4 Objective 4 Topology 4 Step-by-step instructions

More information

Designing Mul+- Tenant Data Centers using EVPN- IRB. Neeraj Malhotra, Principal Engineer, Cisco Ahmed Abeer, Technical Marke<ng Engineer, Cisco

Designing Mul+- Tenant Data Centers using EVPN- IRB. Neeraj Malhotra, Principal Engineer, Cisco Ahmed Abeer, Technical Marke<ng Engineer, Cisco Designing Mul+- Tenant Data Centers using EVPN- IRB Neeraj Malhotra, Principal Engineer, Cisco Ahmed Abeer, Technical Marke

More information

MPLS VPN Inter-AS Option AB

MPLS VPN Inter-AS Option AB First Published: December 17, 2007 Last Updated: September 21, 2011 The feature combines the best functionality of an Inter-AS Option (10) A and Inter-AS Option (10) B network to allow a Multiprotocol

More information

MP-BGP VxLAN, ACI & Demo. Brian Kvisgaard System Engineer, CCIE SP #41039 November 2017

MP-BGP VxLAN, ACI & Demo. Brian Kvisgaard System Engineer, CCIE SP #41039 November 2017 MP-BGP VxLAN, ACI & Demo Brian Kvisgaard System Engineer, CCIE SP #41039 November 2017 Datacenter solutions Programmable Fabric Classic Ethernet VxLAN-BGP EVPN standard-based Cisco DCNM Automation Modern

More information

Multi-site Datacenter Network Infrastructures

Multi-site Datacenter Network Infrastructures Multi-site Datacenter Network Infrastructures Petr Grygárek rek 2009 Petr Grygarek, Advanced Computer Networks Technologies 1 Why Multisite Datacenters? Resiliency against large-scale site failures (geodiversity)

More information

EXTREME VALIDATED DESIGN. Extreme VCS Fabric with IP Storage

EXTREME VALIDATED DESIGN. Extreme VCS Fabric with IP Storage EXTREME VALIDATED DESIGN 53-1004936-03 April 2018 2018, Extreme Networks, Inc. All Rights Reserved. Extreme Networks and the Extreme Networks logo are trademarks or registered trademarks of Extreme Networks,

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring BGP Autodiscovery for LDP VPLS Release NCE0035 Modified: 2017-01-24 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net

More information

Configuring Virtual Private LAN Services

Configuring Virtual Private LAN Services Virtual Private LAN Services (VPLS) enables enterprises to link together their Ethernet-based LANs from multiple sites via the infrastructure provided by their service provider. This module explains VPLS

More information

VXLAN Deployment Use Cases and Best Practices

VXLAN Deployment Use Cases and Best Practices VXLAN Deployment Use Cases and Best Practices Azeem Suleman Solutions Architect Cisco Advanced Services Contributions Thanks to the team: Abhishek Saxena Mehak Mahajan Lilian Quan Bradley Wong Mike Herbert

More information

LARGE SCALE IP ROUTING LECTURE BY SEBASTIAN GRAF

LARGE SCALE IP ROUTING LECTURE BY SEBASTIAN GRAF LARGE SCALE IP ROUTING LECTURE BY SEBASTIAN GRAF MODULE 07 - MPLS BASED LAYER 2 SERVICES 1 by Xantaro MPLS BASED LAYER 2 VPNS USING MPLS FOR POINT-TO-POINT LAYER 2 SERVICES 2 by Xantaro Why are Layer-2

More information

H3C S7500E-X Switch Series

H3C S7500E-X Switch Series H3C S7500E-X Switch Series EVPN Configuration Guide Hangzhou H3C Technologies Co., Ltd. http://www.h3c.com Software version: S7500EX-CMW710-R7523P01 Document version: 6W100-20160830 Copyright 2016, Hangzhou

More information

Enterprise. Nexus 1000V. L2/L3 Fabric WAN/PE. Customer VRF. MPLS Backbone. Service Provider Data Center-1 Customer VRF WAN/PE OTV OTV.

Enterprise. Nexus 1000V. L2/L3 Fabric WAN/PE. Customer VRF. MPLS Backbone. Service Provider Data Center-1 Customer VRF WAN/PE OTV OTV. 2 CHAPTER Cisco's Disaster Recovery as a Service (DRaaS) architecture supports virtual data centers that consist of a collection of geographically-dispersed data center locations. Since data centers are

More information

Configuring MPLS and EoMPLS

Configuring MPLS and EoMPLS 37 CHAPTER This chapter describes how to configure multiprotocol label switching (MPLS) and Ethernet over MPLS (EoMPLS) on the Catalyst 3750 Metro switch. MPLS is a packet-switching technology that integrates

More information

VXLAN EVPN Multi-Site Design and Deployment

VXLAN EVPN Multi-Site Design and Deployment White Paper VXLAN EVPN Multi-Site Design and Deployment 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 55 Contents What you will learn... 4

More information

White Paper. Huawei Campus Switches VXLAN Technology. White Paper

White Paper. Huawei Campus Switches VXLAN Technology. White Paper White Paper Huawei Campus Switches VXLAN Technology White Paper 1 Terms Abbreviation VXLAN NVo3 BUM VNI VM VTEP SDN Full English Name Virtual Extensible Local Area Network Network Virtualization over L3

More information

Provisioning Overlay Networks

Provisioning Overlay Networks This chapter has the following sections: Using Cisco Virtual Topology System, page 1 Creating Overlays, page 2 Creating Network using VMware, page 3 Creating Subnetwork using VMware, page 4 Creating Routers

More information

Creating and Managing Admin Domains

Creating and Managing Admin Domains This chapter has the following sections: Admin Domain Overview, page 1 Viewing Admin Domain, page 2 Creating an Admin Domain, page 2 Creating DCI Interconnect Profiles, page 6 Admin Domain Overview The

More information

Implementing MPLS VPNs over IP Tunnels

Implementing MPLS VPNs over IP Tunnels The MPLS VPNs over IP Tunnels feature lets you deploy Layer 3 Virtual Private Network (L3VPN) services, over an IP core network, using L2TPv3 multipoint tunneling instead of MPLS. This allows L2TPv3 tunnels

More information

Configuring MPLS L3VPN

Configuring MPLS L3VPN Contents Configuring MPLS L3VPN 1 MPLS L3VPN overview 1 Introduction to MPLS L3VPN 1 MPLS L3VPN concepts 2 MPLS L3VPN packet forwarding 5 MPLS L3VPN networking schemes 5 MPLS L3VPN routing information

More information

Open Compute Network Operating System Version 1.1

Open Compute Network Operating System Version 1.1 Solution Guide Open Compute Network Operating System Version 1.1 Data Center Solution - EVPN with VXLAN 2016 IP Infusion Inc. All Rights Reserved. This documentation is subject to change without notice.

More information

Configuring VPLS. VPLS overview. Operation of VPLS. Basic VPLS concepts

Configuring VPLS. VPLS overview. Operation of VPLS. Basic VPLS concepts Contents Configuring VPLS 1 VPLS overview 1 Operation of VPLS 1 VPLS packet encapsulation 4 H-VPLS implementation 5 Hub-spoke VPLS implementation 7 Multi-hop PW 8 VPLS configuration task list 9 Enabling

More information

DCI. DataCenter Interconnection / Infrastructure. Arnaud Fenioux

DCI. DataCenter Interconnection / Infrastructure. Arnaud Fenioux DCI DataCenter Interconnection / Infrastructure Arnaud Fenioux What is DCI? DataCenter Interconnection Or DataCenter Infrastructure? 2 From interconnection to infrastructure Interconnection Dark fiber

More information

MPLS design. Massimiliano Sbaraglia

MPLS design. Massimiliano Sbaraglia MPLS design Massimiliano Sbaraglia - MPLS layer 2 VPN diagram flowchart - MPLS layer 2 VPN pseudowire VPWS diagram - MPLS layer 2 VPN VPLS diagram - MPLS layer 2 EVPN diagram - MPLS layer 3 VPN diagram

More information

Hochverfügbarkeit in Campusnetzen

Hochverfügbarkeit in Campusnetzen Hochverfügbarkeit in Campusnetzen Für die deutsche Airheads Community 04. Juli 2017, Tino H. Seifert, System Engineer Aruba Differences between Campus Edge and Campus Core Campus Edge In many cases no

More information

Configuring Virtual Private LAN Service (VPLS) and VPLS BGP-Based Autodiscovery

Configuring Virtual Private LAN Service (VPLS) and VPLS BGP-Based Autodiscovery Configuring Virtual Private LAN Service (VPLS) and VPLS BGP-Based Autodiscovery Finding Feature Information, page 1 Configuring VPLS, page 1 Configuring VPLS BGP-based Autodiscovery, page 17 Finding Feature

More information

Intended status: Standards Track. Cisco Systems October 22, 2018

Intended status: Standards Track. Cisco Systems October 22, 2018 BESS WorkGroup Internet-Draft Intended status: Standards Track Expires: April 25, 2019 Ali. Sajassi Mankamana. Mishra Samir. Thoria Patrice. Brissette Cisco Systems October 22, 2018 AC-Aware Bundling Service

More information

Deploy Application Load Balancers with Source Network Address Translation in Cisco DFA

Deploy Application Load Balancers with Source Network Address Translation in Cisco DFA White Paper Deploy Application Load Balancers with Source Network Address Translation in Cisco DFA Last Updated: 1/27/2016 2016 Cisco and/or its affiliates. All rights reserved. This document is Cisco

More information

VXLAN Technical Brief A standard based Data Center Interconnection solution Dell EMC Networking Data Center Technical Marketing February 2017

VXLAN Technical Brief A standard based Data Center Interconnection solution Dell EMC Networking Data Center Technical Marketing February 2017 VXLAN Technical Brief A standard based Data Center Interconnection solution Dell EMC Networking Data Center Technical Marketing February 2017 A Dell EMC VXLAN Technical White Paper 1 THIS WHITE PAPER IS

More information

Deploy VPLS. APNIC Technical Workshop October 23 to 25, Selangor, Malaysia Hosted by:

Deploy VPLS. APNIC Technical Workshop October 23 to 25, Selangor, Malaysia Hosted by: Deploy VPLS APNIC Technical Workshop October 23 to 25, 2017. Selangor, Malaysia Hosted by: Issue Date: [201609] Revision: [01] Acknowledgement Cisco Systems 2 VPLS Overview 3 Virtual Private LAN Service

More information

VXLAN EVPN Fabric and automation using Ansible

VXLAN EVPN Fabric and automation using Ansible VXLAN EVPN Fabric and automation using Ansible Faisal Chaudhry, Principal Architect Umair Arshad, Sr Network Consulting Engineer Lei Tian, Solution Architecture Cisco Spark How Questions? Use Cisco Spark

More information

EVPN Multicast. Disha Chopra

EVPN Multicast. Disha Chopra EVPN Multicast Disha Chopra Agenda EVPN Multicast Optimizations Introduction to EVPN Multicast (BUM) IGMP Join/Leave Sync Routes Selective Multicast Ethernet Tag Route Use Case 2 EVPN BUM Traffic Basics

More information

Network Configuration Example

Network Configuration Example Network Configuration Example MetaFabric Architecture 2.0: Configuring Virtual Chassis Fabric and VMware NSX Modified: 2017-04-14 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089

More information

Segment Routing on Cisco Nexus 9500, 9300, 9200, 3200, and 3100 Platform Switches

Segment Routing on Cisco Nexus 9500, 9300, 9200, 3200, and 3100 Platform Switches White Paper Segment Routing on Cisco Nexus 9500, 9300, 9200, 3200, and 3100 Platform Switches Authors Ambrish Mehta, Cisco Systems Inc. Haider Salman, Cisco Systems Inc. 2017 Cisco and/or its affiliates.

More information

HP FlexFabric 7900 Switch Series

HP FlexFabric 7900 Switch Series HP FlexFabric 7900 Switch Series MCE Configuration Guide Part number: 5998-6188 Software version: Release 2117 and Release 2118 Document version: 6W100-20140805 Legal and notice information Copyright 2014

More information

Introduction to Segment Routing

Introduction to Segment Routing Segment Routing (SR) is a flexible, scalable way of doing source routing. Overview of Segment Routing, page 1 How Segment Routing Works, page 2 Examples for Segment Routing, page 3 Benefits of Segment

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring a Two-Tiered Virtualized Data Center for Large Enterprise Networks Release NCE 33 Modified: 2016-08-01 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California

More information

EVPN for VXLAN Tunnels (Layer 3)

EVPN for VXLAN Tunnels (Layer 3) EVPN for VXLAN Tunnels (Layer 3) In This Chapter This section provides information about EVPN for VXLAN tunnels (Layer 3). Topics in this section include: Applicability on page 312 Overview on page 313

More information

Feature Information for BGP Control Plane, page 1 BGP Control Plane Setup, page 1. Feature Information for BGP Control Plane

Feature Information for BGP Control Plane, page 1 BGP Control Plane Setup, page 1. Feature Information for BGP Control Plane Feature Information for, page 1 Setup, page 1 Feature Information for Table 1: Feature Information for Feature Releases Feature Information PoAP diagnostics 7.2(0)N1(1) Included a new section on POAP Diagnostics.

More information

InterAS Option B. Information About InterAS. InterAS and ASBR

InterAS Option B. Information About InterAS. InterAS and ASBR This chapter explains the different InterAS option B configuration options. The available options are InterAS option B, InterAS option B (with RFC 3107), and InterAS option B lite. The InterAS option B

More information

Attilla de Groot Attilla de Groot Sr. Systems Engineer, HCIE #3494 Cumulus Networks

Attilla de Groot Attilla de Groot Sr. Systems Engineer, HCIE #3494 Cumulus Networks EVPN to the host Host multitenancy Attilla de Groot Attilla de Groot Sr. Systems Engineer, HCIE #3494 Cumulus Networks 1 Agenda EVPN to the Host Multi tenancy use cases Deployment issues Host integration

More information

HP A5820X & A5800 Switch Series MPLS. Configuration Guide. Abstract

HP A5820X & A5800 Switch Series MPLS. Configuration Guide. Abstract HP A5820X & A5800 Switch Series MPLS Configuration Guide Abstract This document describes the software features for the HP 5820X & 5800 Series products and guides you through the software configuration

More information

MPLS VPN. 5 ian 2010

MPLS VPN. 5 ian 2010 MPLS VPN 5 ian 2010 What this lecture is about: IP CEF MPLS architecture What is MPLS? MPLS labels Packet forwarding in MPLS MPLS VPNs 3 IP CEF & MPLS Overview How does a router forward packets? Process

More information

VXLAN Cisco and/or its affiliates. All rights reserved. Cisco Public

VXLAN Cisco and/or its affiliates. All rights reserved. Cisco Public VXLAN Presentation ID 1 Virtual Overlay Encapsulations and Forwarding Ethernet Frames are encapsulated into an IP frame format New control logic for learning and mapping VM identity (MAC address) to Host

More information

HP 5920 & 5900 Switch Series

HP 5920 & 5900 Switch Series HP 5920 & 5900 Switch Series MCE Configuration Guide Part number: 5998-2896 Software version: Release2207 Document version: 6W100-20121130 Legal and notice information Copyright 2012 Hewlett-Packard Development

More information

Configuring BGP: RT Constrained Route Distribution

Configuring BGP: RT Constrained Route Distribution Configuring BGP: RT Constrained Route Distribution BGP: RT Constrained Route Distribution is a feature that can be used by service providers in Multiprotocol Label Switching (MPLS) Layer 3 VPNs to reduce

More information

Protecting an EBGP peer when memory usage reaches level 2 threshold 66 Configuring a large-scale BGP network 67 Configuring BGP community 67

Protecting an EBGP peer when memory usage reaches level 2 threshold 66 Configuring a large-scale BGP network 67 Configuring BGP community 67 Contents Configuring BGP 1 Overview 1 BGP speaker and BGP peer 1 BGP message types 1 BGP path attributes 2 BGP route selection 6 BGP route advertisement rules 6 BGP load balancing 6 Settlements for problems

More information

Building Blocks in EVPN VXLAN for Multi-Service Fabrics. Aldrin Isaac Co-author RFC7432 Juniper Networks

Building Blocks in EVPN VXLAN for Multi-Service Fabrics. Aldrin Isaac Co-author RFC7432 Juniper Networks Building Blocks in EVPN VXLAN for Multi-Service Fabrics Aldrin Isaac Co-author RFC7432 Juniper Networks Network Subsystems Network Virtualization Bandwidth Broker TE LAN Fabric WAN Fabric LAN WAN EVPN

More information

HP FlexFabric 5930 Switch Series

HP FlexFabric 5930 Switch Series HP FlexFabric 5930 Switch Series MCE Configuration Guide Part number: 5998-4625 Software version: Release 2406 & Release 2407P01 Document version: 6W101-20140404 Legal and notice information Copyright

More information

Securizarea Calculatoarelor și a Rețelelor 32. Tehnologia MPLS VPN

Securizarea Calculatoarelor și a Rețelelor 32. Tehnologia MPLS VPN Platformă de e-learning și curriculă e-content pentru învățământul superior tehnic Securizarea Calculatoarelor și a Rețelelor 32. Tehnologia MPLS VPN MPLS VPN 5-ian-2010 What this lecture is about: IP

More information

VXLAN Design Using Dell EMC S and Z series Switches

VXLAN Design Using Dell EMC S and Z series Switches VXLAN Design Using Dell EMC S and Z series Switches Standard based Data Center Interconnect using Static VXLAN. Dell Networking Data Center Technical Marketing March 2017 A Dell EMC Data Center Interconnect

More information

Configure L2VPN Autodiscovery and Signaling

Configure L2VPN Autodiscovery and Signaling This chapter describes the L2VPN Autodiscovery and Signaling feature which enables the discovery of remote Provider Edge (PE) routers and the associated signaling in order to provision the pseudowires.

More information

Configuring MPLS L3VPN

Configuring MPLS L3VPN Contents Configuring MPLS L3VPN 1 MPLS L3VPN overview 1 MPLS L3VPN concepts 2 MPLS L3VPN packet forwarding 4 MPLS L3VPN networking schemes 5 MPLS L3VPN routing information advertisement 8 Inter-AS VPN

More information

MPLS VPN over mgre. Finding Feature Information. Last Updated: November 1, 2012

MPLS VPN over mgre. Finding Feature Information. Last Updated: November 1, 2012 MPLS VPN over mgre Last Updated: November 1, 2012 The MPLS VPN over mgre feature overcomes the requirement that a carrier support multiprotocol label switching (MPLS) by allowing you to provide MPLS connectivity

More information

Cisco Dynamic Fabric Automation Architecture. Miroslav Brzek, Systems Engineer

Cisco Dynamic Fabric Automation Architecture. Miroslav Brzek, Systems Engineer Cisco Dynamic Fabric Automation Architecture Miroslav Brzek, Systems Engineer mibrzek@cisco.com Agenda DFA Overview Optimized Networking Fabric Properties Control Plane Forwarding Plane Virtual Fabrics

More information

SharkFest 18 US. BGP is not only a TCP session https://goo.gl/mh3ex4

SharkFest 18 US. BGP is not only a TCP session https://goo.gl/mh3ex4 SharkFest 18 US BGP is not only a TCP session https://goo.gl/mh3ex4 Learning about the protocol that holds networks together Werner Fischer Principal Consultant avodaq AG History and RFCs Direction for

More information

Cisco Nexus 7000 Series NX-OS VXLAN Configuration Guide

Cisco Nexus 7000 Series NX-OS VXLAN Configuration Guide First Published: 2015-05-07 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 2016

More information

BGP IN THE DATA CENTER

BGP IN THE DATA CENTER BGP IN THE DATA CENTER A PACKET DESIGN E-BOOK Contents Page 3 : BGP the Savior Page 4 : Traditional Data Center Architecture Traffic Flows Scalability Spanning Tree Protocol (STP) Page 6 : CLOS Architecture

More information

HPE FlexFabric 7900 Switch Series

HPE FlexFabric 7900 Switch Series HPE FlexFabric 7900 Switch Series VXLAN Configuration Guide Part number: 5998-8254R Software version: Release 213x Document version: 6W101-20151113 Copyright 2015 Hewlett Packard Enterprise Development

More information

LTRDCT-2781 Building and operating VXLAN BGP EVPN Fabrics with Data Center Network Manager

LTRDCT-2781 Building and operating VXLAN BGP EVPN Fabrics with Data Center Network Manager LTRDCT-2781 Building and operating VXLAN BGP EVPN Fabrics with Data Center Network Manager Henrique Molina, Technical Marketing Engineer Matthias Wessendorf, Technical Marketing Engineer Cisco Spark How

More information

Internet Engineering Task Force (IETF) Request for Comments: 7024 Category: Standards Track

Internet Engineering Task Force (IETF) Request for Comments: 7024 Category: Standards Track Internet Engineering Task Force (IETF) Request for Comments: 7024 Category: Standards Track ISSN: 2070-1721 H. Jeng J. Uttaro AT&T L. Jalil Verizon B. Decraene Orange Y. Rekhter Juniper Networks R. Aggarwal

More information