
Solution Guide Infrastructure as a Service: EVPN and VXLAN Modified: 2016-10-16

Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net All rights reserved. Juniper Networks, Junos, Steel-Belted Radius, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The Juniper Networks Logo, the Junos logo, and JunosE are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice. Solution Guide Infrastructure as a Service: EVPN and VXLAN All rights reserved. The information in this document is current as of the date on the title page. YEAR 2000 NOTICE Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036. END USER LICENSE AGREEMENT The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks software. Use of such software is subject to the terms and conditions of the End User License Agreement ( EULA ) posted at http://www.juniper.net/support/eula.html. By downloading, installing or using such software, you agree to the terms and conditions of that EULA. ii

Table of Contents

Chapter 1: Infrastructure as a Service: EVPN and VXLAN . . . 5
    About This Solution Guide . . . 5
    Understanding the IaaS: EVPN and VXLAN Solution . . . 5
        Market Overview . . . 5
        Solution Overview . . . 6
        Solution Elements . . . 7
        Design Considerations . . . 11
        Solution Implementation Summary . . . 17
    Example: Configuring the IaaS: EVPN and VXLAN Solution . . . 18


CHAPTER 1

Infrastructure as a Service: EVPN and VXLAN

About This Solution Guide on page 5
Understanding the IaaS: EVPN and VXLAN Solution on page 5
Example: Configuring the IaaS: EVPN and VXLAN Solution on page 18

About This Solution Guide

This Infrastructure as a Service (IaaS) solution focuses on the use of Ethernet VPN (EVPN) and Virtual Extensible LAN (VXLAN) over a bare-metal server (BMS)-based network. Such a network offers data center operators a way to create an external BGP (EBGP)-based IP fabric underlay, which provides a solid foundation for the EVPN and VXLAN overlay. By implementing this solution, telcos and data center operators can scale their cloud-enabled business, migrate legacy architectures to more flexible and modern architectures, compete with emerging Web services providers, and manage costs, all at the same time.

This guide provides an overview of the IaaS: EVPN and VXLAN solution, the solution requirements, design considerations, and how the solution was implemented by the Juniper Networks solutions team. It also provides an example of how to configure the network and verify that the solution is working as expected.

Understanding the IaaS: EVPN and VXLAN Solution

Market Overview on page 5
Solution Overview on page 6
Solution Elements on page 7
Design Considerations on page 11
Solution Implementation Summary on page 17

Market Overview

In addition to owning their transport infrastructure, service providers are also in the business of offering managed IT and managed data center services to a large variety of customers. Because service providers own the infrastructure, they have the ability to offer higher service-level agreements (SLAs), quality of service (QoS), and security, as these services are often provided over dedicated circuits. However, the cost structure of these services can be relatively high, especially in comparison to the nimble and fast-executing Web services companies, for whom the cost structure is very lean and low.

As service providers increasingly feel this competitive pressure, there is a need for them to innovate their business models and adopt cloud computing architectures in order to lower costs, increase efficiency, and maintain their competitiveness in Infrastructure as a Service (IaaS) offerings. While they continue to use SLAs, flexibility of deployment, and choice of topologies as a way to differentiate themselves from Web services providers, service providers also need to invest significantly in building highly automated networks. These improvements will help to cut operating expenses, and enable them to find new sources of revenue by offering new services, in order to compete more effectively.

Service providers vary widely in how they build traditional networks, and there is no single standard or topology that is followed. However, as they move forward and extend their networks to offer cloud services, many providers are converging around two general topologies based on some high-level requirements:

- A large percentage of standalone bare-metal servers (BMSs), with some part of the network dedicated to offering virtualized compute services. This type of design keeps the intelligence in the traditional physical network.
- Largely virtualized services, with some small amount of BMS-based services. This type of design moves the intelligence out of the physical network and into the virtual network, and generally requires a software-defined network (SDN) controller.

This solution guide focuses on the first use case, with a particular focus on the BMS environment. This guide will help you understand the requirements for an IaaS network, the architecture required to build the network, how to configure each layer, and how to verify its operational state.

Solution Overview

Traditionally, data centers have used Layer 2 technologies such as Spanning Tree Protocol (STP) and multichassis link aggregation groups (MC-LAG) to connect compute and storage resources. As the design of these data centers evolves to scale out multitenant networks, a new data center architecture is needed that decouples the underlay (physical) network from a tenant overlay network. Using a Layer 3 IP-based underlay coupled with a VXLAN-Ethernet VPN (EVPN) overlay, data center and cloud operators can deploy much larger networks than are otherwise possible with traditional Layer 2 Ethernet-based architectures. With overlays, endpoints (servers or virtual machines [VMs]) can be placed anywhere in the network and remain connected to the same logical Layer 2 network, enabling the virtual topology to be decoupled from the physical topology.

For the reasons of scale and operational efficiency outlined above, virtual networking is being widely deployed in data centers. Also, the role of bare-metal compute has become more relevant for high-performance, scale-out, or container-driven workloads. This solution guide describes how standards-based control and forwarding plane protocols can enable interconnectivity by leveraging control-plane learning.

In particular, this guide describes how using EVPN for control-plane learning can facilitate BMS interconnection within VXLAN virtual networks (VNs), and between VNs using a gateway such as a Juniper Networks QFX Series switch.

Solution Elements

Underlay Network

In data center environments, the role of the physical underlay network is to provide an IP fabric. Also known as a Clos network, its responsibility is to provide unicast IP connectivity from any physical device (server, storage device, router, or switch) to any other physical device. An ideal underlay network provides low-latency, nonblocking, high-bandwidth connectivity from any point in the network to any other point in the network. At the underlay layer, devices maintain and share reachability information about the physical network itself. However, this layer does not contain any per-tenant state; that is, devices do not maintain and share reachability information about virtual or physical endpoints. This is a task for the overlay layer.

IP fabrics can vary in size and scale. A typical solution uses two layers, spine and leaf, to form what is known as a three-stage Clos network, where each leaf device is connected to each spine device, as shown in Figure 1 on page 7. A spine and leaf fabric is sometimes referred to as a folded, three-stage Clos network, because the first and third stages (the ingress and egress nodes) are folded back on top of each other. In this configuration, spine devices are typically Layer 3 switches that provide connectivity between leaf devices, and leaf devices are top-of-rack (TOR) switches that provide connectivity to the servers.

Figure 1: Three-Stage Clos-Based IP Fabric
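Each link in Figure 1 is a routed point-to-point link. As a minimal sketch of the underlay building blocks, the following statements (taken from the leaf configuration later in this guide) configure a jumbo-frame leaf-to-spine link addressed out of a /31, plus a loopback that later serves as the VTEP source address:

[edit]
set interfaces et-0/0/50 description "To Spine 1"
set interfaces et-0/0/50 mtu 9192
set interfaces et-0/0/50 unit 0 family inet mtu 9000
set interfaces et-0/0/50 unit 0 family inet address 172.16.0.33/31
set interfaces lo0 unit 0 family inet address 10.0.0.21/32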

As the scale of the fabric increases, it can be necessary to expand to a five-stage Clos network, as shown in Figure 2 on page 8. This scenario adds a fabric layer to provide inter-pod, or inter-data center, connectivity.

Figure 2: Five-Stage Clos-Based IP Fabric

A key benefit of a Clos-based fabric is natural resiliency. High availability mechanisms, such as MC-LAG or Virtual Chassis, are not required because the IP fabric uses multiple links at each layer and device; resiliency and redundancy are provided by the physical network infrastructure itself. Building an IP fabric is very straightforward and serves as a great foundation for overlay technologies such as EVPN and VXLAN.

NOTE: For more information about Clos-based IP fabrics, see Clos IP Fabrics with QFX5100 Switches.

Overlay

Using an overlay architecture in the data center allows you to decouple physical network devices from the endpoints in the network. This decoupling allows the data center network to be programmatically provisioned at a per-tenant level. Overlay networking generally supports both Layer 2 and Layer 3 transport between servers or VMs. It also supports a much larger scale: a traditional network using VLANs for separation can support a maximum of about 4,000 tenants, while an overlay protocol such as VXLAN supports over 16 million.

NOTE: At the time of this writing, QFX5100 and QFX10000 Series switches support 4,000 virtual network identifiers (VNIs) per device.

Virtual networks (VNs) are a key concept in an overlay environment. VNs are logical constructs implemented on top of the physical network that replace VLAN-based isolation and provide multitenancy in a virtualized data center. Each VN is isolated from other VNs unless explicitly allowed by security policy. VNs can be interconnected within a data center, and between data centers.

In data center networks, tunneling protocols such as VXLAN are used to create the data plane for the overlay layer. For devices using VXLAN, each entity that performs the encapsulation and decapsulation of packets is called a VXLAN tunnel endpoint (VTEP). VTEPs typically reside within the hypervisor of virtualized hosts, but can also reside in network devices to support BMS endpoints.
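When the VTEP role is placed in a network device, the mapping is configured directly on the switch. The following minimal sketch, taken from the leaf configuration later in this guide, anchors the VTEP to the loopback address and stitches a VLAN to a VXLAN network identifier (VNI):

[edit]
set switch-options vtep-source-interface lo0.0
set vlans v1000 vlan-id 100
set vlans v1000 vxlan vni 1000
set vlans v1000 vxlan ingress-node-replication

With this in place, frames received on VLAN 100 from a bare-metal server are encapsulated into VNI 1000 and tunneled to the remote VTEP, with ingress replication handling broadcast, unknown unicast, and multicast traffic.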

Figure 3 on page 9 shows a typical overlay architecture.

Figure 3: Overlay Architecture

In the diagram, the server to the left of the IP fabric has been virtualized with a hypervisor. The hypervisor contains a VTEP that handles the encapsulation of data-plane traffic between VMs, as well as MAC address learning, provisioning of new virtual networks, and other configuration changes. The physical servers above and to the right of the IP fabric do not have any VTEP capabilities of their own. In order for these servers to participate in the overlay architecture and communicate with other endpoints (physical or virtual), they need help to encapsulate the data-plane traffic and perform MAC address learning. In this case, that help comes from the attached network device, typically a top-of-rack (TOR) switch or a leaf device in the IP fabric. Supporting the VTEP role in a network device simplifies the overlay architecture; now any device with physical servers connected to it can simply perform the overlay encapsulation and control-plane functions on their behalf. From the point of view of a physical server, the network functions as usual.

NOTE: For more information on VXLAN and VTEPs in overlay networks, see Learn About: VXLAN in Virtualized Data Center Networks.

To support the scale of data center networks, the overlay layer typically requires a control-plane protocol to facilitate learning and sharing of endpoints. EVPN is a popular choice for this function. EVPN is a control-plane technology that uses Multiprotocol BGP (MP-BGP) for MAC and IP address (endpoint) distribution, with MAC addresses being treated as routes. Route entries can contain just a MAC address, or a MAC address plus an IP address (ARP entry). As used in data center environments, EVPN enables devices acting as VTEPs to exchange reachability information with each other about their endpoints.

To support its range of capabilities, EVPN introduces several new concepts, including new route types and BGP communities. It also defines a new BGP network layer reachability information (NLRI), called the EVPN NLRI. For this solution, two route types are of particular note:

- EVPN Route Type 2: MAC/IP Advertisement route. Extends BGP to advertise MAC and IP addresses in the EVPN NLRI. Key uses of this route type include advertising host MAC and IP reachability, allowing control plane-based MAC learning for remote PE devices, minimizing flooding across a WAN, and allowing PE devices to perform proxy ARP locally for remote hosts. Typically, the Type 2 route is used to support Layer 2 (intra-VXLAN) traffic, though it can also support Layer 3 (inter-VXLAN) traffic.

- EVPN Route Type 5: IP Prefix route. Extends EVPN with a route type for the advertisement of IP prefixes. This route type decouples the advertisement of IP information from the advertisement of MAC addresses. The ability to advertise an entire IP prefix provides improved scaling (versus advertising MAC/IP information for every host), as well as increased efficiency in advertising and withdrawing routes. Typically, the Type 5 route is used to support Layer 3 (inter-VXLAN) traffic.

NOTE: For more information on EVPN in a data center context, see Improve Data Center Interconnect, L2 Services with Juniper's EVPN.

Moving to an overlay architecture shifts the intelligence of the data center. Traditionally, servers and VMs each consume a MAC address and host route entry in the physical (underlay) network. However, with an overlay architecture, only the VTEPs consume a MAC address and host route entry in the physical network. All host-to-host traffic is now encapsulated between VTEPs, and the MAC address and host route of each server or VM aren't visible to the underlying networking equipment. The MAC address and host route scale have been moved from the underlay environment into the overlay.

Gateways

A gateway in a virtualized network environment typically refers to physical routers or switches that connect the tenant virtual networks to physical networks such as the Internet, a customer VPN, another data center, or nonvirtualized servers. This solution uses multiple types of gateways.

A Layer 2 VXLAN gateway, also known as a VTEP gateway, maps VLANs to VXLANs and handles VXLAN encapsulation and decapsulation so that non-virtualized resources do not need to support the VXLAN protocol. This permits the VXLAN and VLAN segments to act as one forwarding domain. In data center environments, a VTEP gateway often runs in software as a virtual switch or virtual router instance on a virtualized server. However, switches and routers can also function as VTEP gateways, encapsulating and decapsulating VXLAN packets on behalf of bare-metal servers, as shown earlier in Figure 3 on page 9. This setup is referred to as a hardware VTEP gateway. In this solution, the QFX5100 (leaf) devices act as Layer 2 gateways to support intra-VXLAN traffic.

To forward traffic between VXLANs, a Layer 3 gateway is required. In this solution, the QFX10002 (spine) devices act as Layer 3 gateways to support inter-VXLAN traffic.

NOTE: For more information on Layer 3 gateways in a data center context, see Day One: Using Ethernet VPNs for Data Center Interconnect and Juniper Networks EVPN Implementation for Next-Generation Data Center Architectures.
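To make the Layer 2 gateway role concrete, the following minimal sketch shows the statements that turn a leaf into an EVPN-VXLAN VTEP: the EVPN address family on the overlay BGP session, VXLAN encapsulation for EVPN, the VNIs the device participates in, and the EVPN instance identifiers. The statements mirror the Leaf 1 configuration later in this guide:

[edit]
set protocols bgp group overlay-evpn family evpn signaling
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list 1000
set protocols evpn multicast-mode ingress-replication
set switch-options route-distinguisher 10.0.0.21:1
set switch-options vrf-target target:9999:9999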

Design Considerations

There are several design considerations when implementing an IaaS network.

Fabric Connectivity

Data center fabrics can be based on Layer 2 or Layer 3 technologies. Ethernet fabrics, such as Juniper Networks Virtual Chassis Fabric, are simple to manage and provide scale and equal-cost multipath (ECMP) capabilities to a certain degree. However, as the fabric increases in size, the scale of the network eventually becomes too much for an Ethernet fabric to handle. Tenant separation is another issue; because Ethernet fabrics have no overlay network, VLANs must be used, adding another limitation to the scalability of the network.

An IaaS data center network requires Layer 3 protocols to provide the ECMP and scale capabilities for a network of this size. While IGPs provide excellent ECMP capabilities, BGP is the ideal option to provide the proper scaling and performance required by this solution. BGP was designed to handle the scale of the global Internet, and can be repurposed to support the needs of top-tier service provider data centers.

BGP Design (Underlay)

With BGP decided upon as the routing protocol for the fabric, the next decision is whether to use internal BGP (IBGP) or external BGP (EBGP). The very nature of an IP fabric requires having multiple, equal-cost paths; therefore, the key factor to consider here is how IBGP and EBGP implement ECMP functionality.

IBGP requires that all devices peer with one another. In an IaaS network, BGP route reflectors would typically be implemented in the spine layer of the network to help with scaling. However, standard BGP route reflection only reflects the best (single) prefix to clients. In order to enable full ECMP, you need to configure the BGP AddPath feature to provide additional ECMP paths in the BGP route reflection advertisements to clients.

Alternatively, EBGP supports ECMP without enabling additional features. It is easy to configure, and also facilitates traffic engineering if desired through standard EBGP techniques such as autonomous system (AS) padding. With EBGP, each device in the IP fabric uses a different AS number. It is also a good practice to align the AS numbers within each layer.
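As a minimal sketch of this EBGP underlay approach, using the AS numbers and neighbor addresses from the example later in this guide, a leaf device peering with two spine devices might be configured as follows; multipath multiple-as is what allows ECMP across neighbors in different autonomous systems:

[edit]
set protocols bgp group underlay-ipfabric type external
set protocols bgp group underlay-ipfabric local-as 65021
set protocols bgp group underlay-ipfabric export bgp-ipclos-out
set protocols bgp group underlay-ipfabric multipath multiple-as
set protocols bgp group underlay-ipfabric neighbor 172.16.0.32 peer-as 65011
set protocols bgp group underlay-ipfabric neighbor 172.16.0.36 peer-as 65012

The export policy (bgp-ipclos-out) limits the underlay to advertising loopback addresses, which is all the overlay needs for VTEP-to-VTEP reachability.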

As an example, Figure 4 on page 12 shows the spine layer with AS numbering in the 651xx range, and the leaf layer with AS numbering in the 652xx range.

Figure 4: AS Numbering in an IP Fabric Underlay (spine devices: ASN 65101 and 65102; leaf devices: ASN 65201 through 65204)

Because EBGP supports ECMP in a more straightforward fashion, an EBGP-based IP fabric is typically used at the underlay layer.

NOTE: For information on Juniper Networks validated Clos-based Layer 3 IP fabric solution, see Solution Guide: Software as a Service.

BGP Design (Overlay)

At the overlay layer, similar decisions must be made. Again, the very nature of an IP fabric requires having multiple, equal-cost paths. In addition, you must consider the overlay protocol being used. This solution uses EVPN as the control-plane protocol for the overlay; given that EVPN uses MP-BGP for communication (signaling), BGP is again a logical choice for the overlay.

There is more than one way to design the overlay environment. Because this solution is controllerless, meaning there is no SDN controller in use, the network itself must perform both the underlay and overlay functions. This solution uses an IBGP overlay design with route reflection, as shown in Figure 5 on page 12. With this design, leaf devices within a given point of delivery (POD) share endpoint information upstream as EVPN routes to the spine devices, which act as route reflectors. The spine devices reflect the routes downstream to the other leaf devices.

Figure 5: BGP (EVPN) Overlay Design - Single POD
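The route reflection design in Figure 5 maps to a small amount of BGP configuration. A minimal sketch of the overlay session as seen from a leaf device follows, using the single overlay AS (65200) and the spine loopbacks from the example later in this guide; on the spine side, the same group additionally carries a route reflector cluster identifier per POD, and the complete spine configuration appears in the example:

[edit]
set protocols bgp group overlay-evpn type internal
set protocols bgp group overlay-evpn local-address 10.0.0.21
set protocols bgp group overlay-evpn family evpn signaling
set protocols bgp group overlay-evpn local-as 65200
set protocols bgp group overlay-evpn neighbor 10.0.0.11
set protocols bgp group overlay-evpn neighbor 10.0.0.12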

The spine devices can also advertise the EVPN routes to other PODs. As shown in Figure 6 on page 13, the spine devices use an MP-IBGP full mesh to share EVPN routes and provide inter-pod communication.

Figure 6: BGP (EVPN) Overlay Design - Multiple PODs

NOTE: For more information about Clos-based IP fabric design, see Clos IP Fabrics with QFX5100 Switches.

EVPN Design

As noted above, this solution uses EVPN as the control-plane protocol for the overlay. EVPN runs between VXLAN gateways, and removes the need for VXLAN to handle the advertisement of MAC and IP reachability information in the data plane by enabling this functionality in the control plane.

A multitenant data center environment requires mechanisms to support traffic flows both within and between VNs. For this solution, intra-VXLAN traffic is handled at the leaf layer, with the QFX5100 switches acting as VXLAN Layer 2 gateways. Inter-VXLAN traffic is handled at the spine layer, with the QFX10002 switches acting as VXLAN Layer 3 gateways. Spine devices are configured with integrated routing and bridging (IRB) interfaces, which endpoints use as a default gateway for non-local traffic.

Intra-VXLAN forwarding is typically performed with the help of EVPN route Type 2 announcements, which advertise MAC addresses (along with their related IP addresses). Inter-VXLAN routing can also be performed using EVPN route Type 2 announcements, though it is increasingly performed with the help of EVPN route Type 5 announcements, which advertise entire IP prefixes.

Inter-VXLAN routing supports two operating modes: asymmetric and symmetric. These terms relate to the number of lookups performed by the devices at each end of a VXLAN tunnel. The following describes the two modes:

- Asymmetric mode: The sending device maintains explicit reachability to all remote endpoints. Benefit: only a single lookup is required on the receiving device (since the endpoint was already known by the sending device). Drawback: large environments can cause very large lookup tables.

- Symmetric mode: The sending device does not maintain explicit reachability to all remote endpoints; rather, it puts remote traffic into a single routing VXLAN tunnel and lets the receiving device perform the endpoint lookup locally. Benefit: reduces lookup table size. Drawback: an additional lookup is required by the receiving device (since the endpoint was not explicitly known by the sending device).

This solution uses symmetric mode for inter-VXLAN routing. This mode is generally preferred, as current Junos OS platforms can perform multiple lookups in hardware with no impact to line-rate performance.

NOTE: At the time of this writing, the QFX10002 and MX Series routers support asymmetric mode with EVPN route Type 2. The QFX10002 also supports symmetric mode with EVPN route Type 5.

NOTE: For more detailed information on inter-VXLAN routing, see Configuring EVPN Type 5 for QFX10000 Series Switches.

EVPN supports all-active (multipath) forwarding for endpoints, allowing them to be connected to two or more leaf devices for redundant connectivity, as shown in Figure 7 on page 14.

Figure 7: EVPN Server Multihoming

In EVPN terms, the links to a multihomed server are defined as a single Ethernet segment. Each Ethernet segment is identified using a unique Ethernet segment identifier (ESI).

NOTE: For more detailed information about EVPN ESIs, see EVPN Multihoming Overview.
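As an illustration only (the multihoming used in this example is covered in the Configuring Host Multihoming section later in this guide), on Junos platforms and releases that support EVPN multihoming, an all-active Ethernet segment is typically expressed by assigning the same ESI to the server-facing aggregated Ethernet interface on each leaf device it attaches to. The interface name and ESI value below are hypothetical:

[edit]
set interfaces ae0 esi 00:01:01:01:01:01:01:01:01:01
set interfaces ae0 esi all-active
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members 100-108

Both leaf devices would normally also share the same LACP system ID so that the multihomed server sees a single LAG.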

VXLAN Design

VXLAN in the overlay has the following design characteristics:

- Each bridge domain / VXLAN network identifier (VNI) must have a VXLAN tunnel to each spine and leaf in a full mesh, that is, any-to-any connectivity.
- VXLAN is the data plane encapsulation between servers.
- EVPN is used as the control plane for MAC address learning.

An example of the VXLAN design for this solution is shown in Figure 8 on page 15.

Figure 8: VXLAN Design

Tenant Design

This solution provides tenant separation and connectivity at the spine and leaf layers. Tenant design in the spine devices has the following design characteristics (a configuration sketch follows the list):

- Each tenant gets its own VRF.
- Each tenant VRF can have multiple bridge domains.
- Bridge domains within a VRF can switch and route freely.
- Bridge domains between VRFs must not switch and route.
- Each bridge domain must provide VXLAN Layer 2 gateway functionality.
- Each bridge domain will have a routed Layer 3 interface.
- IRB interfaces must be able to perform inter-VXLAN routing.
- Each spine device in the POD must be configured with identical VRF, bridge domain, and IRB components.
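The following is a minimal sketch of how one tenant might be expressed on a spine device; the routing instance name, route distinguisher, and route target are hypothetical, and the IRB units correspond to two of the tenant's bridge domains. The complete, validated spine configuration appears later in this example:

[edit]
set routing-instances VRF-Tenant-1 instance-type vrf
set routing-instances VRF-Tenant-1 interface irb.100
set routing-instances VRF-Tenant-1 interface irb.101
set routing-instances VRF-Tenant-1 route-distinguisher 10.0.0.11:100
set routing-instances VRF-Tenant-1 vrf-target target:100:100

Because a tenant's bridge domains share one VRF, their IRB interfaces can route freely to each other, while bridge domains placed in different VRFs remain isolated.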

An example of the spine tenant design for this solution is shown in Figure 9 on page 16.

Figure 9: Tenant Design in Spine Devices

By comparison, tenant design in the leaf devices is very simple, with the following design characteristics:

- Leaf devices are Layer 2 only (no VRFs or IRB interfaces).
- By default, all traffic is isolated per bridge domain.
- Although a given tenant might own BD1, BD2, and BD3, there are no VRFs on the leaf device.

An example of the leaf tenant design for this solution is shown in Figure 10 on page 16.

Figure 10: Tenant Design in Leaf Devices

IRB Design

Inter-VXLAN gateway functionality is implemented in this solution at the spine layer, using IRB interfaces. These interfaces have the following design characteristics (a configuration sketch follows the list):

- Every bridge domain must have a routed Layer 3 interface that is associated with an IRB interface.
- Each bridge domain's IRB interface can use IPv4 addressing, IPv6 addressing, or both.
- Each spine device must use the same IPv4 and IPv6 IRB interface addresses (this reduces the number of public IP addresses consumed at scale).
- Each spine must implement EVPN anycast gateway.

An example of the IRB interface design for this solution is shown in Figure 11 on page 16.

Figure 11: IRB Interface Design on Spine Devices
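A minimal sketch of one such IRB interface follows, using the VLAN 100 anycast IPv4 and IPv6 addresses from the addressing tables later in this guide; under this design the same addresses are configured on every spine device in the POD, so an endpoint always reaches its default gateway at the closest spine. How the gateway MAC is shared between the spine devices is part of the full spine configuration in the example:

[edit]
set interfaces irb unit 100 family inet address 10.1.100.1/16
set interfaces irb unit 100 family inet6 address 2001:db8:10:1:100::1/80
set vlans v1000 l3-interface irb.100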

Solution Implementation Summary

The following hardware equipment and key software features were used to create the IaaS solution described in the upcoming example (a short sketch of the load-balancing configuration follows this summary):

Fabric

- Four QFX5100-24Q switches
- Underlay network:
  - EBGP peering with the downstream (spine) devices using two-byte AS numbers
- BFD for all BGP sessions
- Traffic load balancing:
  - EBGP multipath
  - Resilient hashing
  - Per-packet load balancing

Spine

- Four QFX10002-72Q switches
- Underlay network:
  - EBGP peering with the upstream (fabric) devices using two-byte AS numbers
  - EBGP peering with the downstream (leaf) devices using two-byte AS numbers
- Overlay network:
  - EVPN / IBGP full mesh between all spine devices
  - EVPN / IBGP route reflection to leaf devices
  - Each spine device is a route reflector for leaf devices in its POD
  - Each POD is a separate cluster
- BFD for all BGP sessions
- Traffic load balancing:
  - EBGP multipath
  - Resilient hashing
  - Per-packet load balancing
- Nine VLANs (100 to 108) to illustrate intra-VLAN and inter-VLAN traffic using EVPN route Type 2
- Two VLANs (999 on Spine 1 and Spine 2, 888 on Spine 3 and Spine 4) to illustrate inter-VLAN traffic using EVPN route Type 5

Leaf

- Four QFX5100-48S switches
- Underlay network:
  - EBGP peering with the upstream (spine) devices using two-byte AS numbers
- Overlay network:
  - EVPN / IBGP peering with the upstream (spine) devices using two-byte AS numbers
- BFD for all BGP sessions
- Traffic load balancing:
  - EBGP multipath
  - Resilient hashing
  - Per-packet load balancing
- Nine VLANs (100 to 108) to illustrate intra-VLAN and inter-VLAN traffic using EVPN route Type 2
- Two VLANs (999 on Leaf 1 and Leaf 2, 888 on Leaf 3 and Leaf 4) to illustrate inter-VLAN traffic using EVPN route Type 5

Servers / End hosts

- Bare-metal servers attached to leaf devices
- Traffic generator simulating BMS hosts, sending intra- and inter-VLAN traffic
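The traffic load-balancing items above translate into a small amount of configuration that is common to the devices in this example. The following sketch reproduces the relevant statements from the leaf configuration later in this guide; the per-packet load-balancing policy is applied to the forwarding table, and multipath is enabled on the BGP groups (resilient hashing for the server-facing interfaces is not shown here):

[edit]
set policy-options policy-statement pfe-ecmp then load-balance per-packet
set routing-options forwarding-table export pfe-ecmp
set protocols bgp group underlay-ipfabric multipath multiple-as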

Related Documentation

- Example: Configuring the IaaS: EVPN and VXLAN Solution on page 18

Example: Configuring the IaaS: EVPN and VXLAN Solution

This example describes how to build, configure, and verify a bare-metal server (BMS) network containing a BGP-based IP fabric underlay, supported by an EVPN and VXLAN overlay.

Requirements on page 18
Overview and Topology on page 19
Configuring the IaaS: EVPN and VXLAN Solution on page 26
Configuring Additional Features for the IaaS: EVPN and VXLAN Solution on page 51
Verification on page 60

Requirements

Table 1 on page 19 lists the hardware and software components used in this example.

Table 1: Solution Hardware and Software Requirements

Device            Hardware            Software
Fabric devices    QFX5100-24Q         Junos OS Release 14.1X53-D30.3
Spine devices     QFX10002-72Q        Junos OS Release 15.1X53-D60.4
Leaf devices      QFX5100-48S         Junos OS Release 14.1X53-D35.3
Host emulation    Traffic Generator

Overview and Topology

The topology used in this example consists of a series of QFX5100 and QFX10002 switches, as shown in Figure 12 on page 19.

Figure 12: IaaS: EVPN and VXLAN Solution - Underlay Topology

In this example, the fabric layer has four QFX5100-24Q switches, the spine layer has four QFX10002-72Q switches, and the leaf layer uses four QFX5100-48S switches. Leaf 1, Leaf 2, Spine 1, and Spine 2 are included in a single point of delivery (POD) named POD 1; Leaf 3, Leaf 4, Spine 3, and Spine 4 are included in POD 2. Both data center PODs connect to the fabric layer, which provides inter-pod connectivity.

NOTE: This topology simulates conditions for PODs contained either in the same data center or PODs located in different data centers.

Two hosts are connected to each of Leaf 1, Leaf 2, and Leaf 3. One host is dual-homed to Leaf 3 and Leaf 4 through Switch 5, and one host is single-homed to Leaf 4. This first diagram also represents the EBGP underlay for the solution, which uses an individual autonomous system number and a unique loopback address for each device for easy monitoring and troubleshooting of the network.

The topology for the overlay is shown in Figure 13 on page 20.

Figure 13: IaaS: EVPN and VXLAN Solution - Overlay Topology

A full-mesh IBGP configuration connects the spine devices together, and all spine and leaf devices belong to a single autonomous system (65200). A route reflector cluster is assigned to each POD and enables the leaf devices within the POD to have redundant connections to the spine layer.

The example included in this solution explores the use of both Type 2 and Type 5 EVPN routes and contains configuration excerpts to enable you to select either option. In Figure 14 on page 21, Type 2 routes are distributed within the same VLAN.

Figure 14: IaaS: EVPN and VXLAN Solution - Type 2 Intra-VLAN Traffic

As shown, when traffic flows between hosts that are connected to the same leaf (1.1), the traffic stays local to the leaf and does not need to be sent to the upper layers.

To reach hosts connected to other leaf devices in the same POD (1.2), traffic travels between the leaf devices and spine devices across the IP fabric. Host traffic is switched using a VXLAN tunnel established between the leaf devices. The ingress leaf device encapsulates the host traffic with a VXLAN header, the traffic is switched using the outer header, and it travels over the spine layer to reach the other leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the destination host.

To reach hosts located in another POD (1.3), the traffic must be sent up through the leaf, spine, and fabric layers and then down through the spine and leaf layers in the second POD to reach the destination host. The VXLAN tunnel established between the leaf devices in the different PODs enables traffic to travel from the ingress leaf device, across the spine layer in the first POD, through the fabric layer, to the spine layer in the second POD, and on to the egress leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the destination host.

Figure 15 on page 22 shows how Type 2 routes are handled between different VLANs.

Figure 15: IaaS: EVPN and VXLAN Solution - Type 2 Inter-VLAN Traffic

As shown, the process is the same for all three cases of inter-VLAN traffic (1.1, 1.2, and 1.3) because they each require Layer 3 routing. Host traffic containing an inner header is encapsulated with a VXLAN header and an outer header that lists the local spine device as the destination. The spine device strips the outer header, de-encapsulates the VXLAN header, performs a route lookup on the inner header, and forwards the traffic across an EVPN routing instance to the respective host using a VXLAN tunnel that references the appropriate leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the destination host.

In this example, VLANs 100 to 108 illustrate intra-VLAN and inter-VLAN traffic using EVPN route Type 2.

As a final option, Figure 16 on page 23 shows how Type 5 routes are handled between VLANs.

Figure 16: IaaS: EVPN and VXLAN Solution - Type 2 and Type 5 Inter-VLAN Traffic

For the first two cases (1.1 and 1.2), inter-VLAN traffic is handled the same way as shown in Figure 15 on page 22. However, when sending Type 5 inter-VLAN traffic between different data centers (1.3), the host traffic is encapsulated with a VXLAN header and an outer header that lists the local spine device as the destination. The local spine device de-encapsulates the VXLAN header, performs a route lookup on the inner header, and forwards the traffic across an EVPN routing instance to the remote spine device in the second POD by using a VXLAN header. The remote spine device de-encapsulates the packet and performs a route lookup in the respective routing instance based on the VNI number. The spine device then encapsulates the traffic and sends it across a VXLAN tunnel to the respective leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the destination host.

In this example, VLANs 999 (Spine 1 and Spine 2) and 888 (Spine 3 and Spine 4) illustrate inter-VLAN traffic using EVPN route Type 5.

NOTE: At the time this guide was written, Type 5 routes can only be used for inter-VLAN topologies. To support intra-VLAN topologies, use Type 2.
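As an illustration only (the Type 5 configuration used in this example appears in the configuration sections that follow), a Type 5 tenant routing instance on a spine device is typically expressed along these lines in Junos; the instance name, IRB unit, VNI, route distinguisher, and route target below are hypothetical:

[edit]
set routing-instances VRF-T5 instance-type vrf
set routing-instances VRF-T5 interface irb.999
set routing-instances VRF-T5 route-distinguisher 10.0.0.11:999
set routing-instances VRF-T5 vrf-target target:5:999
set routing-instances VRF-T5 protocols evpn ip-prefix-routes advertise direct-nexthop
set routing-instances VRF-T5 protocols evpn ip-prefix-routes encapsulation vxlan
set routing-instances VRF-T5 protocols evpn ip-prefix-routes vni 9999

The vni statement is what enables the VNI-based lookup described above: the remote spine device uses the VNI carried in the VXLAN header to select the tenant routing instance in which to perform the second route lookup.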

Table 2 on page 24 lists the IPv4 addresses used in this example, Table 3 on page 24 displays the IPv6 addresses used in this example, and Table 4 on page 25 lists the loopback addresses and autonomous system numbers for the fabric, spine, and leaf devices.

Table 2: IPv4 Addressing

Network                                       IPv4 Network Prefixes
Fabric to spine point-to-point links          172.16.0.0/24
Spine to leaf point-to-point links            172.16.0.0/24
Loopback IP addresses (for all devices)       10.0.0.0/24
Anycast IPv4 addresses                        A set of nine addresses that increment the third octet and use .1 for the fourth octet: 10.1.100.1/16, 10.1.101.1/16, 10.1.102.1/16, 10.1.103.1/16, 10.1.104.1/16, 10.1.105.1/16, 10.1.106.1/16, 10.1.107.1/16, 10.1.108.1/16
Server/traffic generator IPv4 host devices    A range of five addresses (0-4) per host, with the host number represented in the tens place. For example, Host 7 has the following range of addresses: 10.1.100.70/16 - 10.1.100.74/16.

Table 3: IPv6 Addressing

Network                                       IPv6 Network Prefixes
Anycast IPv6 addresses                        A set of nine addresses that increment the fifth double-octet and use :1 for the final double-octet: 2001:db8:10:1:100::1/80, 2001:db8:10:1:101::1/80, 2001:db8:10:1:102::1/80, 2001:db8:10:1:103::1/80, 2001:db8:10:1:104::1/80, 2001:db8:10:1:105::1/80, 2001:db8:10:1:106::1/80, 2001:db8:10:1:107::1/80, 2001:db8:10:1:108::1/80
Server/traffic generator IPv6 host devices    A set of addresses that increment the fifth double-octet and use :<210 + spine-number> for the final double-octet. For example, for Spine 1, 210 + 1 equals 211, so the corresponding IPv6 addresses are as follows: 2001:db8:10:1:100::211/80, 2001:db8:10:1:101::211/80, 2001:db8:10:1:102::211/80, 2001:db8:10:1:103::211/80, 2001:db8:10:1:104::211/80, 2001:db8:10:1:105::211/80, 2001:db8:10:1:106::211/80, 2001:db8:10:1:107::211/80, 2001:db8:10:1:108::211/80

Table 4: Loopback Addresses and Underlay ASNs for Fabric Devices, Spine Devices, and Leaf Devices

Device     Loopback Address    ASN
Fabric 1   10.0.0.1            65001
Fabric 2   10.0.0.2            65002
Fabric 3   10.0.0.3            65003
Fabric 4   10.0.0.4            65004
Spine 1    10.0.0.11           65011 (underlay), 65200 (overlay)
Spine 2    10.0.0.12           65012 (underlay), 65200 (overlay)
Spine 3    10.0.0.13           65013 (underlay), 65200 (overlay)
Spine 4    10.0.0.14           65014 (underlay), 65200 (overlay)
Leaf 1     10.0.0.21           65021 (underlay), 65200 (overlay)
Leaf 2     10.0.0.22           65022 (underlay), 65200 (overlay)

Infrastructure as a Service: EVPN and VXLAN Table 4: Loopback Addresses and Underlay ASNs for Fabric Devices, Spine Devices, and Leaf Devices (continued) Loopback Address ASN Leaf 3 10.0.0.23 65023 (underlay) 65200 (overlay) Leaf 4 10.0.0.24 65024 (underlay) 65200 (overlay) Configuring the IaaS: EVPN and VXLAN Solution NOTE: You can use Ansible scripts to generate a large portion of the IP fabric and EVPN VXLAN configurations. For more information, see: Ansible Junos Configuration for EVPN/VXLAN. This section explains how to build out the leaf, spine, and fabric layers with an EBGP-based IP fabric underlay and an IBGP-based EVPN and VXLAN overlay for the solution. It includes the following sections: Configuring Leaf Devices for the IaaS: EVPN and VXLAN Solution on page 26 Configuring Spine Devices for the IaaS: EVPN and VXLAN Solution on page 34 Configuring Fabric Devices for the IaaS: EVPN and VXLAN Solution on page 47 Configuring Host Multihoming on page 50 Configuring Leaf Devices for the IaaS: EVPN and VXLAN Solution CLI Quick Configuration To quickly configure the leaf devices, enter the following representative configuration statements on each device: NOTE: The configuration shown here applies to Leaf 1. [edit] set interfaces xe-0/0/12 description "To Host 1" set interfaces xe-0/0/12 unit 0 family ethernet-switching interface-mode trunk set interfaces xe-0/0/12 unit 0 family ethernet-switching vlan members 100-108 set interfaces xe-0/0/12 unit 0 family ethernet-switching vlan members 999 set interfaces xe-0/0/13 description "To Host 5" set interfaces xe-0/0/13 unit 0 family ethernet-switching interface-mode trunk set interfaces xe-0/0/13 unit 0 family ethernet-switching vlan members 100-108 set interfaces xe-0/0/13 unit 0 family ethernet-switching vlan members 999 set interfaces et-0/0/50 description "To Spine 1" set interfaces et-0/0/50 mtu 9192 set interfaces et-0/0/50 unit 0 family inet mtu 9000 set interfaces et-0/0/50 unit 0 family inet address 172.16.0.33/31 26

Chapter 1: Infrastructure as a Service: EVPN and VXLAN set interfaces et-0/0/51 description "To Spine 2" set interfaces et-0/0/51 mtu 9192 set interfaces et-0/0/51 unit 0 family inet mtu 9000 set interfaces et-0/0/51 unit 0 family inet address 172.16.0.37/31 set interfaces lo0 unit 0 family inet address 10.0.0.21/32 set routing-options forwarding-table export pfe-ecmp set routing-options router-id 10.0.0.21 set protocols bgp group underlay-ipfabric type external set protocols bgp group underlay-ipfabric mtu-discovery set protocols bgp group underlay-ipfabric import bgp-ipclos-in set protocols bgp group underlay-ipfabric export bgp-ipclos-out set protocols bgp group underlay-ipfabric local-as 65021 set protocols bgp group underlay-ipfabric bfd-liveness-detection minimum-interval 350 set protocols bgp group underlay-ipfabric bfd-liveness-detection multiplier 3 set protocols bgp group underlay-ipfabric bfd-liveness-detection session-mode automatic set protocols bgp group underlay-ipfabric multipath multiple-as set protocols bgp group underlay-ipfabric neighbor 172.16.0.32 peer-as 65011 set protocols bgp group underlay-ipfabric neighbor 172.16.0.36 peer-as 65012 set protocols bgp log-updown set protocols bgp graceful-restart set protocols bgp group overlay-evpn type internal set protocols bgp group overlay-evpn local-address 10.0.0.21 set protocols bgp group overlay-evpn import OVERLAY-IN set protocols bgp group overlay-evpn family evpn signaling set protocols bgp group overlay-evpn local-as 65200 set protocols bgp group overlay-evpn bfd-liveness-detection minimum-interval 350 set protocols bgp group overlay-evpn bfd-liveness-detection multiplier 3 set protocols bgp group overlay-evpn bfd-liveness-detection session-mode automatic set protocols bgp group overlay-evpn multipath set protocols bgp group overlay-evpn neighbor 10.0.0.11 set protocols bgp group overlay-evpn neighbor 10.0.0.12 set protocols evpn vni-options vni 1000 vrf-target export target:1:1000 set protocols evpn vni-options vni 1001 vrf-target export target:1:1001 set protocols evpn vni-options vni 1002 vrf-target export target:1:1002 set protocols evpn vni-options vni 1003 vrf-target export target:1:1003 set protocols evpn vni-options vni 1004 vrf-target export target:1:1004 set protocols evpn vni-options vni 1005 vrf-target export target:1:1005 set protocols evpn vni-options vni 1006 vrf-target export target:1:1006 set protocols evpn vni-options vni 1007 vrf-target export target:1:1007 set protocols evpn vni-options vni 1008 vrf-target export target:1:1008 set protocols evpn vni-options vni 1999 vrf-target export target:1:1999 set protocols evpn encapsulation vxlan set protocols evpn extended-vni-list 1000 set protocols evpn extended-vni-list 1001 set protocols evpn extended-vni-list 1002 set protocols evpn extended-vni-list 1003 set protocols evpn extended-vni-list 1004 set protocols evpn extended-vni-list 1005 set protocols evpn extended-vni-list 1006 set protocols evpn extended-vni-list 1007 set protocols evpn extended-vni-list 1008 set protocols evpn extended-vni-list 1999 set protocols evpn multicast-mode ingress-replication set protocols lldp interface all set policy-options community com1000 members target:1:1000 set policy-options community com1001 members target:1:1001 27

Infrastructure as a Service: EVPN and VXLAN set policy-options community com1002 members target:1:1002 set policy-options community com1003 members target:1:1003 set policy-options community com1004 members target:1:1004 set policy-options community com1005 members target:1:1005 set policy-options community com1006 members target:1:1006 set policy-options community com1007 members target:1:1007 set policy-options community com1008 members target:1:1008 set policy-options community com1999 members target:1:1999 set policy-options community comm-leaf_esi members target:9999:9999 set policy-options policy-statement bgp-ipclos-in term loopbacks from route-filter 10.0.0.0/16 orlonger set policy-options policy-statement bgp-ipclos-in term loopbacks then accept set policy-options policy-statement bgp-ipclos-out term loopback from protocol direct set policy-options policy-statement bgp-ipclos-out term loopback from route-filter 10.0.0.21/32 orlonger set policy-options policy-statement bgp-ipclos-out term loopback then next-hop self set policy-options policy-statement bgp-ipclos-out term loopback then accept set policy-options policy-statement bgp-ipclos-out term reject then reject set policy-options policy-statement LEAF-IN term import_leaf_esi from community comm-leaf_esi set policy-options policy-statement LEAF-IN term import_leaf_esi then accept set policy-options policy-statement LEAF-IN term import_vni1000 from community com1000 set policy-options policy-statement LEAF-IN term import_vni1000 then accept set policy-options policy-statement LEAF-IN term import_vni1001 from community com1001 set policy-options policy-statement LEAF-IN term import_vni1001 then accept set policy-options policy-statement LEAF-IN term import_vni1002 from community com1002 set policy-options policy-statement LEAF-IN term import_vni1002 then accept set policy-options policy-statement LEAF-IN term import_vni1003 from community com1003 set policy-options policy-statement LEAF-IN term import_vni1003 then accept set policy-options policy-statement LEAF-IN term import_vni1004 from community com1004 set policy-options policy-statement LEAF-IN term import_vni1004 then accept set policy-options policy-statement LEAF-IN term import_vni1005 from community com1005 set policy-options policy-statement LEAF-IN term import_vni1005 then accept set policy-options policy-statement LEAF-IN term import_vni1006 from community com1006 set policy-options policy-statement LEAF-IN term import_vni1006 then accept set policy-options policy-statement LEAF-IN term import_vni1007 from community com1007 set policy-options policy-statement LEAF-IN term import_vni1007 then accept set policy-options policy-statement LEAF-IN term import_vni1008 from community com1008 set policy-options policy-statement LEAF-IN term import_vni1008 then accept set policy-options policy-statement LEAF-IN term import_vni1999 from community com1999 set policy-options policy-statement LEAF-IN term import_vni1999 then accept set policy-options policy-statement LEAF-IN term default then reject set policy-options policy-statement OVERLAY-IN term reject-remote-gw from family evpn set policy-options policy-statement OVERLAY-IN term reject-remote-gw from next-hop 10.0.0.13 28

Chapter 1: Infrastructure as a Service: EVPN and VXLAN set policy-options policy-statement OVERLAY-IN term reject-remote-gw from next-hop 10.0.0.14 set policy-options policy-statement OVERLAY-IN term reject-remote-gw from nlri-route-type 1 set policy-options policy-statement OVERLAY-IN term reject-remote-gw from nlri-route-type 2 set policy-options policy-statement OVERLAY-IN term reject-remote-gw then reject set policy-options policy-statement OVERLAY-IN term accept-all then accept set policy-options policy-statement pfe-ecmp then load-balance per-packet set switch-options route-distinguisher 10.0.0.21:1 set switch-options vrf-import LEAF-IN set switch-options vrf-target target:9999:9999 set switch-options vtep-source-interface lo0.0 set vlans v1000 vlan-id 100 set vlans v1000 vxlan vni 1000 set vlans v1000 vxlan ingress-node-replication set vlans v1001 vlan-id 101 set vlans v1001 vxlan vni 1001 set vlans v1001 vxlan ingress-node-replication set vlans v1002 vlan-id 102 set vlans v1002 vxlan vni 1002 set vlans v1002 vxlan ingress-node-replication set vlans v1003 vlan-id 103 set vlans v1003 vxlan vni 1003 set vlans v1003 vxlan ingress-node-replication set vlans v1004 vlan-id 104 set vlans v1004 vxlan vni 1004 set vlans v1004 vxlan ingress-node-replication set vlans v1005 vlan-id 105 set vlans v1005 vxlan vni 1005 set vlans v1005 vxlan ingress-node-replication set vlans v1006 vlan-id 106 set vlans v1006 vxlan vni 1006 set vlans v1006 vxlan ingress-node-replication set vlans v1007 vlan-id 107 set vlans v1007 vxlan vni 1007 set vlans v1007 vxlan ingress-node-replication set vlans v1008 vlan-id 108 set vlans v1008 vxlan vni 1008 set vlans v1008 vxlan ingress-node-replication Step-by-Step Procedure To configure the leaf devices: 1. Configure Ethernet interfaces to reach the hosts : [edit] user@leaf-1# set interfaces xe-0/0/12 description "To Host 1" user@leaf-1# set interfaces xe-0/0/12 unit 0 family ethernet-switching interface-mode trunk user@leaf-1# set interfaces xe-0/0/12 unit 0 family ethernet-switching vlan members 100-108 user@leaf-1# set interfaces xe-0/0/12 unit 0 family ethernet-switching vlan members 999 user@leaf-1# set interfaces xe-0/0/13 description "To Host 5" user@leaf-1# set interfaces xe-0/0/13 unit 0 family ethernet-switching interface-mode trunk 29

Infrastructure as a Service: EVPN and VXLAN user@leaf-1# set interfaces xe-0/0/13 unit 0 family ethernet-switching vlan members 100-108 user@leaf-1# set interfaces xe-0/0/13 unit 0 family ethernet-switching vlan members 999 2. Configure the interfaces connecting the leaf device to the spine devices: [edit] user@leaf-1# set interfaces et-0/0/50 description "To Spine 1" user@leaf-1# set interfaces et-0/0/50 mtu 9192 user@leaf-1# set interfaces et-0/0/50 unit 0 family inet mtu 9000 user@leaf-1# set interfaces et-0/0/50 unit 0 family inet address 172.16.0.33/31 user@leaf-1# set interfaces et-0/0/51 description "To Spine 2" user@leaf-1# set interfaces et-0/0/51 mtu 9192 user@leaf-1# set interfaces et-0/0/51 unit 0 family inet mtu 9000 user@leaf-1# set interfaces et-0/0/51 unit 0 family inet address 172.16.0.37/31 3. Configure the loopback interface with a reachable IPv4 address. This loopback address is the tunnel source address. [edit] user@leaf-1# set interfaces lo0 unit 0 family inet address 10.0.0.21/32 4. Configure the router ID for the leaf device: [edit] user@leaf-1# set routing-options router-id 10.0.0.21 5. Configure an EBGP-based underlay between the leaf and spine devices and enable BFD and LLDP: [edit] user@leaf-1# set protocols bgp group underlay-ipfabric type external user@leaf-1# set protocols bgp group underlay-ipfabric mtu-discovery user@leaf-1# set protocols bgp group underlay-ipfabric import bgp-ipclos-in user@leaf-1# set protocols bgp group underlay-ipfabric export bgp-ipclos-out user@leaf-1# set protocols bgp group underlay-ipfabric local-as 65021 user@leaf-1# set protocols bgp group underlay-ipfabric bfd-liveness-detection minimum-interval 350 user@leaf-1# set protocols bgp group underlay-ipfabric bfd-liveness-detection multiplier 3 user@leaf-1# set protocols bgp group underlay-ipfabric bfd-liveness-detection session-mode automatic user@leaf-1# set protocols bgp group underlay-ipfabric multipath multiple-as user@leaf-1# set protocols bgp group underlay-ipfabric neighbor 172.16.0.32 peer-as 65011 user@leaf-1# set protocols bgp group underlay-ipfabric neighbor 172.16.0.36 peer-as 65012 user@leaf-1# set protocols lldp interface all 6. Create a routing policy that only advertises and receives loopback addresses from the IP fabric and EBGP underlay: [edit] user@leaf-1# set policy-options policy-statement bgp-ipclos-in term loopbacks from route-filter 10.0.0.0/16 orlonger user@leaf-1# set policy-options policy-statement bgp-ipclos-in term loopbacks then accept 30