Solution Guide: Infrastructure as a Service: EVPN and VXLAN
Copyright 2016, Juniper Networks, Inc.



Juniper Networks, Inc.
Innovation Way
Sunnyvale, California USA

All rights reserved. Juniper Networks, Junos, Steel-Belted Radius, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The Juniper Networks Logo, the Junos logo, and JunosE are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.

Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice. The information in this document is current as of the date on the title page.

YEAR 2000 NOTICE

Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the year. However, the NTP application is known to have some difficulty in the year.

END USER LICENSE AGREEMENT

The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks software. Use of such software is subject to the terms and conditions of the End User License Agreement (EULA) posted at. By downloading, installing, or using such software, you agree to the terms and conditions of that EULA.

Table of Contents

Chapter 1: Infrastructure as a Service: EVPN and VXLAN
    About This Solution Guide
    Understanding the IaaS: EVPN and VXLAN Solution
        Market Overview
        Solution Overview
        Solution Elements
        Design Considerations
        Solution Implementation Summary
    Example: Configuring the IaaS: EVPN and VXLAN Solution


Chapter 1: Infrastructure as a Service: EVPN and VXLAN

About This Solution Guide

- About This Solution Guide on page 5
- Understanding the IaaS: EVPN and VXLAN Solution on page 5
- Example: Configuring the IaaS: EVPN and VXLAN Solution on page 18

This Infrastructure as a Service (IaaS) solution focuses on the use of Ethernet VPN (EVPN) and Virtual Extensible LAN (VXLAN) over a bare-metal server (BMS)-based network. Such a network offers data center operators a way to create an external BGP (EBGP)-based IP fabric underlay, which provides a solid foundation for the EVPN and VXLAN overlay. By implementing this solution, telcos and data center operators can scale their cloud-enabled business, migrate legacy architectures to more flexible and modern ones, compete with emerging Web services providers, and manage costs, all at the same time.

This guide provides an overview of the IaaS: EVPN and VXLAN solution, the solution requirements, design considerations, and how the solution was implemented by the Juniper Networks solutions team. It also provides an example of how to configure the network and verify that the solution is working as expected.

Understanding the IaaS: EVPN and VXLAN Solution

- Market Overview on page 5
- Solution Overview on page 6
- Solution Elements on page 7
- Design Considerations on page 11
- Solution Implementation Summary on page 17

Market Overview

In addition to owning their transport infrastructure, service providers are also in the business of offering managed IT and managed data center services to a large variety of customers. Because service providers own the infrastructure, they have the ability to offer higher service-level agreements (SLAs), quality of service (QoS), and security, as

these services are often provided over dedicated circuits. However, the cost structure of these services can be relatively high, especially in comparison to the nimble, fast-executing Web services companies, whose cost structures are lean. As service providers increasingly feel this competitive pressure, they need to innovate their business models and adopt cloud computing architectures in order to lower costs, increase efficiency, and maintain their competitiveness in Infrastructure as a Service (IaaS) offerings. While they continue to use SLAs, flexibility of deployment, and choice of topologies to differentiate themselves from Web services providers, service providers also need to invest significantly in building highly automated networks. These improvements help cut operating expenses and enable providers to find new sources of revenue by offering new services, so they can compete more effectively.

Service providers vary widely in how they build traditional networks; there is no single standard or topology that all of them follow. However, as they move forward and extend their networks to offer cloud services, many providers are converging around two general topologies based on some high-level requirements:

- A large percentage of standalone bare-metal servers (BMSs), with some part of the network dedicated to offering virtualized compute services. This type of design keeps the intelligence in the traditional physical network.
- Largely virtualized services, with a small amount of BMS-based services. This type of design moves the intelligence out of the physical network and into the virtual network, and generally requires a software-defined networking (SDN) controller.

This solution guide focuses on the first use case, with a particular focus on the BMS environment.
This guide will help you understand the requirements for an IaaS network, the architecture required to build the network, how to configure each layer, and how to verify its operational state.

Solution Overview

Traditionally, data centers have used Layer 2 technologies such as Spanning Tree Protocol (STP) and multichassis link aggregation groups (MC-LAG) to connect compute and storage resources. As the design of these data centers evolves to scale out multitenant networks, a new data center architecture is needed that decouples the underlay (physical) network from a tenant overlay network. Using a Layer 3 IP-based underlay coupled with a VXLAN-Ethernet VPN (EVPN) overlay, data center and cloud operators can deploy much larger networks than are otherwise possible with traditional Layer 2 Ethernet-based architectures. With overlays, endpoints (servers or virtual machines [VMs]) can be placed anywhere in the network and remain connected to the same logical Layer 2 network, enabling the virtual topology to be decoupled from the physical topology.

For the reasons of scale and operational efficiency outlined above, virtual networking is being widely deployed in data centers. At the same time, bare-metal compute has become more relevant for high-performance, scale-out, or container-driven workloads. This solution guide describes how standards-based control and forwarding plane protocols can enable interconnectivity by leveraging control-plane learning. In particular, this guide describes how using EVPN for control-plane learning can facilitate BMS interconnection within

VXLAN virtual networks (VNs), and between VNs using a gateway such as a Juniper Networks QFX Series switch.

Solution Elements

Underlay Network

In data center environments, the role of the physical underlay network is to provide an IP fabric. Also known as a Clos network, its responsibility is to provide unicast IP connectivity from any physical device (server, storage device, router, or switch) to any other physical device. An ideal underlay network provides low-latency, nonblocking, high-bandwidth connectivity from any point in the network to any other point in the network.

At the underlay layer, devices maintain and share reachability information about the physical network itself. However, this layer does not contain any per-tenant state; that is, devices do not maintain and share reachability information about virtual or physical endpoints. That is a task for the overlay layer.

IP fabrics can vary in size and scale. A typical solution uses two layers, spine and leaf, to form what is known as a three-stage Clos network, where each leaf device is connected to each spine device, as shown in Figure 1 on page 7. A spine and leaf fabric is sometimes referred to as a folded, three-stage Clos network, because the first and third stages (the ingress and egress nodes) are folded back on top of each other. In this configuration, spine devices are typically Layer 3 switches that provide connectivity between leaf devices, and leaf devices are top-of-rack (TOR) switches that provide connectivity to the servers.

Figure 1: Three-Stage Clos-Based IP Fabric (diagram: four spine devices, each connected to five leaf devices)

As the scale of the fabric increases, it can be necessary to expand to a five-stage Clos network, as shown in Figure 2 on page 8. This scenario adds a fabric layer to provide inter-pod, or inter-data center, connectivity.

Figure 2: Five-Stage Clos-Based IP Fabric

A key benefit of a Clos-based fabric is natural resiliency. High availability mechanisms, such as MC-LAG or Virtual Chassis, are not required, as the IP fabric uses multiple links at each layer and device; resiliency and redundancy are provided by the physical network infrastructure itself. Building an IP fabric is very straightforward and serves as a great foundation for overlay technologies such as EVPN and VXLAN.

NOTE: For more information about Clos-based IP fabrics, see Clos IP Fabrics with QFX5100 Switches.

Overlay

Using an overlay architecture in the data center allows you to decouple physical network devices from the endpoints in the network. This decoupling allows the data center network to be programmatically provisioned at a per-tenant level. Overlay networking generally supports both Layer 2 and Layer 3 transport between servers or VMs. It also supports a much larger scale: a traditional network using VLANs for separation can support a maximum of about 4,000 tenants, while an overlay protocol such as VXLAN supports over 16 million.

NOTE: At the time of this writing, QFX5100 and QFX10000 Series switches support 4,000 virtual network identifiers (VNIs) per device.

Virtual networks (VNs) are a key concept in an overlay environment. VNs are logical constructs implemented on top of the physical networks that replace VLAN-based isolation and provide multitenancy in a virtualized data center. Each VN is isolated from other VNs unless explicitly allowed by security policy. VNs can be interconnected within a data center, and between data centers.

In data center networks, tunneling protocols such as VXLAN are used to create the data plane for the overlay layer. For devices using VXLAN, each entity that performs the encapsulation and decapsulation of packets is called a VXLAN tunnel endpoint (VTEP).
VTEPs typically reside within the hypervisor of virtualized hosts, but can also reside in network devices to support BMS endpoints.
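To make the hardware VTEP role concrete, the following sketch shows the general shape of a Junos OS leaf configuration that anchors the VTEP to the loopback interface and maps a VLAN to a VNI. The addresses, route target, VLAN, and VNI values here are hypothetical, chosen only for illustration; they are not taken from the validated example later in this guide:

```
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 10.0.0.11:1
set switch-options vrf-target target:65200:1
set vlans VLAN100 vlan-id 100
set vlans VLAN100 vxlan vni 1100
```

With this kind of configuration, traffic arriving on an access port in VLAN 100 is encapsulated into VXLAN VNI 1100 on behalf of the attached bare-metal server.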

Figure 3 on page 9 shows a typical overlay architecture.

Figure 3: Overlay Architecture

In the diagram, the server to the left of the IP fabric has been virtualized with a hypervisor. The hypervisor contains a VTEP that handles the encapsulation of data-plane traffic between VMs, as well as MAC address learning, provisioning of new virtual networks, and other configuration changes. The physical servers above and to the right of the IP fabric do not have any VTEP capabilities of their own. In order for these servers to participate in the overlay architecture and communicate with other endpoints (physical or virtual), they need help to encapsulate the data-plane traffic and perform MAC address learning. In this case, that help comes from the attached network device, typically a top-of-rack (TOR) switch or a leaf device in the IP fabric. Supporting the VTEP role in a network device simplifies the overlay architecture; any device with physical servers connected to it can simply perform the overlay encapsulation and control-plane function on their behalf. From the point of view of a physical server, the network functions as usual.

NOTE: For more information on VXLAN and VTEPs in overlay networks, see Learn About: VXLAN in Virtualized Data Center Networks.

To support the scale of data center networks, the overlay layer typically requires a control-plane protocol to facilitate learning and sharing of endpoints. EVPN is a popular choice for this function. EVPN is a control-plane technology that uses Multiprotocol BGP (MP-BGP) for MAC and IP address (endpoint) distribution, with MAC addresses being treated as routes. Route entries can contain just a MAC address, or a MAC address plus an IP address (ARP entry). As used in data center environments, EVPN enables devices acting as VTEPs to exchange reachability information with each other about their endpoints.
To support its range of capabilities, EVPN introduces several new concepts, including new route types and BGP communities. It also defines a new BGP network layer reachability information (NLRI), called the EVPN NLRI. For this solution, two route types are of particular note:

- EVPN Route Type 2: MAC/IP Advertisement route. Extends BGP to advertise MAC and IP addresses in the EVPN NLRI. Key uses of this route type include advertising host

MAC and IP reachability, allowing control plane-based MAC learning for remote PE devices, minimizing flooding across a WAN, and allowing PE devices to perform proxy ARP locally for remote hosts. Typically, the Type 2 route is used to support Layer 2 (intra-VXLAN) traffic, though it can also support Layer 3 (inter-VXLAN) traffic.

- EVPN Route Type 5: IP Prefix route. Extends EVPN with a route type for the advertisement of IP prefixes. This route type decouples the advertisement of IP information from the advertisement of MAC addresses. The ability to advertise an entire IP prefix provides improved scaling (versus advertising MAC/IP information for every host), as well as increased efficiency in advertising and withdrawing routes. Typically, the Type 5 route is used to support Layer 3 (inter-VXLAN) traffic.

NOTE: For more information on EVPN in a data center context, see Improve Data Center Interconnect, L2 Services with Juniper's EVPN.

Moving to an overlay architecture shifts the intelligence of the data center. Traditionally, servers and VMs each consume a MAC address and host route entry in the physical (underlay) network. With an overlay architecture, however, only the VTEPs consume a MAC address and host route entry in the physical network. All host-to-host traffic is now encapsulated between VTEPs, and the MAC address and host route of each server or VM aren't visible to the underlying networking equipment. The MAC address and host route scale have been moved from the underlay environment into the overlay.

Gateways

A gateway in a virtualized network environment typically refers to physical routers or switches that connect the tenant virtual networks to physical networks such as the Internet, a customer VPN, another data center, or nonvirtualized servers. This solution uses multiple types of gateways.
A Layer 2 VXLAN gateway, also known as a VTEP gateway, maps VLANs to VXLANs and handles VXLAN encapsulation and decapsulation so that non-virtualized resources do not need to support the VXLAN protocol. This permits the VXLAN and VLAN segments to act as one forwarding domain. In data center environments, a VTEP gateway often runs in software as a virtual switch or virtual router instance on a virtualized server. However, switches and routers can also function as VTEP gateways, encapsulating and decapsulating VXLAN packets on behalf of bare-metal servers, as shown earlier in Figure 3 on page 9. This setup is referred to as a hardware VTEP gateway. In this solution, the QFX5100 (leaf) devices act as Layer 2 gateways to support intra-VXLAN traffic.

To forward traffic between VXLANs, a Layer 3 gateway is required. In this solution, the QFX10002 (spine) devices act as Layer 3 gateways to support inter-VXLAN traffic.
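On QFX switches acting as hardware VTEP gateways, the local and remote tunnel endpoints can be inspected from the CLI. The following operational commands are a general sketch of this kind of verification; the addresses and VNIs in any real output depend on the deployment:

```
show ethernet-switching vxlan-tunnel-end-point source
show ethernet-switching vxlan-tunnel-end-point remote
```

The first command displays the local VTEP (anchored to the loopback interface), and the second lists the remote VTEPs that have been learned and the VNIs reachable through them.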

NOTE: For more information on Layer 3 gateways in a data center context, see Day One: Using Ethernet VPNs for Data Center Interconnect and Juniper Networks EVPN Implementation for Next-Generation Data Center Architectures.

Design Considerations

There are several design considerations when implementing an IaaS network.

Fabric Connectivity

Data center fabrics can be based on Layer 2 or Layer 3 technologies. Ethernet fabrics, such as Juniper Networks Virtual Chassis Fabric, are simple to manage and provide scale and equal-cost multipath (ECMP) capabilities to a certain degree. However, as the fabric increases in size, the scale of the network eventually becomes too much for an Ethernet fabric to handle. Tenant separation is another issue; because Ethernet fabrics have no overlay network, VLANs must be used, adding another limitation to the scalability of the network.

An IaaS data center network requires Layer 3 protocols to provide the ECMP and scale capabilities for a network of this size. While IGPs provide excellent ECMP capabilities, BGP is the ideal option to provide the proper scaling and performance required by this solution. BGP was designed to handle the scale of the global Internet, and can be repurposed to support the needs of top-tier service provider data centers.

BGP Design (Underlay)

With BGP decided upon as the routing protocol for the fabric, the next decision is whether to use internal BGP (IBGP) or external BGP (EBGP). The very nature of an IP fabric requires having multiple, equal-cost paths; therefore, the key factor to consider here is how IBGP and EBGP implement ECMP functionality.

IBGP requires that all devices peer with one another. In an IaaS network, BGP route reflectors typically would be implemented in the spine layer of the network to help with scaling. However, standard BGP route reflection only reflects the best (single) prefix to clients.
In order to enable full ECMP, you need to configure the BGP AddPath feature to provide additional ECMP paths in the BGP route reflection advertisements to clients.

Alternatively, EBGP supports ECMP without enabling additional features. It is easy to configure, and also facilitates traffic engineering, if desired, through standard EBGP techniques such as autonomous system (AS) padding. With EBGP, each device in the IP fabric uses a different AS number. It is also a good practice to align the AS numbers within each layer. As an example, Figure 4 on page 12 shows the spine layer with AS numbering in the 651xx range, and the leaf layer with AS numbering in the 652xx range.
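On a leaf device, the EBGP underlay described above reduces to a small configuration. The following sketch uses hypothetical AS numbers and peer addresses chosen to match the 651xx/652xx convention; the values used in the validated example appear later in this guide:

```
set routing-options autonomous-system 65201
set protocols bgp group underlay type external
set protocols bgp group underlay family inet unicast
set protocols bgp group underlay multipath multiple-as
set protocols bgp group underlay neighbor 172.16.1.1 peer-as 65101
set protocols bgp group underlay neighbor 172.16.2.1 peer-as 65102
set policy-options policy-statement load-balance term 1 then load-balance per-packet
set routing-options forwarding-table export load-balance
```

The multipath multiple-as statement allows ECMP across paths received from different AS numbers, and the exported load-balance policy installs all equal-cost next hops in the forwarding table.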

Figure 4: AS Numbering in an IP Fabric Underlay (diagram: two spine ASs above four leaf ASs)

Because EBGP supports ECMP in a more straightforward fashion, an EBGP-based IP fabric is typically used at the underlay layer.

NOTE: For information on Juniper Networks validated Clos-based Layer 3 IP fabric solution, see Solution Guide: Software as a Service.

BGP Design (Overlay)

At the overlay layer, similar decisions must be made. Again, the very nature of an IP fabric requires having multiple, equal-cost paths. In addition, you must consider the overlay protocol being used. This solution uses EVPN as the control-plane protocol for the overlay; given that EVPN uses MP-BGP for communication (signaling), BGP is again a logical choice for the overlay.

There is more than one way to design the overlay environment. Because this solution is controllerless, meaning there is no SDN controller in use, the network itself must perform both the underlay and overlay functions. This solution uses an IBGP overlay design with route reflection, as shown in Figure 5 on page 12. With this design, leaf devices within a given point of delivery (POD) share endpoint information upstream as EVPN routes to the spine devices, which act as route reflectors. The spine devices reflect the routes downstream to the other leaf devices.

Figure 5: BGP (EVPN) Overlay Design, Single POD

The spine devices can also advertise the EVPN routes to other PODs. As shown in Figure 6 on page 13, the spine devices use an MP-IBGP full mesh to share EVPN routes and provide inter-POD communication.
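On a spine device, the IBGP/EVPN overlay with route reflection described above might look like the following sketch. The AS number matches the one used later in the example (65200); the loopback addresses, cluster ID, and group names are hypothetical:

```
set routing-options autonomous-system 65200
set protocols bgp group overlay-rr type internal
set protocols bgp group overlay-rr local-address 10.0.0.1
set protocols bgp group overlay-rr family evpn signaling
set protocols bgp group overlay-rr cluster 10.0.0.1
set protocols bgp group overlay-rr neighbor 10.0.0.11
set protocols bgp group overlay-rr neighbor 10.0.0.12
set protocols bgp group overlay-spines type internal
set protocols bgp group overlay-spines local-address 10.0.0.1
set protocols bgp group overlay-spines family evpn signaling
set protocols bgp group overlay-spines neighbor 10.0.0.2
```

The first group reflects EVPN routes to the leaf devices in the local POD (each POD using its own cluster ID), while the second group forms the MP-IBGP full mesh between spine devices for inter-POD route exchange.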

Figure 6: BGP (EVPN) Overlay Design, Multiple PODs

NOTE: For more information about Clos-based IP fabric design, see Clos IP Fabrics with QFX5100 Switches.

EVPN Design

As noted above, this solution uses EVPN as the control-plane protocol for the overlay. EVPN runs between VXLAN gateways, and removes the need for VXLAN to handle the advertisement of MAC and IP reachability information in the data plane by enabling this functionality in the control plane.

A multitenant data center environment requires mechanisms to support traffic flows both within and between VNs. For this solution, intra-VXLAN traffic is handled at the leaf layer, with the QFX5100 switches acting as VXLAN Layer 2 gateways. Inter-VXLAN traffic is handled at the spine layer, with the QFX10002 switches acting as VXLAN Layer 3 gateways. Spine devices are configured with integrated routing and bridging (IRB) interfaces, which endpoints use as a default gateway for non-local traffic.

Intra-VXLAN forwarding is typically performed with the help of EVPN Type 2 route announcements, which advertise MAC addresses (along with their related IP addresses). Inter-VXLAN routing can also be performed using EVPN Type 2 route announcements, though it is increasingly performed with the help of EVPN Type 5 route announcements, which advertise entire IP prefixes.

Inter-VXLAN routing supports two operating modes: asymmetric and symmetric. These terms relate to the number of lookups performed by the devices at each end of a VXLAN tunnel. The following describes the two modes:

- Asymmetric mode: The sending device maintains explicit reachability to all remote endpoints. Benefit: just a single lookup is required on the receiving device (since the endpoint was already known by the sending device). Drawback: large environments can cause very large lookup tables.

- Symmetric mode: The sending device does not maintain explicit reachability to all remote endpoints; rather, it puts remote traffic into a single routing VXLAN tunnel and lets the receiving device perform the endpoint lookup locally. Benefit: reduces lookup table size. Drawback: an additional lookup is required by the receiving device (since the endpoint was not explicitly known by the sending device).

This solution uses symmetric mode for inter-VXLAN routing. This mode is generally preferred, as current Junos OS platforms can perform multiple lookups in hardware with no impact to line-rate performance.

NOTE: At the time of this writing, the QFX10002 and MX Series routers support asymmetric mode with EVPN route Type 2. The QFX10002 also supports symmetric mode with EVPN route Type 5.

NOTE: For more detailed information on inter-VXLAN routing, see Configuring EVPN Type 5 for QFX10000 Series Switches.

EVPN supports all-active (multipath) forwarding for endpoints, allowing them to be connected to two or more leaf devices for redundant connectivity, as shown in Figure 7 on page 14.

Figure 7: EVPN Server Multihoming

In EVPN terms, the links to a multihomed server are defined as a single Ethernet segment. Each Ethernet segment is identified using a unique Ethernet segment identifier (ESI).

NOTE: For more detailed information about EVPN ESIs, see EVPN Multihoming Overview.

VXLAN Design

VXLAN in the overlay has the following design characteristics:

- Each bridge domain / VXLAN network identifier (VNI) must have a VXLAN tunnel to each spine and leaf in a full mesh, that is, any-to-any connectivity.
- VXLAN is the data-plane encapsulation between servers.
- EVPN is used as the control plane for MAC address learning.

An example of the VXLAN design for this solution is shown in Figure 8 on page 15.

Figure 8: VXLAN Design

Tenant Design

This solution provides tenant separation and connectivity at the spine and leaf layers. Tenant design in the spine devices has the following design characteristics:

- Each tenant gets its own VRF.
- Each tenant VRF can have multiple bridge domains.
- Bridge domains within a VRF can switch and route freely.
- Bridge domains between VRFs must not switch and route.
- Each bridge domain must provide VXLAN Layer 2 gateway functionality.
- Each bridge domain will have a routed Layer 3 interface.
- IRB interfaces must be able to perform inter-VXLAN routing.
- Each spine device in the POD must be configured with identical VRF, bridge domain, and IRB components.

An example of the spine tenant design for this solution is shown in Figure 9.

Figure 9: Tenant Design in Spine Devices

By comparison, tenant design in the leaf devices is very simple, with the following design characteristics:

- Leaf devices are Layer 2 only (no VRFs or IRB interfaces).
- By default, all traffic is isolated per bridge domain.
- Although a given tenant might own BD1, BD2, and BD3, there are no VRFs on the leaf device.

An example of the leaf tenant design for this solution is shown in Figure 10 on page 16.

Figure 10: Tenant Design in Leaf Devices

IRB Design

Inter-VXLAN gateway functionality is implemented in this solution at the spine layer, using IRB interfaces. These interfaces have the following design characteristics:

- Every bridge domain must have a Layer 3 / routed interface that is associated with an IRB interface.
- Each bridge domain's IRB interface can use IPv4 addressing, IPv6 addressing, or both.
- Each spine device must use the same IPv4 and IPv6 IRB interface addresses (this reduces the public IP addresses wasted at scale).
- Each spine must implement EVPN anycast gateway.

An example of the IRB interface design for this solution is shown in Figure 11 on page 16.

Figure 11: IRB Interface Design on Spine Devices
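As a sketch of the anycast gateway idea, each spine can be configured with the same IRB address for a given bridge domain, so endpoints see one default gateway regardless of which spine handles the traffic. The unit number, address, VLAN, and VNI below are hypothetical:

```
set interfaces irb unit 100 family inet address 10.1.100.1/24
set vlans VLAN100 vlan-id 100
set vlans VLAN100 vxlan vni 1100
set vlans VLAN100 l3-interface irb.100
```

Depending on platform and Junos OS release, a virtual-gateway-address on the IRB interface is another common way to provide a shared redundant gateway; the validated configuration appears later in this example.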

Solution Implementation Summary

The following hardware equipment and key software features were used to create the IaaS solution described in the upcoming example:

Fabric
- Four QFX Q switches
- Underlay network: EBGP peering with the downstream (spine) devices using two-byte AS numbers
- BFD for all BGP sessions
- Traffic load balancing: EBGP multipath, resilient hashing, per-packet load balancing

Spine
- Four QFX Q switches
- Underlay network: EBGP peering with the upstream (fabric) devices using two-byte AS numbers; EBGP peering with the downstream (leaf) devices using two-byte AS numbers
- Overlay network: EVPN / IBGP full mesh between all spine devices; EVPN / IBGP route reflection to leaf devices (each spine device is a route reflector for leaf devices in its POD, and each POD is a separate cluster)
- BFD for all BGP sessions
- Traffic load balancing: EBGP multipath, resilient hashing, per-packet load balancing
- Nine VLANs (100 to 108) to illustrate intra-VLAN and inter-VLAN traffic using EVPN route Type 2
- Two VLANs (999 on Spine 1 and Spine 2, 888 on Spine 3 and Spine 4) to illustrate inter-VLAN traffic using EVPN route Type 5

Leaf

- Four QFX S switches
- Underlay network: EBGP peering with the upstream (spine) devices using two-byte AS numbers
- Overlay network: EVPN / IBGP peering with the upstream (spine) devices using two-byte AS numbers
- BFD for all BGP sessions
- Traffic load balancing: EBGP multipath, resilient hashing, per-packet load balancing
- Nine VLANs (100 to 108) to illustrate intra-VLAN and inter-VLAN traffic using EVPN route Type 2
- Two VLANs (999 on Leaf 1 and Leaf 2, 888 on Leaf 3 and Leaf 4) to illustrate inter-VLAN traffic using EVPN route Type 5

Servers / End hosts
- Bare-metal servers attached to leaf devices
- Traffic generator simulating BMS hosts, sending intra-VLAN and inter-VLAN traffic

Related Documentation
- Example: Configuring the IaaS: EVPN and VXLAN Solution on page 18

Example: Configuring the IaaS: EVPN and VXLAN Solution

This example describes how to build, configure, and verify a bare-metal server (BMS) network containing a BGP-based IP fabric underlay, supported by an EVPN and VXLAN overlay.

- Requirements on page 18
- Overview and Topology on page 19
- Configuring the IaaS: EVPN and VXLAN Solution on page 26
- Configuring Additional Features for the IaaS: EVPN and VXLAN Solution on page 51
- Verification on page 60

Requirements

Table 1 on page 19 lists the hardware and software components used in this example.

Table 1: Solution Hardware and Software Requirements

Device           Hardware           Software
Fabric devices   QFX Q              Junos OS Release 14.1X53-D30.3
Spine devices    QFX Q              Junos OS Release 15.1X53-D60.4
Leaf devices     QFX S              Junos OS Release 14.1X53-D35.3
Host emulation   Traffic generator

Overview and Topology

The topology used in this example consists of a series of QFX5100 and QFX10002 switches, as shown in Figure 12 on page 19.

Figure 12: IaaS: EVPN and VXLAN Solution - Underlay Topology

In this example, the fabric layer has four QFX Q switches, the spine layer has four QFX Q switches, and the leaf layer uses four QFX S switches. Leaf 1, Leaf 2, Spine 1, and Spine 2 are included in a single point of delivery (POD) named POD

1; and Leaf 3, Leaf 4, Spine 3, and Spine 4 are included in POD 2. Both data center PODs connect to the fabric layer, which provides inter-POD connectivity.

NOTE: This topology simulates conditions for PODs contained either in the same data center or PODs located in different data centers.

Two hosts are connected to each of Leaf 1, Leaf 2, and Leaf 3. One host is dual-homed to Leaf 3 and Leaf 4 through Switch 5, and one host is single-homed to Leaf 4.

This first diagram also represents the EBGP underlay for the solution, using an individual autonomous system number and a unique loopback address for each device for easy monitoring and troubleshooting of the network. The topology for the overlay is shown in Figure 13 on page 20.

Figure 13: IaaS: EVPN and VXLAN Solution - Overlay Topology

A full-mesh IBGP configuration connects the spine devices together, and all spine and leaf devices belong to a single autonomous system (65200). A route reflector cluster is assigned to each POD and enables the leaf devices within the POD to have redundant connections to the spine layer.

The example included in this solution explores the use of both Type 2 and Type 5 EVPN routes and contains configuration excerpts to enable you to select either option. In Figure 14 on page 21, Type 2 routes are distributed within the same VLAN.

Figure 14: IaaS: EVPN and VXLAN Solution - Type 2 Intra-VLAN Traffic

As shown, when traffic flows between hosts that are connected to the same leaf (1.1), the traffic stays local to the leaf and does not need to be sent to the upper layers.

To reach hosts connected to other leaf devices in the same POD (1.2), traffic travels between the leaf devices and spine devices across the IP fabric. Host traffic is switched using a VXLAN tunnel established between the leaf devices. The ingress leaf device encapsulates the host traffic with a VXLAN header, the traffic is switched using the outer header, and it travels over the spine layer to reach the other leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the destination host.

To reach hosts located in another POD (1.3), the traffic must be sent up through the leaf, spine, and fabric layers and then down through the spine and leaf layers in the second POD to reach the destination host. The VXLAN tunnel established between the leaf devices in the different PODs enables traffic to travel from the ingress leaf device, across the spine layer in the first POD, through the fabric layer, to the spine layer in the second POD, and to the egress leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the destination host.
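The Type 2 forwarding described above can be observed from the control plane. As a general sketch (the exact output fields vary by Junos OS release), the following operational commands on a leaf or spine device display the received EVPN MAC/IP routes, the local EVPN database, and the MAC entries learned over VXLAN tunnels:

```
show route table bgp.evpn.0
show evpn database
show ethernet-switching table
```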

Figure 15: IaaS: EVPN and VXLAN Solution - Type 2 Inter-VLAN Traffic

As shown, the process is the same for all three cases of inter-VLAN traffic (1.1, 1.2, and 1.3) because each requires Layer 3 routing. Host traffic containing an inner header is encapsulated with a VXLAN header and an outer header that lists the local spine device as the destination. The spine device strips the outer header, de-encapsulates the VXLAN header, performs a route lookup on the inner header, and forwards the traffic across an EVPN routing instance to the respective host using a VXLAN tunnel that references the appropriate leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the destination host.

In this example, VLANs 100 to 108 illustrate intra-VLAN and inter-VLAN traffic using EVPN route Type 2. As a final option, Figure 16 on page 23 shows how Type 5 routes are handled between VLANs.
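The spine's routed hop between VNIs can be sketched with two hypothetical tables. The VNI numbers follow this example's scheme (VLAN 100 maps to VNI 1000, VLAN 101 to VNI 1001); the tenant name, host subnets, and VTEP names are invented purely for illustration:

```python
# Hypothetical state on a spine device acting as the Layer 3 gateway.
vni_to_vrf = {1000: "tenant-a", 1001: "tenant-a"}  # both VNIs in one EVPN routing instance

# Per-VRF route table: destination subnet -> (egress leaf VTEP, egress VNI).
vrf_routes = {
    "tenant-a": {
        "10.1.100.0/24": ("leaf-1", 1000),
        "10.1.101.0/24": ("leaf-3", 1001),
    }
}

def route_inter_vlan(ingress_vni: int, dst_ip: str) -> tuple[str, int]:
    """Decap -> VRF lookup on the inner destination -> re-encap toward the egress leaf."""
    vrf = vni_to_vrf[ingress_vni]
    subnet = ".".join(dst_ip.split(".")[:3]) + ".0/24"  # toy /24 longest match
    return vrf_routes[vrf][subnet]

print(route_inter_vlan(1000, "10.1.101.25"))  # ('leaf-3', 1001)
```

The key point the sketch captures is that the traffic leaves the spine in a different VNI than it arrived in: the route lookup, not the bridge table, selects the egress VXLAN tunnel.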

Figure 16: IaaS: EVPN and VXLAN Solution - Type 2 and Type 5 Inter-VLAN Traffic

For the first two cases (1.1 and 1.2), inter-VLAN traffic is handled the same as shown in Figure 15 on page 22. However, when sending Type 5 inter-VLAN traffic between different data centers (1.3), the host traffic is encapsulated with a VXLAN header and an outer header that lists the local spine device as the destination. The local spine device de-encapsulates the VXLAN header, performs a route lookup on the inner header, and forwards the traffic across an EVPN routing instance to the remote spine device in the second POD by using a VXLAN header. The remote spine device de-encapsulates the packet and performs a route lookup in the respective routing instance based on the VNI number. The spine device then encapsulates the traffic and sends it across a VXLAN tunnel to the respective leaf device. The egress leaf device de-encapsulates the VXLAN header and switches the frame to the destination host.

In this example, VLANs 999 (Spine 1 and Spine 2) and 888 (Spine 3 and Spine 4) illustrate inter-VLAN traffic using EVPN route Type 5.

NOTE: At the time this guide was written, Type 5 routes could be used only for inter-VLAN topologies. To support intra-VLAN topologies, use Type 2.
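The difference in forwarding state between the two route types can be sketched as two lookup tables. This is a conceptual model, not the actual EVPN NLRI format, and the MAC addresses, subnet, and VNI values shown are invented for illustration:

```python
# Type 2 (MAC/IP advertisement): one entry per learned host MAC.
type2_routes = {
    "00:00:5e:00:01:01": {"vtep": "leaf-1", "vni": 1000},
    "00:00:5e:00:01:02": {"vtep": "leaf-4", "vni": 1000},
}

# Type 5 (IP prefix route): one entry can summarize an entire remote subnet,
# advertised here by the remote POD's spine devices.
type5_routes = {
    "10.2.0.0/16": {"vtep": "spine-3", "vni": 888},
}

# A Type 2 lookup needs the exact host MAC; a Type 5 lookup matches any
# host inside the prefix, so per-host state never leaves the remote POD.
print(type2_routes["00:00:5e:00:01:01"]["vtep"])  # leaf-1
print(type5_routes["10.2.0.0/16"]["vtep"])        # spine-3
```

This is why the Type 5 flow in Figure 16 terminates on the remote spine rather than tunneling leaf-to-leaf: the local POD only ever learns the summarizing prefix route, not the individual hosts behind it.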

Table 2 on page 24 lists the IPv4 addresses used in this example, Table 3 on page 24 displays the IPv6 addresses used in this example, and Table 4 on page 25 lists the loopback addresses and autonomous system numbers for the fabric, spine, and leaf devices.

Table 2: IPv4 Addressing

Fabric to spine point-to-point links: /24
Spine to leaf point-to-point links: /24
Loopback IP addresses (for all devices): /24
Anycast IPv4 addresses: a set of nine /16 addresses that increment the third octet and use .1 for the fourth octet
Server/traffic generator IPv4 host devices: a range of five /16 addresses (0-4) per host, with the host number represented in the tens place of the final octet. For example, Host 7 uses the five final-octet values 70 through 74.

Table 3: IPv6 Addressing

Anycast IPv6 addresses: a set of nine /80 addresses that increment the fifth double-octet and use :1 for the final double-octet:

2001:db8:10:1:100::1/80
2001:db8:10:1:101::1/80
2001:db8:10:1:102::1/80
2001:db8:10:1:103::1/80
2001:db8:10:1:104::1/80
2001:db8:10:1:105::1/80
2001:db8:10:1:106::1/80
2001:db8:10:1:107::1/80
2001:db8:10:1:108::1/80

Table 3: IPv6 Addressing (continued)

Server/traffic generator IPv6 host devices: a set of /80 addresses that increment the fifth double-octet and use :<210 + spine-number> for the final double-octet. For example, for Spine 1, 210 + 1 equals 211, so the corresponding IPv6 addresses are as follows:

2001:db8:10:1:100::211/80
2001:db8:10:1:101::211/80
2001:db8:10:1:102::211/80
2001:db8:10:1:103::211/80
2001:db8:10:1:104::211/80
2001:db8:10:1:105::211/80
2001:db8:10:1:106::211/80
2001:db8:10:1:107::211/80
2001:db8:10:1:108::211/80

Table 4: Loopback Addresses and Underlay ASNs for Fabric Devices, Spine Devices, and Leaf Devices

Device: Loopback Address, ASN

Fabric 1
Fabric 2
Fabric 3
Fabric 4
Spine 1: (underlay), (overlay)
Spine 2: (underlay), (overlay)
Spine 3: (underlay), (overlay)
Spine 4: (underlay), (overlay)
Leaf 1: (underlay), (overlay)
Leaf 2: (underlay), (overlay)

Table 4: Loopback Addresses and Underlay ASNs for Fabric Devices, Spine Devices, and Leaf Devices (continued)

Leaf 3: (underlay), (overlay)
Leaf 4: (underlay), (overlay)

Configuring the IaaS: EVPN and VXLAN Solution

NOTE: You can use Ansible scripts to generate a large portion of the IP fabric and EVPN VXLAN configurations. For more information, see: Ansible Junos Configuration for EVPN/VXLAN.

This section explains how to build out the leaf, spine, and fabric layers with an EBGP-based IP fabric underlay and an IBGP-based EVPN and VXLAN overlay for the solution. It includes the following sections:

Configuring Leaf Devices for the IaaS: EVPN and VXLAN Solution on page 26
Configuring Spine Devices for the IaaS: EVPN and VXLAN Solution on page 34
Configuring Fabric Devices for the IaaS: EVPN and VXLAN Solution on page 47
Configuring Host Multihoming on page 50

Configuring Leaf Devices for the IaaS: EVPN and VXLAN Solution

CLI Quick Configuration

To quickly configure the leaf devices, enter the following representative configuration statements on each device:

NOTE: The configuration shown here applies to Leaf 1.

[edit]
set interfaces xe-0/0/12 description "To Host 1"
set interfaces xe-0/0/12 unit 0 family ethernet-switching interface-mode trunk
set interfaces xe-0/0/12 unit 0 family ethernet-switching vlan members
set interfaces xe-0/0/12 unit 0 family ethernet-switching vlan members 999
set interfaces xe-0/0/13 description "To Host 5"
set interfaces xe-0/0/13 unit 0 family ethernet-switching interface-mode trunk
set interfaces xe-0/0/13 unit 0 family ethernet-switching vlan members
set interfaces xe-0/0/13 unit 0 family ethernet-switching vlan members 999
set interfaces et-0/0/50 description "To Spine 1"
set interfaces et-0/0/50 mtu 9192
set interfaces et-0/0/50 unit 0 family inet mtu 9000
set interfaces et-0/0/50 unit 0 family inet address /31

set interfaces et-0/0/51 description "To Spine 2"
set interfaces et-0/0/51 mtu 9192
set interfaces et-0/0/51 unit 0 family inet mtu 9000
set interfaces et-0/0/51 unit 0 family inet address /31
set interfaces lo0 unit 0 family inet address /32
set routing-options forwarding-table export pfe-ecmp
set routing-options router-id
set protocols bgp group underlay-ipfabric type external
set protocols bgp group underlay-ipfabric mtu-discovery
set protocols bgp group underlay-ipfabric import bgp-ipclos-in
set protocols bgp group underlay-ipfabric export bgp-ipclos-out
set protocols bgp group underlay-ipfabric local-as
set protocols bgp group underlay-ipfabric bfd-liveness-detection minimum-interval 350
set protocols bgp group underlay-ipfabric bfd-liveness-detection multiplier 3
set protocols bgp group underlay-ipfabric bfd-liveness-detection session-mode automatic
set protocols bgp group underlay-ipfabric multipath multiple-as
set protocols bgp group underlay-ipfabric neighbor peer-as
set protocols bgp group underlay-ipfabric neighbor peer-as
set protocols bgp log-updown
set protocols bgp graceful-restart
set protocols bgp group overlay-evpn type internal
set protocols bgp group overlay-evpn local-address
set protocols bgp group overlay-evpn import OVERLAY-IN
set protocols bgp group overlay-evpn family evpn signaling
set protocols bgp group overlay-evpn local-as
set protocols bgp group overlay-evpn bfd-liveness-detection minimum-interval 350
set protocols bgp group overlay-evpn bfd-liveness-detection multiplier 3
set protocols bgp group overlay-evpn bfd-liveness-detection session-mode automatic
set protocols bgp group overlay-evpn multipath
set protocols bgp group overlay-evpn neighbor
set protocols bgp group overlay-evpn neighbor
set protocols evpn vni-options vni 1000 vrf-target export target:1:1000
set protocols evpn vni-options vni 1001 vrf-target export target:1:1001
set protocols evpn vni-options vni 1002 vrf-target export target:1:1002
set protocols evpn vni-options vni 1003 vrf-target export target:1:1003
set protocols evpn vni-options vni 1004 vrf-target export target:1:1004
set protocols evpn vni-options vni 1005 vrf-target export target:1:1005
set protocols evpn vni-options vni 1006 vrf-target export target:1:1006
set protocols evpn vni-options vni 1007 vrf-target export target:1:1007
set protocols evpn vni-options vni 1008 vrf-target export target:1:1008
set protocols evpn vni-options vni 1999 vrf-target export target:1:1999
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list 1000
set protocols evpn extended-vni-list 1001
set protocols evpn extended-vni-list 1002
set protocols evpn extended-vni-list 1003
set protocols evpn extended-vni-list 1004
set protocols evpn extended-vni-list 1005
set protocols evpn extended-vni-list 1006
set protocols evpn extended-vni-list 1007
set protocols evpn extended-vni-list 1008
set protocols evpn extended-vni-list 1999
set protocols evpn multicast-mode ingress-replication
set protocols lldp interface all
set policy-options community com1000 members target:1:1000
set policy-options community com1001 members target:1:1001

set policy-options community com1002 members target:1:1002
set policy-options community com1003 members target:1:1003
set policy-options community com1004 members target:1:1004
set policy-options community com1005 members target:1:1005
set policy-options community com1006 members target:1:1006
set policy-options community com1007 members target:1:1007
set policy-options community com1008 members target:1:1008
set policy-options community com1999 members target:1:1999
set policy-options community comm-leaf_esi members target:9999:9999
set policy-options policy-statement bgp-ipclos-in term loopbacks from route-filter /16 orlonger
set policy-options policy-statement bgp-ipclos-in term loopbacks then accept
set policy-options policy-statement bgp-ipclos-out term loopback from protocol direct
set policy-options policy-statement bgp-ipclos-out term loopback from route-filter /32 orlonger
set policy-options policy-statement bgp-ipclos-out term loopback then next-hop self
set policy-options policy-statement bgp-ipclos-out term loopback then accept
set policy-options policy-statement bgp-ipclos-out term reject then reject
set policy-options policy-statement LEAF-IN term import_leaf_esi from community comm-leaf_esi
set policy-options policy-statement LEAF-IN term import_leaf_esi then accept
set policy-options policy-statement LEAF-IN term import_vni1000 from community com1000
set policy-options policy-statement LEAF-IN term import_vni1000 then accept
set policy-options policy-statement LEAF-IN term import_vni1001 from community com1001
set policy-options policy-statement LEAF-IN term import_vni1001 then accept
set policy-options policy-statement LEAF-IN term import_vni1002 from community com1002
set policy-options policy-statement LEAF-IN term import_vni1002 then accept
set policy-options policy-statement LEAF-IN term import_vni1003 from community com1003
set policy-options policy-statement LEAF-IN term import_vni1003 then accept
set policy-options policy-statement LEAF-IN term import_vni1004 from community com1004
set policy-options policy-statement LEAF-IN term import_vni1004 then accept
set policy-options policy-statement LEAF-IN term import_vni1005 from community com1005
set policy-options policy-statement LEAF-IN term import_vni1005 then accept
set policy-options policy-statement LEAF-IN term import_vni1006 from community com1006
set policy-options policy-statement LEAF-IN term import_vni1006 then accept
set policy-options policy-statement LEAF-IN term import_vni1007 from community com1007
set policy-options policy-statement LEAF-IN term import_vni1007 then accept
set policy-options policy-statement LEAF-IN term import_vni1008 from community com1008
set policy-options policy-statement LEAF-IN term import_vni1008 then accept
set policy-options policy-statement LEAF-IN term import_vni1999 from community com1999
set policy-options policy-statement LEAF-IN term import_vni1999 then accept
set policy-options policy-statement LEAF-IN term default then reject
set policy-options policy-statement OVERLAY-IN term reject-remote-gw from family evpn
set policy-options policy-statement OVERLAY-IN term reject-remote-gw from next-hop

set policy-options policy-statement OVERLAY-IN term reject-remote-gw from next-hop
set policy-options policy-statement OVERLAY-IN term reject-remote-gw from nlri-route-type 1
set policy-options policy-statement OVERLAY-IN term reject-remote-gw from nlri-route-type 2
set policy-options policy-statement OVERLAY-IN term reject-remote-gw then reject
set policy-options policy-statement OVERLAY-IN term accept-all then accept
set policy-options policy-statement pfe-ecmp then load-balance per-packet
set switch-options route-distinguisher :1
set switch-options vrf-import LEAF-IN
set switch-options vrf-target target:9999:9999
set switch-options vtep-source-interface lo0.0
set vlans v1000 vlan-id 100
set vlans v1000 vxlan vni 1000
set vlans v1000 vxlan ingress-node-replication
set vlans v1001 vlan-id 101
set vlans v1001 vxlan vni 1001
set vlans v1001 vxlan ingress-node-replication
set vlans v1002 vlan-id 102
set vlans v1002 vxlan vni 1002
set vlans v1002 vxlan ingress-node-replication
set vlans v1003 vlan-id 103
set vlans v1003 vxlan vni 1003
set vlans v1003 vxlan ingress-node-replication
set vlans v1004 vlan-id 104
set vlans v1004 vxlan vni 1004
set vlans v1004 vxlan ingress-node-replication
set vlans v1005 vlan-id 105
set vlans v1005 vxlan vni 1005
set vlans v1005 vxlan ingress-node-replication
set vlans v1006 vlan-id 106
set vlans v1006 vxlan vni 1006
set vlans v1006 vxlan ingress-node-replication
set vlans v1007 vlan-id 107
set vlans v1007 vxlan vni 1007
set vlans v1007 vxlan ingress-node-replication
set vlans v1008 vlan-id 108
set vlans v1008 vxlan vni 1008
set vlans v1008 vxlan ingress-node-replication

Step-by-Step Procedure

To configure the leaf devices:

1. Configure the Ethernet interfaces that reach the hosts:

[edit]
user@leaf-1# set interfaces xe-0/0/12 description "To Host 1"
user@leaf-1# set interfaces xe-0/0/12 unit 0 family ethernet-switching interface-mode trunk
user@leaf-1# set interfaces xe-0/0/12 unit 0 family ethernet-switching vlan members
user@leaf-1# set interfaces xe-0/0/12 unit 0 family ethernet-switching vlan members 999
user@leaf-1# set interfaces xe-0/0/13 description "To Host 5"
user@leaf-1# set interfaces xe-0/0/13 unit 0 family ethernet-switching interface-mode trunk

user@leaf-1# set interfaces xe-0/0/13 unit 0 family ethernet-switching vlan members
user@leaf-1# set interfaces xe-0/0/13 unit 0 family ethernet-switching vlan members 999

2. Configure the interfaces connecting the leaf device to the spine devices:

[edit]
user@leaf-1# set interfaces et-0/0/50 description "To Spine 1"
user@leaf-1# set interfaces et-0/0/50 mtu 9192
user@leaf-1# set interfaces et-0/0/50 unit 0 family inet mtu 9000
user@leaf-1# set interfaces et-0/0/50 unit 0 family inet address /31
user@leaf-1# set interfaces et-0/0/51 description "To Spine 2"
user@leaf-1# set interfaces et-0/0/51 mtu 9192
user@leaf-1# set interfaces et-0/0/51 unit 0 family inet mtu 9000
user@leaf-1# set interfaces et-0/0/51 unit 0 family inet address /31

3. Configure the loopback interface with a reachable IPv4 address. This loopback address is the tunnel source address.

[edit]
user@leaf-1# set interfaces lo0 unit 0 family inet address /32

4. Configure the router ID for the leaf device:

[edit]
user@leaf-1# set routing-options router-id

5. Configure an EBGP-based underlay between the leaf and spine devices, and enable BFD and LLDP:

[edit]
user@leaf-1# set protocols bgp group underlay-ipfabric type external
user@leaf-1# set protocols bgp group underlay-ipfabric mtu-discovery
user@leaf-1# set protocols bgp group underlay-ipfabric import bgp-ipclos-in
user@leaf-1# set protocols bgp group underlay-ipfabric export bgp-ipclos-out
user@leaf-1# set protocols bgp group underlay-ipfabric local-as
user@leaf-1# set protocols bgp group underlay-ipfabric bfd-liveness-detection minimum-interval 350
user@leaf-1# set protocols bgp group underlay-ipfabric bfd-liveness-detection multiplier 3
user@leaf-1# set protocols bgp group underlay-ipfabric bfd-liveness-detection session-mode automatic
user@leaf-1# set protocols bgp group underlay-ipfabric multipath multiple-as
user@leaf-1# set protocols bgp group underlay-ipfabric neighbor peer-as
user@leaf-1# set protocols bgp group underlay-ipfabric neighbor peer-as
user@leaf-1# set protocols lldp interface all

6. Create a routing policy that advertises and receives only loopback addresses from the IP fabric and EBGP underlay:

[edit]
user@leaf-1# set policy-options policy-statement bgp-ipclos-in term loopbacks from route-filter /16 orlonger
user@leaf-1# set policy-options policy-statement bgp-ipclos-in term loopbacks then accept
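The per-VLAN stanzas in the quick configuration are mechanical: VLAN 100 maps to VNI 1000, VLAN 101 to VNI 1001, and so on through VLAN 108, each with a matching target:1:<vni> route target, com<vni> community, and LEAF-IN import term. In the spirit of the Ansible note above, such repetitive stanzas can be generated rather than typed by hand. A minimal Python sketch of the idea (illustrative only; this is not one of the referenced Ansible scripts):

```python
def leaf_vlan_config(vlan_ids=range(100, 109)):
    """Emit the repetitive per-VLAN set commands for one leaf device.

    Follows this example's scheme: VLAN n -> VNI (n + 900) -> community
    com<vni> -> route target target:1:<vni> -> LEAF-IN import term.
    """
    lines = []
    for vlan in vlan_ids:
        vni = vlan + 900
        lines += [
            f"set vlans v{vni} vlan-id {vlan}",
            f"set vlans v{vni} vxlan vni {vni}",
            f"set vlans v{vni} vxlan ingress-node-replication",
            f"set protocols evpn extended-vni-list {vni}",
            f"set protocols evpn vni-options vni {vni} vrf-target export target:1:{vni}",
            f"set policy-options community com{vni} members target:1:{vni}",
            f"set policy-options policy-statement LEAF-IN term import_vni{vni} from community com{vni}",
            f"set policy-options policy-statement LEAF-IN term import_vni{vni} then accept",
        ]
    return lines

config = leaf_vlan_config()
print(config[0])    # set vlans v1000 vlan-id 100
print(len(config))  # 72 commands for VLANs 100-108
```

Generating the stanzas also guarantees that the VNI, community, and policy-term names stay consistent with one another, which is easy to break when editing nine near-identical blocks manually.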


Internet Engineering Task Force (IETF) Request for Comments: N. Bitar Nokia R. Shekhar. Juniper. J. Uttaro AT&T W. Henderickx Nokia March 2018 Internet Engineering Task Force (IETF) Request for Comments: 8365 Category: Standards Track ISSN: 2070-1721 A. Sajassi, Ed. Cisco J. Drake, Ed. Juniper N. Bitar Nokia R. Shekhar Juniper J. Uttaro AT&T

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring DCBX Application Protocol TLV Exchange Release NCE 63 Modified: 2016-08-01 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring a Single SRX Series Device in a Branch Office Modified: 2017-01-23 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net

More information

Extreme Networks How to Build Scalable and Resilient Fabric Networks

Extreme Networks How to Build Scalable and Resilient Fabric Networks Extreme Networks How to Build Scalable and Resilient Fabric Networks Mikael Holmberg Distinguished Systems Engineer Fabrics MLAG IETF TRILL Cisco FabricPath Extreme (Brocade) VCS Juniper QFabric IEEE Fabric

More information

Virtual Extensible LAN and Ethernet Virtual Private Network

Virtual Extensible LAN and Ethernet Virtual Private Network Virtual Extensible LAN and Ethernet Virtual Private Network Contents Introduction Prerequisites Requirements Components Used Background Information Why you need a new extension for VLAN? Why do you chose

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring RSVP-Signaled Point-to-Multipoint LSPs on Logical Systems Modified: 2017-01-18 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000

More information

Open Compute Network Operating System Version 1.1

Open Compute Network Operating System Version 1.1 Solution Guide Open Compute Network Operating System Version 1.1 Data Center Solution - EVPN with VXLAN 2016 IP Infusion Inc. All Rights Reserved. This documentation is subject to change without notice.

More information

MP-BGP VxLAN, ACI & Demo. Brian Kvisgaard System Engineer, CCIE SP #41039 November 2017

MP-BGP VxLAN, ACI & Demo. Brian Kvisgaard System Engineer, CCIE SP #41039 November 2017 MP-BGP VxLAN, ACI & Demo Brian Kvisgaard System Engineer, CCIE SP #41039 November 2017 Datacenter solutions Programmable Fabric Classic Ethernet VxLAN-BGP EVPN standard-based Cisco DCNM Automation Modern

More information

H3C S7500E-X Switch Series

H3C S7500E-X Switch Series H3C S7500E-X Switch Series EVPN Configuration Guide Hangzhou H3C Technologies Co., Ltd. http://www.h3c.com Software version: S7500EX-CMW710-R7523P01 Document version: 6W100-20160830 Copyright 2016, Hangzhou

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Load Balancing Layer 3 VPN Traffic While Simultaneously Using IP Header Filtering Modified: 2017-01-19 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089

More information

Enterprise. Nexus 1000V. L2/L3 Fabric WAN/PE. Customer VRF. MPLS Backbone. Service Provider Data Center-1 Customer VRF WAN/PE OTV OTV.

Enterprise. Nexus 1000V. L2/L3 Fabric WAN/PE. Customer VRF. MPLS Backbone. Service Provider Data Center-1 Customer VRF WAN/PE OTV OTV. 2 CHAPTER Cisco's Disaster Recovery as a Service (DRaaS) architecture supports virtual data centers that consist of a collection of geographically-dispersed data center locations. Since data centers are

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring the Broadband Edge as a Service Node Within Seamless MPLS Network Designs Modified: 2016-07-29 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California

More information

Designing Mul+- Tenant Data Centers using EVPN- IRB. Neeraj Malhotra, Principal Engineer, Cisco Ahmed Abeer, Technical Marke<ng Engineer, Cisco

Designing Mul+- Tenant Data Centers using EVPN- IRB. Neeraj Malhotra, Principal Engineer, Cisco Ahmed Abeer, Technical Marke<ng Engineer, Cisco Designing Mul+- Tenant Data Centers using EVPN- IRB Neeraj Malhotra, Principal Engineer, Cisco Ahmed Abeer, Technical Marke

More information

OPEN CONTRAIL ARCHITECTURE GEORGIA TECH SDN EVENT

OPEN CONTRAIL ARCHITECTURE GEORGIA TECH SDN EVENT OPEN CONTRAIL ARCHITECTURE GEORGIA TECH SDN EVENT sdn-and-nfv-technical---georgia-tech---sep-2013---v2 Bruno Rijsman, Distinguished Engineer 24 September 2013 Use Cases 2 Copyright 2013 Juniper Networks,

More information

Configuring MPLS and EoMPLS

Configuring MPLS and EoMPLS 37 CHAPTER This chapter describes how to configure multiprotocol label switching (MPLS) and Ethernet over MPLS (EoMPLS) on the Catalyst 3750 Metro switch. MPLS is a packet-switching technology that integrates

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring IS-IS Dual Stacking of IPv4 and IPv6 Unicast Addresses Release NCE0068 Modified: 2017-01-20 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089

More information

Implementing VXLAN in DataCenter

Implementing VXLAN in DataCenter Implementing VXLAN in DataCenter LTRDCT-1223 Lilian Quan Technical Marketing Engineering, INSBU Erum Frahim Technical Leader, ecats John Weston Technical Leader, ecats Why Overlays? Robust Underlay/Fabric

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring CoS Hierarchical Port Scheduling Release NCE 71 Modified: 2016-12-16 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring Multichassis Link Aggregation on a QFX Series Switch Release NCE 64 Modified: 2016-08-01 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089

More information

BGP IN THE DATA CENTER

BGP IN THE DATA CENTER BGP IN THE DATA CENTER A PACKET DESIGN E-BOOK Contents Page 3 : BGP the Savior Page 4 : Traditional Data Center Architecture Traffic Flows Scalability Spanning Tree Protocol (STP) Page 6 : CLOS Architecture

More information

BESS work on control planes for DC overlay networks A short overview

BESS work on control planes for DC overlay networks A short overview BESS work on control planes for DC overlay networks A short overview Jorge Rabadan IETF99, July 2017 Prague 1 Agenda EVPN in a nutshell BESS work on EVPN for NVO3 networks EVPN in the industry today Future

More information

Overview. Overview. OTV Fundamentals. OTV Terms. This chapter provides an overview for Overlay Transport Virtualization (OTV) on Cisco NX-OS devices.

Overview. Overview. OTV Fundamentals. OTV Terms. This chapter provides an overview for Overlay Transport Virtualization (OTV) on Cisco NX-OS devices. This chapter provides an overview for Overlay Transport Virtualization (OTV) on Cisco NX-OS devices., page 1 Sample Topologies, page 6 OTV is a MAC-in-IP method that extends Layer 2 connectivity across

More information

Segment Routing on Cisco Nexus 9500, 9300, 9200, 3200, and 3100 Platform Switches

Segment Routing on Cisco Nexus 9500, 9300, 9200, 3200, and 3100 Platform Switches White Paper Segment Routing on Cisco Nexus 9500, 9300, 9200, 3200, and 3100 Platform Switches Authors Ambrish Mehta, Cisco Systems Inc. Haider Salman, Cisco Systems Inc. 2017 Cisco and/or its affiliates.

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Deploying Scalable Services on an MX Series Router Acting as a Broadband Network Gateway Release NCE0062 Modified: 2017-01-24 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale,

More information

Building Blocks for Cloud Networks

Building Blocks for Cloud Networks Building Blocks for Cloud Networks Aldrin Isaac, Cross Portfolio Architecture, Juniper SPLM December 12, 2017 This presentation is an overview of the key network building blocks for multi-service cloud

More information

Ethernet VPN (EVPN) and Provider Backbone Bridging-EVPN: Next Generation Solutions for MPLS-based Ethernet Services. Introduction and Application Note

Ethernet VPN (EVPN) and Provider Backbone Bridging-EVPN: Next Generation Solutions for MPLS-based Ethernet Services. Introduction and Application Note White Paper Ethernet VPN (EVPN) and Provider Backbone Bridging-EVPN: Next Generation Solutions for MPLS-based Ethernet Services Introduction and Application Note Last Updated: 5/2014 Ethernet VPN (EVPN)

More information

Contrail Cloud Platform Architecture

Contrail Cloud Platform Architecture Contrail Cloud Platform Architecture Release 13.0 Modified: 2018-08-23 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net Juniper Networks, the Juniper

More information

Feature Information for BGP Control Plane, page 1 BGP Control Plane Setup, page 1. Feature Information for BGP Control Plane

Feature Information for BGP Control Plane, page 1 BGP Control Plane Setup, page 1. Feature Information for BGP Control Plane Feature Information for, page 1 Setup, page 1 Feature Information for Table 1: Feature Information for Feature Releases Feature Information PoAP diagnostics 7.2(0)N1(1) Included a new section on POAP Diagnostics.

More information

IP fabrics - reloaded

IP fabrics - reloaded IP fabrics - reloaded Joerg Ammon Senior Principal Systems Engineer 2017-11-09 2017 Extreme Networks, Inc. All rights reserved Extreme Networks Acquisition update Oct 30, 2017:

More information

Implementing MPLS VPNs over IP Tunnels

Implementing MPLS VPNs over IP Tunnels The MPLS VPNs over IP Tunnels feature lets you deploy Layer 3 Virtual Private Network (L3VPN) services, over an IP core network, using L2TPv3 multipoint tunneling instead of MPLS. This allows L2TPv3 tunnels

More information

Contrail Cloud Platform Architecture

Contrail Cloud Platform Architecture Contrail Cloud Platform Architecture Release 10.0 Modified: 2018-04-04 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000 www.juniper.net Juniper Networks, the Juniper

More information

Provisioning Overlay Networks

Provisioning Overlay Networks This chapter has the following sections: Using Cisco Virtual Topology System, page 1 Creating Overlays, page 2 Creating Network using VMware, page 3 Creating Subnetwork using VMware, page 4 Creating Routers

More information

WAN. Core Routing Module. Data Cente r LAB. Internet. Today: MPLS, OSPF, BGP Future: OSPF, BGP. Today: L2VPN, L3VPN. Future: VXLAN

WAN. Core Routing Module. Data Cente r LAB. Internet. Today: MPLS, OSPF, BGP Future: OSPF, BGP. Today: L2VPN, L3VPN. Future: VXLAN 150000 100000 50000 0 Trident+ Trident II NG 300 200 100 IPv4 FIB LPM IPv6 FIB LPM 0 Trident+ Trident II or + NG LAB Data Cente r Internet WAN Bandwidth in 10G Increment 40GE Ports 10GE Ports 100GE Ports

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Validated Reference - Business Edge Solution - Device R-10 Release 1.0 Published: 2014-03-31 Juniper Networks, Inc. 1194 North Mathilda Avenue Sunnyvale, California 94089

More information

Configuring MPLS L3VPN

Configuring MPLS L3VPN Contents Configuring MPLS L3VPN 1 MPLS L3VPN overview 1 Introduction to MPLS L3VPN 1 MPLS L3VPN concepts 2 MPLS L3VPN packet forwarding 5 MPLS L3VPN networking schemes 5 MPLS L3VPN routing information

More information

EVPN for VXLAN Tunnels (Layer 3)

EVPN for VXLAN Tunnels (Layer 3) EVPN for VXLAN Tunnels (Layer 3) In This Chapter This section provides information about EVPN for VXLAN tunnels (Layer 3). Topics in this section include: Applicability on page 312 Overview on page 313

More information

Hochverfügbarkeit in Campusnetzen

Hochverfügbarkeit in Campusnetzen Hochverfügbarkeit in Campusnetzen Für die deutsche Airheads Community 04. Juli 2017, Tino H. Seifert, System Engineer Aruba Differences between Campus Edge and Campus Core Campus Edge In many cases no

More information

Pluribus Adaptive Cloud Fabric

Pluribus Adaptive Cloud Fabric Product Overview Adaptive Cloud Fabric Powering the Software-Defined Enterprise Highlights Completely software enabled and built on open networking platforms Powered by the Netvisor ONE network Operating

More information

Network Configuration Example

Network Configuration Example Network Configuration Example Configuring a Routing Matrix with a TX Matrix Plus Router in Mixed Mode Modified: 2016-12-13 Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 408-745-2000

More information

MPLS L3VPN. The MPLS L3VPN model consists of three kinds of devices: PE CE Site 2. Figure 1 Network diagram for MPLS L3VPN model

MPLS L3VPN. The MPLS L3VPN model consists of three kinds of devices: PE CE Site 2. Figure 1 Network diagram for MPLS L3VPN model is a kind of PE-based L3VPN technology for service provider VPN solutions. It uses BGP to advertise VPN routes and uses to forward VPN packets on service provider backbones. provides flexible networking

More information

Building Data Center Networks with VXLAN EVPN Overlays Part I

Building Data Center Networks with VXLAN EVPN Overlays Part I BRKDCT-2949 Building Data Center Networks with VXLAN EVPN Overlays Part I Lukas Krattiger, Principal Engineer Cisco Spark How Questions? Use Cisco Spark to communicate with the speaker after the session

More information

Cisco Dynamic Fabric Automation Architecture. Miroslav Brzek, Systems Engineer

Cisco Dynamic Fabric Automation Architecture. Miroslav Brzek, Systems Engineer Cisco Dynamic Fabric Automation Architecture Miroslav Brzek, Systems Engineer mibrzek@cisco.com Agenda DFA Overview Optimized Networking Fabric Properties Control Plane Forwarding Plane Virtual Fabrics

More information

Internet Engineering Task Force (IETF) Request for Comments: 8014 Category: Informational. M. Lasserre Independent T. Narten IBM December 2016

Internet Engineering Task Force (IETF) Request for Comments: 8014 Category: Informational. M. Lasserre Independent T. Narten IBM December 2016 Internet Engineering Task Force (IETF) Request for Comments: 8014 Category: Informational ISSN: 2070-1721 D. Black Dell EMC J. Hudson L. Kreeger M. Lasserre Independent T. Narten IBM December 2016 An Architecture

More information

WAN Edge MPLSoL2 Service

WAN Edge MPLSoL2 Service 4 CHAPTER While Layer 3 VPN services are becoming increasing popular as a primary connection for the WAN, there are a much larger percentage of customers still using Layer 2 services such Frame-Relay (FR).

More information

Pluribus Adaptive Cloud Fabric Powering the Software-Defined Enterprise

Pluribus Adaptive Cloud Fabric Powering the Software-Defined Enterprise Adaptive Cloud Fabric Powering the Software-Defined Enterprise Highlights Completely software enabled and built on open networking platforms Powered by the Netvisor ONE network Operating System Eliminates

More information

White Paper. Huawei Campus Switches VXLAN Technology. White Paper

White Paper. Huawei Campus Switches VXLAN Technology. White Paper White Paper Huawei Campus Switches VXLAN Technology White Paper 1 Terms Abbreviation VXLAN NVo3 BUM VNI VM VTEP SDN Full English Name Virtual Extensible Local Area Network Network Virtualization over L3

More information

Building Blocks in EVPN VXLAN for Multi-Service Fabrics. Aldrin Isaac Co-author RFC7432 Juniper Networks

Building Blocks in EVPN VXLAN for Multi-Service Fabrics. Aldrin Isaac Co-author RFC7432 Juniper Networks Building Blocks in EVPN VXLAN for Multi-Service Fabrics Aldrin Isaac Co-author RFC7432 Juniper Networks Network Subsystems Network Virtualization Bandwidth Broker TE LAN Fabric WAN Fabric LAN WAN EVPN

More information

BGP mvpn BGP safi IPv4

BGP mvpn BGP safi IPv4 The BGP mvpn BGP safi 129 IPv4 feature provides the capability to support multicast routing in the service provider s core IPv4 network This feature is needed to support BGP-based MVPNs BGP MVPN provides

More information

VXLAN EVPN Automation with ODL NIC. Presented by: Shreyans Desai, Serro Yrineu Rodrigues, Lumina Networks

VXLAN EVPN Automation with ODL NIC. Presented by: Shreyans Desai, Serro Yrineu Rodrigues, Lumina Networks VXLAN EVPN Automation with ODL NIC Presented by: Shreyans Desai, Serro Yrineu Rodrigues, Lumina Networks Agenda Use-Case - Why we are doing this? What is VXLAN / EVPN? Define VXLAN, BGP and EVPN configuration

More information