EXTREME VALIDATED DESIGN: Extreme IP Fabric Architecture


EXTREME VALIDATED DESIGN, April 2018

2 2018, Extreme Networks, Inc. All Rights Reserved. Extreme Networks and the Extreme Networks logo are trademarks or registered trademarks of Extreme Networks, Inc. in the United States and/or other countries. All other names are the property of their respective owners. For additional information on Extreme Networks Trademarks please see Specifications and product availability are subject to change without notice. 2017, Brocade Communications Systems, Inc. All Rights Reserved. Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other countries. Other brands, product names, or service names mentioned of Brocade Communications Systems, Inc. are listed at brocade-legal-intellectual-property/brocade-legal-trademarks.html. Other marks may belong to third parties. Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government. The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the accuracy of this document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that accompany it. The product described by this document may contain open source software covered by the GNU General Public License or other open source license agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and obtain a copy of the programming source code, please visit 2

Contents

List of Figures
Preface
  Extreme Validated Designs
  Purpose of This Document
  Target Audience
  Authors
  Document History
  About Extreme Networks
Introduction
  Terminology
IP Fabric Overview
  Evolution of Data Center Fabrics
    Layer 2 Aggregation
    Layer 2 Fabric Architectures
    IP Fabric (AKA Routing to ToR)
  Leaf-Spine Architecture
    Leaf-Spine Layer 3 Clos Topology (Two-Tier)
    Optimized 5-Stage Layer 3 Clos Topology (Three-Tier)
  IP Fabric Control Plane
    Pervasive ebgp
    ibgp for Routing Inside a PoD
IP Fabric Validated Designs
  Pervasive ebgp
  Hardware Matrix
  IP Fabric Configuration
    Common Configuration on All Nodes in the Fabric
    Node ID Configuration for VDX Platforms
    Fabric Infrastructure Links
    Loopback Interfaces and Router ID
    Server-Facing Links and Networks on ToRs
      VDX ToRs
      SLX 9140 ToRs
    MCT on SLX 9540 Edge Leafs
    ebgp Control-Plane Configuration
      Deployment Model-1: ebgp Configuration for Optimized 5-Stage Clos
      Deployment Model-2: ebgp Configuration for 3-Stage Clos Fabric
  Illustration Examples
    Network Reachability Between Racks and PoDs
    Verification
Design Considerations
  Scale

  Recommendations for ISL Ports in a VDX vlag Pair Leaf
  Recommendations for ICL Ports in an SLX MCT Pair
  Generalized TTL Security Mechanism for BGP (GTSM)
Appendix
  Configuration of the Nodes
    SLX 9140 MCT Pair ToR/Leaf
      Peer
      Peer
    SLX 9240 Spine
    SLX 9850 Super-Spine
    SLX 9540 Edge Leaf
    VDX vlag Pair Leaf
References

List of Figures

Figure 1 on page 12: L2 Aggregation
Figure 2 on page 13: L2 Fabric
Figure 3 on page 14: IP Fabric (Routing to ToR)
Figure 4 on page 15: Leaf-Spine L3 Clos Topology
Figure 5 on page 16: Optimized 5-Stage L3 Clos Topology
Figure 6 on page 18: IP Fabric with ebgp as the Control Protocol
Figure 7 on page 19: IP Fabric with ibgp as the Control Protocol Inside a PoD
Figure 8 on page 21: Pervasive ebgp in an Optimized 5-Stage IP Fabric
Figure 9 on page 48: Connectivity Between the Racks and PoDs


Preface

Extreme Validated Designs
Purpose of This Document
Target Audience
Authors
Document History
About Extreme Networks

Extreme Validated Designs
Helping customers consider, select, and deploy network solutions for current and planned needs is our mission. Extreme Validated Designs offer a fast track to success by accelerating that process. Validated designs are repeatable reference network architectures that have been engineered and tested to address specific use cases and deployment scenarios. They document systematic steps and best practices that help administrators, architects, and engineers plan, design, and deploy physical and virtual network technologies. Leveraging these validated network architectures accelerates deployment speed, increases reliability and predictability, and reduces risk.
Extreme Validated Designs incorporate network and security principles and technologies across the ecosystem of service provider, data center, campus, and wireless networks. Each Extreme Validated Design provides a standardized network architecture for a specific use case, incorporating technologies and feature sets across Extreme products and partner offerings.
All Extreme Validated Designs follow best-practice recommendations and allow for customer-specific network architecture variations that deliver additional benefits. The variations are documented and supported to provide ongoing value, and all Extreme Validated Designs are continuously maintained to ensure that every design remains supported as new products and software versions are introduced. By accelerating time-to-value, reducing risk, and offering the freedom to incorporate creative, supported variations, these validated network architectures provide a tremendous value-add for building and growing a flexible network infrastructure.

Purpose of This Document
This Extreme Validated Design provides guidance for designing and implementing an IP fabric in a data center network using Extreme hardware and software. It details the reference architecture for deploying an IP fabric using the SLX platform as the spine and super-spine in 3-stage and 5-stage Clos topologies. The design practices documented here follow best-practice recommendations, but variations to the design are supported as well.

Target Audience
This document is written for Extreme systems engineers, partners, and customers who design, implement, and support data center networks. It is intended for experienced data center architects and engineers and assumes that the reader has a good understanding of data center switching and routing features.

Authors
Krish Padmanabhan, Sr. Principal Engineer, System and Solution Engineering
Eldho Jacob, Principal Engineer, System and Solution Engineering

The authors would like to acknowledge the following for their technical guidance in developing this validated design:
Abdul Khader, Director, System and Solution Engineering
Vivek Baveja, Director, Product Management

Document History
Date            Part Number    Description
December 8,                    Initial release.
April 21,                      Included new SLX platforms. Refer to the Hardware Matrix on page 20 for details on the platforms and their PINs.
January                        Updated document to reflect Extreme's acquisition of Brocade's data center networking business.
April                          Format change.

About Extreme Networks
Extreme Networks (NASDAQ: EXTR) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection. Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility. To help ensure a complete solution, Extreme Networks partners with world-class IT companies and provides comprehensive education, support, and professional services offerings.

Introduction

The IP fabric architecture is targeted at large enterprises and data centers planning to migrate from traditional Layer 2 fabrics to a Layer 3 fabric. This document is about building IP fabrics with Extreme VDX and SLX switching platforms. The SLX series of switches and routers are Extreme's next-generation platforms addressing the scale requirements of MSDC customers. The configurations and design practices documented here are fully validated and conform to the IP fabric reference architectures. The intention of this Extreme Validated Design document is to provide reference configurations and document best practices for building cloud-scale data-center networks using VDX and SLX switches and IP fabric architectures. Note that this document does not cover the network virtualization or Layer 2/Layer 3 multitenancy aspects of IP fabric. Network virtualization in IP fabric is covered in the Network Virtualization in IP Fabric with BGP EVPN Extreme Validated Design document [1].

Terminology

Term    Description
ARP     Address Resolution Protocol
AS      Autonomous System
ASN     Autonomous System Number
BGP     Border Gateway Protocol
ebgp    External Border Gateway Protocol
ECMP    Equal Cost Multi-Path
ibgp    Internal Border Gateway Protocol
IP      Internet Protocol
MCT     Multi-Chassis Trunk
ND      Neighbor Discovery
NLRI    Network Layer Reachability Information
ORF     Outbound Route Filtering
PoD     Point of Delivery
ToR     Top of Rack switch
URIB    Unicast Route Information Base
vlag    Virtual Link Aggregation Group
VLAN    Virtual Local Area Network
VM      Virtual Machine


IP Fabric Overview

Evolution of Data Center Fabrics
Leaf-Spine Architecture
IP Fabric Control Plane

Extreme IP fabric provides a Layer 3 Clos deployment architecture for data center sites. It is a paradigm shift from traditional Layer 2 switching fabrics: with Extreme IP fabric, all links in the Clos topology are Layer 3 links. In traditional 2-tier access/aggregation topologies and L2 fabrics, the L2/L3 demarcation happens on a device that is typically more than a hop away from the servers; in an IP fabric, the L2/L3 boundary is pushed to the ToR or leaf node itself (AKA routing to the ToR). Leafs advertise the server subnets attached to them directly into the routing control-plane protocol. Modern data centers have converged on BGP as the preferred control-plane protocol. Because the infrastructure is built on IP, the fabric gains loop-free communication using industry-standard routing protocols, ECMP, very high solution scale, and standards-based interoperability.

Evolution of Data Center Fabrics

Layer 2 Aggregation
Figure 1 depicts the aggregation of several L2 switches into a pair of aggregation devices in MLAG mode. This aggregation pair acts as the L2/L3 boundary; in other words, it aggregates all server VLANs and acts as the first-hop router for those VLANs. It also provides connectivity between the data center and external networks. (The L2/L3 boundary is pushed to a third layer in certain scale-up topologies.)

FIGURE 1 L2 Aggregation

This model naturally provides L2 extension between racks, but there are trade-offs. Most of the network functions are concentrated on the aggregation pair: the number of ToRs, VLANs, MAC entries, IP subnets, ARP/ND entries, routes, and so on that the entire fabric can support is determined by these two devices, and the scale and diameter of the network (the number of racks) is limited by them as well. The VLANs or broadcast domains must also be pruned properly according to the membership or interest in each rack to avoid large broadcast domains.

Layer 2 Fabric Architectures
These fabrics aim to provide Layer 2 services by borrowing routing concepts from Layer 3 networks, such as ECMP. The VCS fabric is one example: host reachability is computed based on reachability to the switch to which the host is connected, much like a path to a subnet is reachable through the router to which that subnet is connected. The fabric can be modeled as a leaf/spine Clos fabric as shown in Figure 2, with the spines typically acting as the first-hop routers for the server VLANs. Compared to the model shown in Figure 1, this allows scaling out the number of spines to increase the bandwidth available to the ToRs, thereby reducing oversubscription.

FIGURE 2 L2 Fabric

IP Fabric (AKA Routing to ToR)
For very large enterprise networks or cloud service providers (SaaS or PaaS providers), L2 fabrics and L2 scale-out architectures are not scalable enough to meet the infrastructure requirements. IP fabric, as the name suggests, is built on IP: the fabric nodes (the ToRs and spines) are interconnected using L3 links instead of switch ports or TRILL-capable ports, and a Layer 3 routing control protocol runs between these nodes to provide reachability between the ToRs. This type of fabric takes advantage of all the best practices of L3 routing and the L3 data plane. A smaller-scale topology might benefit from a link-state protocol such as OSPF; large-scale topologies, however, typically use BGP. This Extreme validated design recommends BGP as the protocol for underlay network reachability.

In an IP fabric, the L2/L3 boundary is pushed to the ToR, and the server VLANs are terminated on the ToR of each rack. This gives the flexibility of reusing the same VLAN numbers on different racks while still keeping them as distinct broadcast domains. It is also a much more scalable architecture, since each ToR handles only a rack of compute, storage, and appliances, compared to spine/aggregation devices handling several racks of resources. Route scale is still important, but techniques such as route aggregation, default routing toward the spines, and route filtering can be used to contain it (an illustrative sketch follows below).

IP fabric is targeted at customers who have simple L3 requirements of aggregating several L3 subnets, and at those who would like to build an IP underlay infrastructure to meet the requirements of overlay networking, either host-based or network-based. Overlay-based network virtualization is not within the scope of this document.

FIGURE 3 IP Fabric (Routing to ToR)
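As an illustration of these scale-containment techniques, the following minimal sketch shows a leaf summarizing its server subnets and a spine sending only a default route toward a leaf peer group. The prefixes, AS numbers, and peer-group name are hypothetical and are not part of the validated configuration; the commands follow the BGP configuration style used later in this document and should be verified against the platform command reference.

Leaf (advertise one summary instead of the individual server subnets)

router bgp
 local-as 4210000011
 capability as4-enable
 address-family ipv4 unicast
  aggregate-address 10.10.0.0/16 summary-only

Spine (advertise only a default route to the leaf peer group)

router bgp
 local-as 4210000100
 capability as4-enable
 address-family ipv4 unicast
  neighbor leaf-group1 default-originate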

Leaf-Spine Architecture

Leaf-Spine Layer 3 Clos Topology (Two-Tier)
The leaf-spine topology has become the de facto standard for networking topologies when building medium- to large-scale data center infrastructures. The leaf-spine topology is adapted from Clos telecommunications networks. The Extreme IP fabric within a PoD resembles a two-tier or 3-stage folded Clos fabric. The two-tier leaf-spine topology is shown in Figure 4.

The bottom layer of the IP fabric has leaf devices (top-of-rack switches), and the top layer has spines. The role of the leaf is to provide connectivity to the endpoints in the data center network. These endpoints include compute servers and storage devices, as well as other networking devices like routers, switches, load balancers, firewalls, and any other physical or virtual networking endpoints. Because all endpoints connect only to the leaf, policy enforcement, including security, traffic-path selection, QoS marking, traffic policing, and shaping, is implemented at the leaf. The leafs act as the L2/L3 boundary for the server segments in an IP fabric. The role of the spine is to provide connectivity between leafs; the spine participates in the control-plane and data-plane operations for traffic forwarding between leafs.

FIGURE 4 Leaf-Spine L3 Clos Topology

As a design principle, the following requirements apply to the leaf-spine topology:
- Each leaf connects to all spines in the network through 40-Gbps Ethernet links.
- Spines are not interconnected with each other.
- Leafs are not interconnected with each other for data-plane purposes. (Two leafs may be interconnected for control-plane operations such as forming a server-facing vlag. This is referred to as a vlag pair leaf.)
- The network endpoints do not connect to the spines.

This type of topology has predictable latency and also provides ECMP forwarding in the underlay network. The number of hops between two leaf devices within the fabric is always two. This topology also allows easier scale-out in the horizontal direction as the data center expands; its scale is limited by the port density and bandwidth supported by the spine devices. This validated design recommends using the same hardware throughout the spine layer; mixing different hardware is not recommended.

Optimized 5-Stage Layer 3 Clos Topology (Three-Tier)
Multiple PoDs based on leaf-spine topologies can be connected for higher scale in an optimized 5-stage folded Clos (three-tier) topology. This topology adds a new tier to the network, known as the super-spine. Super-spines function similarly to spines: they participate in the BGP control plane and provide data-plane forwarding between the PoDs and from the PoDs to destinations outside the fabric via the border leafs. No endpoints are connected to the super-spines. Figure 5 shows four super-spine switches connecting the spine switches across multiple data center PoDs. The connection between the spines and the super-spines follows the Clos principles:
- Each spine connects to all super-spines in the network.
- Neither spines nor super-spines are interconnected with each other.

FIGURE 5 Optimized 5-Stage L3 Clos Topology

IP Fabric Control Plane
From the control-plane perspective, there are two deployment options:
- Pervasive ebgp
- ibgp within a PoD

In this validated design, we recommend ebgp as the control plane in the fabric.

IP fabric is also referred to as routing to ToR, in contrast with traditional multitier access and aggregation networks. In those traditional networks, the L2/L3 boundary is on the aggregation devices; in an IP fabric, this boundary is moved to the edge or leaf. The leaf nodes directly advertise their server subnets into the BGP control plane, and reachability between the server subnets on various racks is established using control-plane learning. Moving the intelligence to the leaf also makes it easy to scale out the number of spines based on the bandwidth or oversubscription requirements of the network. Oversubscription is the ratio between the aggregate bandwidth of the server-facing ports and the aggregate bandwidth of the uplink ports (from the leaf to the spine layer); if the oversubscription is too high, additional spines may be added, as the example below illustrates.
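For example, using illustrative numbers that are not from the validated topology: a leaf with 48 x 10-Gbps server-facing ports carries up to 480 Gbps of downlink traffic. With 4 x 40-Gbps uplinks (one to each of four spines), it has 160 Gbps of uplink capacity, for an oversubscription ratio of 480:160, or 3:1. Adding two more spines (6 x 40 Gbps = 240 Gbps of uplinks) reduces the ratio to 2:1.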

Pervasive ebgp
This deployment model refers to the use of ebgp peering between the leaf and the spine in the fabric. This design, using ebgp as the routing protocol within the data center, is based on the IETF draft Use of BGP for Routing in Large-Scale Data Centers [2]. In this model, each leaf node is assigned its own autonomous system (AS) number. The other nodes are grouped based on their role in the fabric, and each of these groups is assigned a separate AS number, as shown in Figure 6. Using ebgp in an IP fabric is simple and also provides the ability to apply BGP policies for traffic engineering on a per-leaf or per-rack basis, since each leaf or rack in a PoD is assigned a unique AS number.

Private AS numbers are used in the fabric. One design consideration for the AS number assignment is that the 2-byte AS space provides a maximum of 1023 private AS numbers (ASN 64512 to ASN 65534); if the IP fabric is larger than 1023 devices, we recommend using 4-byte private AS numbers (ASN 4,200,000,000 to ASN 4,294,967,294).

- Each leaf in a PoD is assigned its own AS number.
- Leafs advertise the server subnets directly into BGP.
- All spines inside a PoD belong to one AS.
- All super-spines are configured in one AS.
- Edge or border leafs belong to a separate AS.
- Each leaf peers with all spines using ebgp.
- Each spine peers with all super-spines using ebgp.
- Each border leaf peers with all super-spines using ebgp.
- No BGP peering occurs between nodes in the same layer.

(A minimal sketch of this AS-numbering scheme follows Figure 6.)

FIGURE 6 IP Fabric with ebgp as the Control Protocol
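The following minimal sketch illustrates the AS-numbering scheme on one leaf and one spine, using ebgp peering over the fabric links. All AS numbers and neighbor addresses are hypothetical examples from the 4-byte private range; the complete validated peer-group configurations appear in the deployment model sections later in this document.

Leaf (its own private AS, peering with two spines that share the spine AS)

router bgp
 local-as 4210000011
 capability as4-enable
 neighbor 10.0.1.0 remote-as 4210000100
 neighbor 10.0.2.0 remote-as 4210000100
 address-family ipv4 unicast
  redistribute connected
  maximum-paths 8

Spine (shared spine AS, peering with each leaf in that leaf's own AS)

router bgp
 local-as 4210000100
 capability as4-enable
 neighbor 10.0.1.1 remote-as 4210000011
 neighbor 10.0.3.1 remote-as 4210000012
 address-family ipv4 unicast
  maximum-paths 8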

ibgp for Routing Inside a PoD
This model is given here for informational purposes and is not included in the validated design section of this document.

In this deployment model, each PoD and edge services PoD is configured with a unique AS number, as shown in Figure 7. The spines and leafs in a PoD are configured with the same AS number. The ibgp design differs from the ebgp design because ibgp requires a full mesh of sessions among all BGP-enabled devices in an IP fabric. To avoid the complexity of a full mesh, the spines act as route reflectors toward the leaf nodes inside the PoD. ebgp is used to peer between spines and super-spines. The super-spine layer is configured with a unique AS number; all super-spines use the same AS number. (A minimal sketch of the route-reflector configuration follows Figure 7.)

FIGURE 7 IP Fabric with ibgp as the Control Protocol Inside a PoD
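Because this model is not part of the validated design, the following is only a minimal sketch of a spine acting as an ibgp route reflector for two leafs in the same PoD. The AS number and neighbor addresses are hypothetical, and the exact route-reflector syntax should be verified against the platform command reference.

Spine (ibgp route reflector inside the PoD AS)

router bgp
 local-as 65001
 neighbor 10.1.0.1 remote-as 65001
 neighbor 10.1.0.3 remote-as 65001
 address-family ipv4 unicast
  neighbor 10.1.0.1 route-reflector-client
  neighbor 10.1.0.3 route-reflector-client
  maximum-paths 8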


IP Fabric Validated Designs

Pervasive ebgp
Hardware Matrix
IP Fabric Configuration
Illustration Examples

This section provides the details of the key deployment models with the validated configuration templates. Extreme validated design recommends two models for IP fabric deployment; these models are categorized based on how the underlay is designed for interconnecting the leaf, spine, super-spine, and border-leaf nodes. Only the pervasive ebgp model is included in this document, for both 3-stage and 5-stage fabrics.

Pervasive ebgp
Figure 8 shows the design for a 5-stage IP fabric using ebgp as the control protocol. Note that the border leafs are connected to the super-spines in this design. For small topologies, a 3-stage fabric may be sufficient; as shown in Figure 8, each PoD is itself a 3-stage fabric. For 3-stage fabrics, the border leafs are directly connected to the spines.

FIGURE 8 Pervasive ebgp in an Optimized 5-Stage IP Fabric

Hardware Matrix

TABLE 1 Extreme Switch Platforms Supported in IP Fabric

Places in the Network    Platform    Minimum Software Version
Leaf Nodes               SLX 9140    SLX-OS 17s.1.02
                         VDX 6740    Network OS 7.2.0a
                         VDX S       Network OS 7.2.0a
Spine Nodes              SLX 9240    SLX-OS 17s.1.02
                         SLX 9850    SLX-OS 17r.1.01b
Super-Spine Nodes        SLX 9850    SLX-OS 17r.1.01b
Edge or Border Leaf      SLX 9140    SLX-OS 17s.1.02
                         SLX 9540    SLX-OS 17r.1.01b

IP Fabric Configuration
This section covers the provisioning and validation of the IP fabric network topology. The configuration is given in four parts:
1. All common configuration required on all nodes in the fabric.
2. The configuration required on the server-facing side of the ToRs or leafs.
3. The MCT configuration required on edge leafs.
4. The BGP control plane, which is split into two models: 3-stage and 5-stage fabrics.

23 Common Configuration on All Nodes in the Fabric Node ID Configuration for VDX Platforms IP Fabric Validated Designs This configuration is applicable only for the VDX platforms used as a dual ToR or redundant leaf in the IP fabric. SLX platforms do not require this configuration. All VDX platforms are set to a VCS ID of 1 by default. Two nodes having a common VCS ID will form a VCS fabric between them. Since these nodes will be independent in the IP fabric, we need to ensure that they do not form a VCS fabric between them. This is achieved by configuring a unique VCS ID on each node. In the validated design, each VDX node is configured with a unique VCS ID. The RBridge ID may be re-used. We recommend using RBridge ID 1 for individual leafs and RBridge IDs 1 and 2 for the vlag pair. Note: this is done from the exec prompt POD1-Leaf5# vcs vcsid 409 set-rbridge-id 1 The vlag pair leaf is assigned its own unique VCS ID, and each node in the vlag pair has a separate RBridge ID. For example, in the validated design, Leaf1 is a 2-node vlag pair. vlag Peer 1 POD1-Leaf1-1# vcs vcsid 405 set-rbridge-id 1 vlag Peer 2 POD1-Leaf1-2# vcs vcsid 405 set-rbridge-id 2 Verify the Configuration In the following output, RBridge 2 is the principal switch. All configuration for both nodes in the vlag leaf can be done from this principal switch. POD1-Leaf1-1# show vcs Config Mode : Distributed VCS Mode : Logical Chassis VCS ID : 405 VCS GUID : a40a9358-9c72-4bc6-a1e9-86c1c7aff2bd Total Number of Nodes : 2 Rbridge-Id WWN Management IP VCS Status Fabric Status HostName :00:00:27:F8:F0:76:E0* Online Online POD1-Leaf1-1 2 >10:00:00:27:F8:F0:5B:B Online Online POD1-Leaf1-2 23

Fabric Infrastructure Links
All nodes in the IP fabric (leafs, spines, and super-spines) are interconnected with Layer 3 interfaces. These links are referred to as fabric infrastructure links. In the validated design, 40-Gbps or 100-Gbps links are used between the nodes. All fabric links between two layers (say, the leaf layer and the spine layer) must be of identical bandwidth; mixing links of different speeds must be avoided. All these links are configured as Layer 3 interfaces with a /31 IPv4 address and a /127 IPv6 address. The MTU for these links is set to jumbo MTU. Disable the fabric ISL and trunk features on VDX platforms.

VDX Platforms

interface FortyGigabitEthernet 1/0/49
 no fabric isl
 no fabric trunk
 ip address /31
 ipv6 address fdf8:10:0:1::1/127
 ipv6 mtu 9100

SLX Platforms

interface Ethernet 3/2
 ip address /31
 ipv6 address fdf8:10:2:1::/127
 ipv6 mtu 9100

25 Loopback Interfaces and Router ID Each device in the fabric needs one loopback interface with a unique IPv4 address for the purpose of the router ID. VDX vlag Pair Leaf/ToR Note that the configuration for the vlag ToR is done from the principal switch. Use the show vcs command. rbridge-id 1 interface Loopback 2 ipv6 address fdf8:10:122:1::2/128 ip address /32 ip router-id rbridge-id 2 interface Loopback 2 ipv6 address fdf8:10:122:1::3/128 ip address /32 ip router-id IP Fabric Validated Designs Individual Nonredundant VDX Leaf For the nonredundant VDX leaf, there is just one node. rbridge-id 1 interface Loopback 2 ipv6 address fdf8:10:122:5::2/128 ip address /32 ip router-id SLX Platforms SLX platforms do not need an RBridge configuration as do the VDX platforms. interface Loopback 2 ipv6 address fdf8:10:125:1::2/128 ip address /32 ip router-id

26 IP Fabric Validated Designs Server-Facing Links and Networks on ToRs VDX ToRs Server VLANs interface Vlan 101 interface Vlan VE Interfaces for the Server VLANs VE interfaces are configured to provide the first-hop routing functionality for the hosts in the VLAN. The following configuration parameters are required: The IPv4 address and prefix length for the VLAN subnet The IPv6 address and prefix length for the VLAN subnet Jumbo MTU for both IPv4 and IPv6 packet forwarding rbridge-id 1 interface Ve 101 ipv6 address fdf8:10:5:101::1/64 ipv6 mtu 9000 ip mtu 9000 ip address /24 FHRP for the vlag Pair ToRs For the vlag pair leaf, VRRPe is required for first-hop router functionality in the server VLANs. With VRRPe extension, both switches in the pair act as the active router in the VLANs. One VRRP group is configured for IPv4 and one is configured for IPv6. Hosts use the virtual IP as the gateway address. rbridge-id 1 interface Ve 101 ipv6 address fdf8:10:1:101::1/64 ipv6 mtu 9000 ipv6 vrrp-extended-group 100 virtual-ip fdf8:10:1:101::254 preempt-mode priority 150 advertisement-interval 1 ip mtu 9000 ip address /24 vrrp-extended-group 10 virtual-ip advertisement-interval 1 preempt-mode priority 150 rbridge-id 2 interface Ve 101 ipv6 address fdf8:10:1:101::2/64 ipv6 mtu 9000 ipv6 vrrp-extended-group 100 virtual-ip fdf8:10:1:101::254 preempt-mode priority 140 advertisement-interval 1 ip mtu 9000 ip address /24 vrrp-extended-group 10 virtual-ip advertisement-interval 1 preempt-mode priority

27 Server Access Links on the Individual Leaf/ToR The server-facing or access links are on the leaf nodes. In the validated design, 10-Gbps links are used for server-facing VLANs. These links are configured as Layer 2 trunk or access ports with VLANs associated. Disable fabric ISL and trunk features. Spanning tree is disabled. 1 interface TenGigabitEthernet 1/0/4 switchport switchport mode trunk switchport trunk allowed vlan add switchport trunk tag native-vlan spanning-tree shutdown no fabric isl no fabric trunk Enabled as a trunk port Add the required VLANs to the trunk port IP Fabric Validated Designs Server Access Links on the vlag Pair/ToR vlag configuration involves the following: Node ID configuration on the pair of devices Inter-switch links or ISL configuration on both devices Configuration of server-facing port channels and the required VLANs on them Node ID Configuration on the vlag Pair Refer to Node ID Configuration for VDX Platforms on page 20 for assigning the node ID to the vlag pair. Pod1-Leaf1-1, rbridge-id 1 Pod1-Leaf1-2, rbridge-id 2 POD1-Leaf1-1# show vcs Config Mode : Distributed VCS Mode : Logical Chassis VCS ID : 405 VCS GUID : c98c32fb cc5-a7fa-aeb f9 Total Number of Nodes : 2 Rbridge-Id WWN Management IP VCS Status Fabric Status HostName >10:00:00:27:F8:F0:76:E0* Online Online POD1-Leaf :100:0:fa48:34:: :00:00:27:F8:F0:5B:B Online Online POD1-Leaf :100:0:fa48:34::46 1 If there are L2 switches or bridges between the leaf and servers, spanning tree must be d. If there is a possibility of enabling bridges inadvertently under the leaf nodes, we recommend enabling spanning tree and configuring the server ports as edge ports. POD1-Leaf3(conf-if-te-1/0/4)# spanning-tree autoedge 27

28 IP Fabric Validated Designs ISL Configuration As shown in the illustration below, the vlag pair is interconnected by two 10-Gbps Ethernet ports for ISL. Server Port-Channel Configuration In the configuration shown below, port channel 1 is configured as a vlag. interface TenGigabitEthernet 1/0/1 fabric isl fabric trunk interface TenGigabitEthernet 1/0/2 fabric isl fabric trunk interface TenGigabitEthernet 2/0/1 fabric isl fabric trunk interface TenGigabitEthernet 2/0/2 fabric isl fabric trunk Rbridge 1 Rbridge 2 Te 1/0/9 ISL Links Te 2/0/9 vlag interface Port-channel 1 vlag ignore-split mtu 9022 switchport switchport mode trunk switchport trunk allowed vlan add switchport trunk tag native-vlan spanning-tree shutdown interface TenGigabitEthernet 1/0/9 channel-group 1 mode active type standard no fabric isl no fabric trunk lacp timeout long interface TenGigabitEthernet 2/0/9 channel-group 1 mode active type standard no fabric isl no fabric trunk lacp timeout long 28

29 SLX 9140 ToRs Server VLANs On SLX platforms, the router-interface for a VLAN is specified under the VLAN as shown below. IP Fabric Validated Designs vlan 76 router-interface Ve 76 vlan 77 router-interface Ve 77 VE Interfaces for the Server VLANs VE interfaces are configured to provide the first-hop routing functionality for the hosts in the VLAN. The following configuration parameters are required: The IPv4 address and prefix length for the VLAN subnet The IPv6 address and prefix length for the VLAN subnet Jumbo MTU for both IPv4 and IPv6 packet forwarding interface Ve 76 ipv6 address fdf8:10:0:4c::1/64 ipv6 mtu 9000 ip mtu 9000 ip address /24 FHRP for the MCT Pair ToR (Redundant Dual ToR) The SLX platforms use MCT for ToR redundancy. VRRPe is required for first-hop router functionality in the server VLANs. With VRRPe extension, both switches in the pair act as the active router in the VLANs. One VRRP group is configured for IPv4 and one for IPv6. Hosts use the virtual IP as the gateway address. interface Ve 80 ip mtu 9000 ip address /24 ipv6 address fdf8:10:0:50::2/96 ipv6 mtu 9000 ipv6 vrrp-extended-group 2 virtual-ip fdf8:10:0:50::254 no preempt-mode vrrp-extended-group 1 virtual-ip no preempt-mode interface Ve 80 ip mtu 9000 ip address /24 ipv6 address fdf8:10:0:50::2/96 ipv6 mtu 9000 ipv6 vrrp-extended-group 2 virtual-ip fdf8:10:0:50::254 no preempt-mode vrrp-extended-group 1 virtual-ip no preempt-mode Server Access Links on the Individual Leaf/ToR interface Ethernet 0/20 mtu 9022 switchport switchport mode trunk switchport trunk allowed vlan add switchport trunk tag native-vlan Enabled as a trunk port Add the required VLANs to the trunk port 29

30 IP Fabric Validated Designs Server Access Links on the MCT Pair/ToR On SLX leafs, server redundancy is achieved by the multi-chassis trunking with the LAG. This is similar to the vlag functionality available on VDX platforms. ICL Configuration As shown in the illustration below, the MCT pair is interconnected by two 40-Gbps Ethernet ports for ICL. Cluster Configuration The cluster configuration requires the following: Peer-interface port-channel or peering interface (port-channel 10). Control VLAN and VE interface (VLAN 4090 and VE 4090). Peer IP address (the IP address configured on the control VE of the peer). VLANs configured on the cluster. 30

31 Server or Client Port-Channel Configuration IP Fabric Validated Designs In the configuration shown below, port channel 101 is configured as a multi-chassis LAG port channel. In the configuration shown client Server1 1 identifier 1 on both peers designates the client port channel as a dual-homed LAG. This identifier must match on both peers. interface Ethernet 0/49 speed channel-group 10 mode active type standard interface Ethernet 0/50 speed channel-group 10 mode active type standard interface Port-channel 10 speed MCT Peer1 E 0/1 E 0/49 E 0/50 E 0/1 mlag interface Ethernet 0/49 speed channel-group 10 mode active type standard interface Ethernet 0/50 speed channel-group 10 mode active type standard interface Port-channel 10 speed MCT Peer2 vlan 4090 router-interface Ve 4090 interface Ve 4090 ip address /31 cluster pod1-cluster 1 peer-interface Port-channel 10 peer df-load-balance deploy client Server1 1 client-interface Port-channel 101 deploy evpn default route-target both auto ignore-as rd auto vlan add interface Port-channel 101 speed mtu 9022 switchport switchport mode trunk-no-default-native switchport trunk allowed vlan add interface Ethernet 0/1 speed channel-group 101 mode active type standard router bgp local-as capability as4- neighbor remote-as neighbor bfd address-family evpn neighbor activate vlan 4090 router-interface Ve 4090 interface Ve 4090 ip address /31 cluster pod1-cluster 1 peer-interface Port-channel 10 peer df-load-balance deploy client Server1 1 client-interface Port-channel 101 deploy evpn default route-target both auto ignore-as rd auto vlan add interface Port-channel 101 speed mtu 9022 switchport switchport mode trunk-no-default-native switchport trunk allowed vlan add interface Ethernet 0/1 speed channel-group 101 mode active type standard router bgp local-as capability as4- neighbor remote-as neighbor bfd address-family evpn neighbor activate 31

32 IP Fabric Validated Designs MCT on SLX 9540 Edge Leafs The edge leafs are configured in an MCT cluster for dual-homed firewall or appliance connectivity. The edge leafs are typically configured with one VRF for DC routes and with another VRF (or default VRF) for Internet connectivity. (Deployment designs may vary. A sample validated design with configuration is shown in this section.) In the configuration example shown here, the internal fabric routes are downloaded into the DC-VRF on the edge leafs. The Internet connectivity is provided in the default VRF. The appliance is connected between these two VRFs using VE interfaces over an MCT client port channel or LAG for redundancy or dual-homing to the two edge leafs. Steps involved in configuring the edge leafs: Configure an MCT cluster with two edge leafs. Create a VRF: DC-VRF. Configure fabric links connecting to the spines in the DC-VRF. Configure Internet-facing links in the default VRF. Configure two VE interfaces: one in the DC-VRF and another in the default VRF. (In the figure above, VE-2 is in the DC-VRF and VE-3 is in the default VRF.) Configure VRRP groups for these two VE interfaces. These VRRP VIPs act as gateways for the firewall or service appliance. In the DC-VRF, configure a static default route toward the firewall. Originate the default route into BGP toward the fabric. The global or default VRF may have a default route toward the Internet edge. (The configuration templates shown below are applicable to both edge leafs in the MCT cluster. Where there are differences, the configuration blocks are shown side-by-side.) 32

VRF Configuration

vrf DC-VRF
 address-family ipv4 unicast
  ip route /
 address-family ipv6 unicast
  ipv6 route ::/0 fdf8:10:0:2::3

Default routes pointing to the service appliance.

VLAN/VE/VRRP Configuration
(An illustrative sketch of this configuration is shown after the BGP configuration below.)

BGP Configuration

router bgp
 address-family ipv4 unicast vrf DC-VRF
  graceful-restart
  default-information-originate
 address-family ipv6 unicast vrf DC-VRF
  graceful-restart
  default-information-originate

Originate default route into the fabric.
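The following is a minimal sketch of what the VLAN/VE/VRRP-E configuration referenced above could look like on one edge leaf, with VE 2 in the DC-VRF and VE 3 in the default VRF, following the VE and VRRP-E style used for the SLX leafs earlier in this document. The IP addresses and VRRP-E group numbers are hypothetical, and the VRF binding command should be verified against the SLX 9540 command reference.

vlan 2
 router-interface Ve 2
vlan 3
 router-interface Ve 3

VE toward the firewall in the DC-VRF (fabric side)

interface Ve 2
 vrf forwarding DC-VRF
 ip address 10.200.2.1/24
 vrrp-extended-group 2
  virtual-ip 10.200.2.254

VE toward the firewall in the default VRF (Internet side)

interface Ve 3
 ip address 10.200.3.1/24
 vrrp-extended-group 3
  virtual-ip 10.200.3.254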

34 IP Fabric Validated Designs MCT Cluster Configuration There are a few minor differences in the MCT configuration for the SLX 9540 platform compared to the SLX 9140 platforms used as leafs. interface Ethernet 0/11 channel-group 1 mode active type standard interface Ethernet 0/12 channel-group 1 mode active type standard interface Port-channel 1 switchport switchport mode trunk switchport trunk allowed vlan add 4 E 0/ MCT peer MCT Peer2 E 0/12 E 0/9 E 0/9 interface Ethernet 0/11 channel-group 1 mode active type standard interface Ethernet 0/12 channel-group 1 mode active type standard interface Port-channel 1 switchport switchport Rbridge 2 mode trunk switchport trunk allowed vlan add 4 vlan 4 router-interface Ve 4 interface Ve 4 ip address /31 ipv6 mtu 9100 interface Loopback 1 ip address /32 ip router-id ip route / cluster Edge-cluster 1 member vlan add 2-3 peer-interface Ve 4 peer client-isolation-strict deploy client client-interface Port-channel 2 esi 0:0:0:0:0:0:0:1:1 deploy interface Port-channel 2 switchport switchport mode trunk switchport trunk allowed vlan add 2-3 interface Ethernet 0/9 channel-group 2 mode active type standard router bgp local-as capability as4- fast-external-fallover neighbor remote-as neighbor update-source loopback 1 neighbor bfd address-family l2vpn evpn neighbor activate neighbor encapsulation mpls vlan 4 router-interface Ve 4 interface Ve 4 ip address /31 ipv6 mtu 9100 interface Loopback 1 ip address /32 ip router-id ip route / cluster Edge-cluster 1 member vlan add 2-3 peer-interface Ve 4 peer client-isolation-strict deploy client client-interface Port-channel 2 esi 0:0:0:0:0:0:0:1:1 deploy interface Port-channel 2 switchport switchport mode trunk switchport trunk allowed vlan add 2-3 interface Ethernet 0/9 channel-group 2 mode active type standard router bgp local-as capability as4- fast-external-fallover neighbor remote-as neighbor update-source loopback 1 neighbor bfd address-family l2vpn evpn neighbor activate neighbor encapsulation mpls 34

35 IP Fabric Validated Designs ebgp Control-Plane Configuration Deployment Model-1: ebgp Configuration for Optimized 5-Stage Clos Consider the following key points as a design principle for ebgp peering between the fabric nodes. Refer to Figure 8 on page 19 for topology information. In the validated design, a 4-byte private AS range is used. Each leaf is in a private AS. The vlag pair or MCT pair leaf (dual or redundant ToR) is considered as one leaf even though the control plane is independent. Both devices in the pair are in the same private AS. All spines within a PoD are in one private AS. All super-spines are in one private AS. All border leafs are in one private AS. BGP peering between two nodes is established over the fabric link between the nodes. Fabric link IPv4 and IPv6 addresses are used as the source and destination for the BGP sessions between the two nodes. ebgp IPv4 and IPv6 peering is established with MD5 authentication between the layers of nodes. Multiple AFI exchange over one peering session 22 (either IPv4 or IPv6) is planned in an upcoming software release. Until then, we must have separate peering IPv4 peering for IPv4 NLRI exchange and IPv6 peering for IPv6 NLRI exchange. Use peer groups for common configuration related to the BGP peers. Enable MD5 authentication on all BGP neighbors in the peer groups. Enable BFD on all BGP neighbors in the peer groups. Enable both IPv4 and IPv6 address families. Advertise the IPv4 and IPv6 addresses of the loopback interface into the respective address families. This is useful for debugging and troubleshooting reachability between the nodes. Activate the IPv6 peer groups under the IPv6 address family. Spine Configuration All spines within a PoD have a similar configuration. Peer groups are used to simplify the configurations and also for efficiency in BGP update processing. In a 5-stage fabric, spines are connected to leafs inside their PoD and to super-spines. Configure the directly connected leaf IP addresses in one peer group called leaf-group. Configure the directly connected leaf IPv6 addresses in a peer group called leaf-group-ipv6. Configure the directly connected super-spine IP addresses in a peer group called superspine-group. Configure the directly connected super-spine IPv6 addresses into another peer group called superspine-group-ipv6. 2 Exchanging both IPv4 and IPv6 AFI over a single BGP peering session is not supported. Support is planned in the upcoming releases of NOS and SLX-OS. 35

36 IP Fabric Validated Designs POD1-Spine1 router bgp local-as capability as4- fast-external-fallover neighbor leaf-group1 peer-group neighbor leaf-group1 description To Leaf AS-6500X.1 from neighbor leaf-group1 password $9$MCgKGaNt6OASX68/7TC6Lw== neighbor leaf-group1 bfd neighbor remote-as neighbor peer-group leaf-group1 neighbor remote-as neighbor peer-group leaf-group1 neighbor remote-as neighbor peer-group leaf-group1 neighbor remote-as neighbor peer-group leaf-group1 neighbor leaf-group1-ipv6 peer-group neighbor leaf-group1-ipv6 description To Leaf AS-6500X.1 neighbor leaf-group1-ipv6 password $9$MCgKGaNt6OASX68/7TC6Lw== neighbor leaf-group1-ipv6 bfd neighbor fdf8:10:0:1:1::1 remote-as neighbor fdf8:10:0:1:1::1 peer-group leaf-group1-ipv6 neighbor fdf8:10:0:2:1::1 remote-as neighbor fdf8:10:0:2:1::1 peer-group leaf-group1-ipv6 neighbor fdf8:10:0:3:1::1 remote-as neighbor fdf8:10:0:3:1::1 peer-group leaf-group1-ipv6 neighbor fdf8:10:0:4:1::1 remote-as neighbor fdf8:10:0:4:1::1 peer-group leaf-group1-ipv6 neighbor super-spine-group1 peer-group neighbor super-spine-group1 remote-as neighbor super-spine-group1 description To super-spine AS neighbor super-spine-group1 password $9$MCgKGaNt6OASX68/7TC6Lw== neighbor super-spine-group1 bfd neighbor super-spine-group1-ipv6 peer-group neighbor super-spine-group1-ipv6 remote-as neighbor super-spine-group1-ipv6 description To super-spine neighbor super-spine-group1-ipv6 password $9$MCgKGaNt6OASX68/7TC6Lw== neighbor super-spine-group1-ipv6 bfd neighbor peer-group super-spine-group1 neighbor peer-group super-spine-group1 neighbor fdf8:10:0:51:1::1 peer-group super-spine-group1-ipv6 neighbor fdf8:10:0:51:2::1 peer-group super-spine-group1-ipv6 address-family ipv4 unicast neighbor leaf-group1 -peer-as-check maximum-paths 8 graceful-restart address-family ipv6 unicast neighbor super-spine-group1-ipv6 activate neighbor super-spine-group1-ipv6 -peer-as-check neighbor leaf-group1-ipv6 activate neighbor leaf-group1-ipv6 -peer-as-check maximum-paths 8 graceful-restart 4 Byte AS number of this device Configure a peer-group for ipv4 exchange with Leafs. Enable MD5 authentication and BFD Add the directly connected leafs IPv4 addresses into leaf-group. Each leaf is in a different AS, so it must be specified seperately and not with peer-group In case of a dual-tor, both nodes are in one AS. Configure a peer-group for IPv6 exchange with Leafs. Enable MD5 authentication. Add IPv6 addresses of the directly connected Leafs to this peer-group. Configure two peer-groups for Superspines one for Ipv4 and one for IPv6 Enable ipv4 Address-Family Enable graceful restart Enable IPv6 Address-Family. Explicitly activate the peer-groups created for IPv6 exchange Each spine should establish IPv4 and IPv6 peering with all leafs inside the PoD and super-spines. (Note that the leaf nodes in a vlag pair share one common AS number between them, and super-spines belong to one AS number.) 36

37 IP Fabric Validated Designs pod1-spine1# show ip bgp summary BGP4 Summary Router ID: Local AS Number: Confederation Identifier: not configured Confederation Peers: Maximum Number of IP ECMP Paths Supported for Load Sharing: 64 Number of Neighbors Configured: 34, UP: 9 Number of Routes Installed: 810, Uses bytes Number of Routes Advertising to All Neighbors: (926 entries), Uses bytes Number of Attribute Entries Installed: 13, Uses 1352 bytes Neighbor Address AS# State Time Rt:Accepted Filtered Sent ToSend ESTAB 0h53m36s ESTAB 2d 1h 5m ESTAB 10d 1h 0m ESTAB 10d 1h 0m ESTAB 10d 1h 0m ESTAB 10d 1h 0m ESTAB 10d 1h 0m ESTAB 2d10h57m ESTAB 9d22h38m pod1-spine1# show ipv6 bgp summary BGP4 Summary Router ID: Local AS Number: Confederation Identifier: not configured Confederation Peers: Maximum Number of IP ECMP Paths Supported for Load Sharing: 64 Number of Neighbors Configured: 34, UP: 9 Number of Routes Installed: 802, Uses bytes Number of Routes Advertising to All Neighbors: (916 entries), Uses bytes Number of Attribute Entries Installed: 11, Uses 1144 bytes Neighbor Address AS# State Time Rt:Accepted Filtered Sent ToSend fdf8:10:0:1:1:: ESTAB 0h55m41s fdf8:10:0:2:1:: ESTAB 2d 1h 7m fdf8:10:0:3:1:: ESTAB 10d 1h 2m fdf8:10:0:4:1:: ESTAB 10d 1h 2m fdf8:10:0:5:1:: ESTAB 10d 1h 2m fdf8:10:0:6:1:: ESTAB 10d 1h 2m fdf8:10:0:7:1:: ESTAB 10d 1h 2m fdf8:10:0:51:1:: ESTAB 2d11h 0m fdf8:10:0:51:2:: ESTAB 9d22h40m Leaf Configuration All leafs within a PoD have a similar configuration. Peer groups are used to simplify the configuration and also for efficiency in BGP update processing. Leafs are connected to spines only. Configure the directly connected IP addresses of the spines into a peer group spine-group. Configure the directly connected IPv6 addresses of the spines into a peer group spine-group-ipv6. Advertise both IPv4 and IPv6 server subnets. Using a route map, filter the subnets of fabric links from being advertised into BGP and allow only the server subnets and loopback IP address. Advertising loopback IP and IPv6 address helps debug routing and node reachability issues. A sample route map is given below. This can be modified according to the deployment requirements. IPv4 Route-Map Configuration ip prefix-list fabric_links_ip seq 10 permit /0 ge 31 le 31 route-map ToR-map deny 10 match ip address prefix-list fabric_links_ip route-map ToR-map permit 20 IPv6 Route-Map Configuration ipv6 prefix-list fabric_links_ipv6 seq 10 permit ::/0 ge 127 le 127 route-map ToR-map-ipv6 deny 10 match ipv6 address prefix-list fabric_links_ipv6 route-map ToR-map-ipv6 permit 20 37

38 IP Fabric Validated Designs POD1-leaf1-1 router bgp local-as capability as4- fast-external-fallover neighbor spine-group peer-group neighbor spine-group remote-as neighbor spine-group description To spine neighbor spine-group password $9$MCgKGaNt6OASX68/7TC6Lw== neighbor spine-group bfd neighbor peer-group spine-group neighbor peer-group spine-group neighbor peer-group spine-group neighbor peer-group spine-group neighbor spine-group-ipv6 peer-group neighbor spine-group-ipv6 remote-as neighbor spine-group-ipv6 description To spine AS neighbor spine-group-ipv6 password $9$MCgKGaNt6OASX68/7TC6Lw== neighbor spine-group-ipv6 bfd neighbor fdf8:10:0:1:1::2 peer-group spine-group-ipv6 neighbor fdf8:10:0:1:2::2 peer-group spine-group-ipv6 neighbor fdf8:10:0:1:3::2 peer-group spine-group-ipv6 neighbor fdf8:10:0:1:4::2 peer-group spine-group-ipv6 address-family ipv4 unicast redistribute connected route-map ToR-map-ip neighbor spine-group -peer-as-check neighbor route-map out BGP-med maximum-paths 8 graceful-restart address-family ipv6 unicast redistribute connected route-map ToR-map-ipv6 neighbor spine-group-ipv6 activate neighbor spine-group-ipv6 -peer-as-check maximum-paths 8 graceful-restart Peer-group config grouping spines IPv4 addresses. Enable MD5 authentication and BFD Peer-group config grouping spines IPv6 addresses. Enable MD5 authentication and BFD Enable IPv4 Address-Family Redistribute connected to advertise the VLAN subnets and /32 loopback addresses. Enable graceful-restart Enable IPv6 Address-Family. Activate the peer-groups configured for IPv6 route exchange. Advertise /128 loopback IPv6 address and Host VLAN IPv6 subnets 38

39 IP Fabric Validated Designs Check the BGP neighbors. The leaf must be peering with all spines within the PoD for IPv4 address-family route exchange. As shown below, for a dual or vlag ToR, check the neighbors on both nodes. pod1-leaf1# show ip bgp summary BGP4 Summary Router ID: Local AS Number: Confederation Identifier: not configured Confederation Peers: Maximum Number of IP ECMP Paths Supported for Load Sharing: 8 Number of Neighbors Configured: 6, UP: 4 Number of Routes Installed: 1443, Uses bytes Number of Routes Advertising to All Neighbors: 2407 (1368 entries), Uses bytes Number of Attribute Entries Installed: 32, Uses 3328 bytes Neighbor Address AS# State Time Rt:Accepted Filtered Sent ToSend ESTAB 1h46m30s ESTAB 1d23h58m ESTAB 1d23h58m ESTAB 1d23h58m pod1-leaf1# show ip bgp summary BGP4 Summary Router ID: Local AS Number: Confederation Identifier: not configured Confederation Peers: Maximum Number of IP ECMP Paths Supported for Load Sharing: 8 Number of Neighbors Configured: 5, UP: 4 Number of Routes Installed: 1420, Uses bytes Number of Routes Advertising to All Neighbors: 1948 (908 entries), Uses bytes Number of Attribute Entries Installed: 17, Uses 1768 bytes Neighbor Address AS# State Time Rt:Accepted Filtered Sent ToSend fdf8:10:0:1:1:: ESTAB 1h47m36s fdf8:10:0:1:2:: ESTAB 1d23h59m fdf8:10:0:1:3:: ESTAB 1d23h59m fdf8:10:0:1:4:: ESTAB 1d23h59m Super-Spine Configuration This is applicable to all super-spines. Peer groups are used to simplify the configuration. Super-spines peer with the spines in each PoD and with the border leafs. In the validated design, the super-spines connect to two PoDs. The configuration may be replicated for multiple PoDs. Super-spines connect to the spines in each PoD and to the border leafs using fabric links. Create a peer group for each PoD for IPv4 peering, and exchange IPv4 routes: pod1_spine-group Add the directly connected neighbor IP addresses of all spines in PoD1 to this group. pod2_spine-group Add the directly connected neighbor IP addresses of all spines in PoD2 to this group. Create a peer group for each PoD for IPv6 peering, and exchange IPv6 routes: pod1_spine-group-ipv6 Add the directly connected neighbor IPv6 addresses of all spines in PoD1 to this group. pod2_spine-group-ipv6 Add the directly connected neighbor IPv6 addresses of all spines in PoD2 to this group. Create a separate peer group for the pair of edge leafs: edge-group and edge-group-ipv6. Add the directly connected neighbor IPv4 and IPv6 addresses of the edge leafs to these groups. Enable MD5 authentication to all peer groups. Enable BFD on all peer groups. 39

40 IP Fabric Validated Designs 40 router bgp local-as capability as4- fast-external-fallover neighbor edge-group peer-group neighbor edge-group remote-as neighbor edge-group password $9$BfpeY2eMFj4uKynSwFRgWA== neighbor edge-group bfd neighbor peer-group edge-group neighbor peer-group edge-group neighbor edge-group-ipv6 peer-group neighbor edge-group-ipv6 remote-as neighbor edge-group-ipv6 password $9$BfpeY2eMFj4uKynSwFRgWA== neighbor edge-group-ipv6 bfd neighbor fdf8:10:2:1::17 peer-group edge-group-ipv6 neighbor fdf8:10:2:1::19 peer-group edge-group-ipv6 neighbor pod1-spine-group peer-group neighbor pod1-spine-group remote-as neighbor pod1-spine-group password $9$BfpeY2eMFj4uKynSwFRgWA== neighbor pod1-spine-group bfd neighbor peer-group pod1-spine-group neighbor peer-group pod1-spine-group neighbor peer-group pod1-spine-group neighbor peer-group pod1-spine-group neighbor pod1-spine-group-ipv6 peer-group neighbor pod1-spine-group-ipv6 remote-as neighbor pod1-spine-group-ipv6 password $9$BfpeY2eMFj4uKynSwFRgWA== neighbor pod1-spine-group-ipv6 bfd neighbor fdf8:10:0:51:1::2 peer-group pod1-spine-group-ipv6 neighbor fdf8:10:0:52:1::2 peer-group pod1-spine-group-ipv6 neighbor fdf8:10:0:53:1::2 peer-group pod1-spine-group-ipv6 neighbor fdf8:10:0:54:1::2 peer-group pod1-spine-group-ipv6 neighbor pod2-spine-group peer-group neighbor pod2-spine-group remote-as neighbor pod2-spine-group password $9$BfpeY2eMFj4uKynSwFRgWA== neighbor pod2-spine-group bfd neighbor peer-group pod2-spine-group neighbor peer-group pod2-spine-group neighbor peer-group pod2-spine-group neighbor peer-group pod2-spine-group neighbor pod2-spine-group-ipv6 peer-group neighbor pod2-spine-group-ipv6 remote-as neighbor pod2-spine-group-ipv6 password $9$BfpeY2eMFj4uKynSwFRgWA== neighbor pod2-spine-group-ipv6 bfd neighbor fdf8:10:2:1::9 peer-group pod2-spine-group-ipv6 neighbor fdf8:10:2:1::11 peer-group pod2-spine-group-ipv6 neighbor fdf8:10:2:1::13 peer-group pod2-spine-group-ipv6 neighbor fdf8:10:2:1::15 peer-group pod2-spine-group-ipv6 address-family ipv4 unicast network /32 neighbor edge-group -peer-as-check neighbor pod2-spine-group -peer-as-check neighbor pod1-spine-group -peer-as-check maximum-paths 8 graceful-restart address-family ipv6 unicast network fdf8::10:124:1:1/128 neighbor edge-group-ipv6 activate neighbor edge-group-ipv6 -peer-as-check neighbor pod2-spine-group-ipv6 activate neighbor pod2-spine-group-ipv6 -peer-as-check neighbor pod1-spine-group-ipv6 activate neighbor pod1-spine-group-ipv6 -peer-as-check maximum-paths 8 graceful-restart Peer-group config pointing to edge leafs Peer-group config pointing to spines in PoD1 Peer-group config pointing to Spines in PoD2 Enable ipv4 Address-Family Advertise loopback interface s IPv4 address. Enable graceful-restart. Enable ipv6 Address-Family Advertise IPv6 address of the loopback interface. Activate the peer-groups configured for IPv6 routes exchange. Enable graceful-restart

41 IP Fabric Validated Designs Border/Edge Leaf Configuration Border leafs are connected to super-spines in a 5-stage fabric and to WAN edge devices. Note the following key points when configuring border leafs in a 5-stage fabric: Border leafs peer with each super-spine using fabric links. They exchange both IPv4 and IPv6 routes with the super-spines. Border leafs are also connected to the WAN edge devices. In this validated design, we have ebgp peering between the border leafs and WAN edges. Some deployments may require ibgp or simple IGP peering with the WAN edge depending on the scale. (The WAN edge may simply advertise a default route.) Configure two peer groups, super-spine and super-spine-ipv6, for IPv4 and IPv6 peers. Add the directly connected neighbor addresses of the super-spines into these groups. For WAN edge connectivity, configure two peer groups, wan-group and wan-group-ipv6, for IPv4 and IPv6 peering. Add the directly connected neighbor addresses of the WAN edge devices into these groups. Enable MD5 authentication and BFD on the IPv4 and IPv6 super-spine peer groups. Note that the super-spine peer groups are in the global VRF in the configuration below. They may be configured under a separate VRF as discussed in MCT on SLX 9540 Edge Leafs on page

edge-leaf1

router bgp
 local-as
 capability as4-enable
 neighbor super-spine peer-group
 neighbor super-spine remote-as
 neighbor super-spine description IPv4 peering to super-spines
 neighbor super-spine password 2 $MlVzZCFAbg==
 neighbor super-spine bfd
 neighbor peer-group super-spine
 neighbor peer-group super-spine
 neighbor super-spine-ipv6 peer-group
 neighbor super-spine-ipv6 remote-as
 neighbor super-spine-ipv6 description IPv6 peering to super-spines
 neighbor super-spine-ipv6 password 2 $MlVzZCFAbg==
 neighbor super-spine-ipv6 bfd
 neighbor fdf8:10:2:1::16 peer-group super-spine-ipv6
 neighbor fdf8:10:3:1::16 peer-group super-spine-ipv6
 neighbor wan-group peer-group
 neighbor wan-group remote-as
 neighbor wan-group description IPv4 peering to WAN Edge
 neighbor peer-group wan-group
 neighbor peer-group wan-group
 neighbor wan-group-ipv6 peer-group
 neighbor wan-group-ipv6 remote-as
 neighbor wan-group-ipv6 description IPv6 peering to WAN Edge
 neighbor fdf8:192:168:1::2 peer-group wan-group-ipv6
 neighbor fdf8:192:168:1::2 peer-group wan-group-ipv6
 address-family ipv4 unicast
  network /32
  neighbor wan-group enable-peer-as-check
  neighbor super-spine enable-peer-as-check
  maximum-paths 8
  graceful-restart
 address-family ipv6 unicast
  neighbor wan-group-ipv6 activate
  neighbor wan-group-ipv6 enable-peer-as-check
  neighbor super-spine-ipv6 activate
  neighbor super-spine-ipv6 enable-peer-as-check
  maximum-paths 8
  graceful-restart

In this configuration, the super-spine peer groups point to the super-spines and the wan-group peer groups point to the WAN edges; both the IPv4 and IPv6 address families are enabled, and the wan-group-ipv6 and super-spine-ipv6 peer groups are activated for IPv6 route exchange.

Deployment Model-2: ebgp Configuration for 3-Stage Clos Fabric

Refer to Figure 8 on page 19 for topology information. The control-plane routing configuration for a 3-stage fabric is very similar to that of the 5-stage fabric, except that the spines and border leafs do not peer with super-spines; the border leafs are directly connected to the spines. Note the following key points:

• In the validated design, a 4-byte private AS range is used.
• Each leaf is in a private AS. A vlag or MCT pair leaf (dual or redundant ToR) is considered one leaf even though the control plane on each node is independent; both devices in the pair are in the same private AS.
• All spines within a PoD are in one private AS.
• All super-spines are in one private AS.
• All border leafs are in one private AS.
• BGP peering between two nodes is established over the fabric link between them. The fabric link IPv4 and IPv6 addresses are used as the source and destination addresses for the BGP sessions between the two nodes.
• ebgp IPv4 and IPv6 peerings with MD5 authentication are established between the layers of nodes. Multiple-AFI exchange over one peering session (either IPv4 or IPv6) is planned in an upcoming software release (see footnote 3). Until then, separate peerings are needed: an IPv4 peering to exchange IPv4 NLRI and an IPv6 peering to exchange IPv6 NLRI.
• Use peer groups for configuration that is common to the BGP peers.
• Enable MD5 authentication to all BGP neighbors.
• Enable both IPv4 and IPv6 address families.
• Advertise the IPv4 and IPv6 addresses of the loopback interface into the respective address families. This is useful for debugging and troubleshooting reachability between the nodes.
• Activate the IPv6 peer groups under the IPv6 address family.
• Enable MD5 authentication on all peer groups.
• Enable BFD on all peer groups.

Spine Configuration

All spines within a PoD have a similar configuration for the IPv4 underlay. In a 3-stage fabric, spines peer with the leafs and the border leafs. Peer groups are used to simplify the configuration and for efficiency in BGP update processing.

• Configure the directly connected leafs' IPv4 addresses into one peer group: leaf-group.
• Configure the directly connected leafs' IPv6 addresses into one peer group: leaf-group-ipv6.
• Configure the edge leafs' IPv4 fabric link addresses into one peer group: edge-group.
• Configure the edge leafs' IPv6 fabric link addresses into one peer group: edge-group-ipv6.

An illustrative sketch of the resulting AS allocation follows, before the sample spine configuration.

3. Exchanging both IPv4 and IPv6 AFIs over a single BGP peering session is not supported; it is planned in upcoming releases of NOS and SLX-OS.
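Before the spine configuration, it can help to see the AS-numbering scheme as data. The sketch below is a minimal Python illustration (not part of the validated design) that allocates 4-byte private AS numbers from the RFC 6996 private range starting at 4200000000: one AS per leaf (shared by both nodes of a vlag or MCT pair), one AS shared by all spines in the PoD, and one AS for the border leafs. The base value and the device names in the example are assumptions. This per-leaf allocation is exactly what forces the per-neighbor remote-as statements in the spine configuration that follows.

# Minimal sketch (not part of the validated design): allocate 4-byte private AS
# numbers for a 3-stage PoD. The base value and device names are assumptions.
PRIVATE_4BYTE_BASE = 4200000000  # start of the RFC 6996 private 4-byte AS range

def allocate_asns(leafs, base=PRIVATE_4BYTE_BASE):
    """Return a per-device ASN plan: spines share one AS, border leafs share one,
    and every leaf (a vlag/MCT pair counts as a single leaf) gets its own AS."""
    plan = {"spines": base, "border-leafs": base + 1}
    for i, leaf in enumerate(leafs, start=2):
        plan[leaf] = base + i
    return plan

if __name__ == "__main__":
    # Hypothetical names: four leafs in PoD1; a vlag pair shares a single entry
    # (one AS for both nodes of the pair).
    pod1_leafs = ["leaf pair 1 (vlag)", "leaf 2", "leaf 3", "leaf 4"]
    for device, asn in allocate_asns(pod1_leafs).items():
        print(f"{device:24} AS {asn}")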

POD1-Spine1

router bgp
 local-as
 capability as4-enable
 fast-external-fallover
 neighbor leaf-group1 peer-group
 neighbor leaf-group1 description To Leaf
 neighbor leaf-group1 password $9$MCgKGaNt6OASX68/7TC6Lw==
 neighbor leaf-group1 bfd
 neighbor remote-as
 neighbor peer-group leaf-group1
 neighbor remote-as
 neighbor peer-group leaf-group1
 neighbor remote-as
 neighbor peer-group leaf-group1
 neighbor remote-as
 neighbor peer-group leaf-group1
 neighbor leaf-group1-ipv6 peer-group
 neighbor leaf-group1-ipv6 description To Leaf AS-6500X.1
 neighbor leaf-group1-ipv6 password $9$MCgKGaNt6OASX68/7TC6Lw==
 neighbor leaf-group1-ipv6 bfd
 neighbor fdf8:10:0:1::1 remote-as
 neighbor fdf8:10:0:1::1 peer-group leaf-group1-ipv6
 neighbor fdf8:10:0:1::3 remote-as
 neighbor fdf8:10:0:1::3 peer-group leaf-group1-ipv6
 neighbor fdf8:10:0:1::5 remote-as
 neighbor fdf8:10:0:1::5 peer-group leaf-group1-ipv6
 neighbor fdf8:10:0:1::7 remote-as
 neighbor fdf8:10:0:1::7 peer-group leaf-group1-ipv6
 neighbor edge-group peer-group
 neighbor edge-group remote-as
 neighbor edge-group password $9$BfpeY2eMFj4uKynSwFRgWA==
 neighbor edge-group bfd
 neighbor peer-group edge-group
 neighbor peer-group edge-group
 neighbor edge-group-ipv6 peer-group
 neighbor edge-group-ipv6 remote-as
 neighbor edge-group-ipv6 password $9$BfpeY2eMFj4uKynSwFRgWA==
 neighbor edge-group-ipv6 bfd
 neighbor fdf8:10:2:1::17 peer-group edge-group-ipv6
 neighbor fdf8:10:2:1::19 peer-group edge-group-ipv6
 address-family ipv4 unicast
  neighbor leaf-group1 enable-peer-as-check
  neighbor edge-group enable-peer-as-check
  maximum-paths 8
  graceful-restart
 address-family ipv6 unicast
  neighbor leaf-group1-ipv6 activate
  neighbor leaf-group1-ipv6 enable-peer-as-check
  neighbor edge-group-ipv6 activate
  neighbor edge-group-ipv6 enable-peer-as-check
  maximum-paths 8
  graceful-restart

In this configuration, local-as is the 4-byte AS number of this device. A peer group (leaf-group1) is configured for IPv4 exchange with the leafs, with MD5 authentication and BFD enabled, and the directly connected leafs' IPv4 addresses are added to it. Because each leaf is in a different AS, the remote AS must be specified separately per neighbor rather than on the peer group; in the case of a dual ToR, both nodes are in one AS. A second peer group (leaf-group1-ipv6) is configured for IPv6 exchange with the leafs, with MD5 authentication, and the IPv6 addresses of the directly connected leafs are added to it. Two peer groups are configured for the edge leafs, one for IPv4 and one for IPv6. The IPv4 address family is enabled with graceful restart, and the IPv6 address family is enabled with the peer groups created for IPv6 exchange explicitly activated.
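The leaf neighbors configured on the spine above are simply the leafs' ends of the point-to-point fabric links. As a small illustration only (not part of the validated design, and using made-up link prefixes), the Python sketch below derives the two endpoint addresses of a /31 fabric link; these /31s are also what the ToR route map in the next section filters out of the leaf advertisements.

# Illustrative only: derive the two endpoint addresses of /31 fabric links.
# The link prefixes below are assumptions, not addresses from this design.
import ipaddress

def p2p_endpoints(prefix: str):
    """Return (spine_end, leaf_end) for a /31 point-to-point fabric link."""
    net = ipaddress.ip_network(prefix)
    if net.prefixlen != 31:
        raise ValueError(f"{prefix} is not a /31 link")
    low, high = list(net)          # a /31 has exactly two usable addresses
    return str(low), str(high)     # convention assumed: spine takes the low address

if __name__ == "__main__":
    for link in ["10.10.1.0/31", "10.10.1.2/31", "10.10.1.4/31"]:
        spine_ip, leaf_ip = p2p_endpoints(link)
        print(f"{link}: spine {spine_ip} <-> leaf {leaf_ip} (BGP neighbor on the spine)")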

Leaf Configuration

All leafs within a PoD have a configuration similar to that shown in the 5-stage model.

• Configure the directly connected spines' fabric link IPv4 addresses into one peer group: spine-group.
• Configure the directly connected spines' fabric link IPv6 addresses into one peer group: spine-group-ipv6.
• For the vlag or MCT pair leaf, the BGP configuration is applied independently on each node.
• Advertise the IPv4 and IPv6 addresses of the loopback interface (for debugging purposes) and the IPv4 and IPv6 server-facing subnets. Use a route map to exclude the addresses of the fabric links. A sample route map is given below; this configuration can be modified according to the deployment requirements. (A small sketch illustrating the prefix-list logic appears after the leaf configuration below.)

IPv4 Route-Map Configuration

ip prefix-list fabric_links_ip seq 10 permit 0.0.0.0/0 ge 31 le 31
route-map ToR-map deny 10
 match ip address prefix-list fabric_links_ip
route-map ToR-map permit 20

IPv6 Route-Map Configuration

ipv6 prefix-list fabric_links_ipv6 seq 10 permit ::/0 ge 127 le 127
route-map ToR-map-ipv6 deny 10
 match ipv6 address prefix-list fabric_links_ipv6
route-map ToR-map-ipv6 permit 20

POD1-leaf1-1

router bgp
 local-as
 capability as4-enable
 neighbor spine-group peer-group
 neighbor spine-group remote-as
 neighbor spine-group description connected to 4 spines
 neighbor spine-group password 2 $PVNHITJVPWQ=
 neighbor peer-group spine-group
 neighbor peer-group spine-group
 neighbor peer-group spine-group
 neighbor peer-group spine-group
 neighbor spine-group-ipv6 peer-group
 neighbor spine-group-ipv6 remote-as
 neighbor spine-group-ipv6 description connected to 4 spines
 neighbor spine-group-ipv6 password 2 $MlVzZCFAbg==
 neighbor fdf8:10:0:1:: peer-group spine-group-ipv6
 neighbor fdf8:10:0:2:: peer-group spine-group-ipv6
 neighbor fdf8:10:0:3:: peer-group spine-group-ipv6
 neighbor fdf8:10:0:4:: peer-group spine-group-ipv6
 address-family ipv4 unicast
  redistribute connected route-map ToR-map
  neighbor spine-group enable-peer-as-check
  maximum-paths 8
  graceful-restart
 address-family ipv6 unicast
  redistribute connected route-map ToR-map-ipv6
  neighbor spine-group-ipv6 activate
  neighbor spine-group-ipv6 enable-peer-as-check
  maximum-paths 8
  graceful-restart

In this configuration, one peer group groups the spines' IPv4 addresses and a second groups the spines' IPv6 addresses, both with MD5 authentication enabled. The IPv4 address family is enabled, redistribute connected is used to advertise the VLAN subnets, and graceful restart is enabled; the same is repeated for the IPv6 address family.
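The intent of the route map above is that only the /31 fabric-link prefixes are denied, while the /32 loopback and the server-facing subnets pass. The Python sketch below is an illustration only: it mimics the ge/le matching semantics of the "0.0.0.0/0 ge 31 le 31" prefix-list rather than calling any device API, and the prefixes used for the check are made up.

# Illustration only: reproduce the "0.0.0.0/0 ge 31 le 31" prefix-list logic
# and show which prefixes the ToR-map would deny. Prefixes are placeholders.
import ipaddress

def matches_fabric_links(prefix: str) -> bool:
    """True if the prefix matches 'permit 0.0.0.0/0 ge 31 le 31' (i.e. any /31)."""
    return ipaddress.ip_network(prefix).prefixlen == 31

def tor_map_action(prefix: str) -> str:
    """deny 10 drops fabric links; permit 20 lets everything else through."""
    return "deny (fabric link)" if matches_fabric_links(prefix) else "permit"

if __name__ == "__main__":
    for p in ["10.10.1.0/31",      # fabric link -> denied
              "172.16.10.0/24",    # server-facing VLAN subnet -> advertised
              "10.100.0.1/32"]:    # loopback -> advertised
        print(f"{p:18} -> {tor_map_action(p)}")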

Border/Edge Leaf Configuration

The border leaf configuration in the 3-stage fabric follows the same pattern as in the 5-stage model; the difference is that the fabric-facing peers are the spines rather than the super-spines. Note the following key points:

• Border leafs peer with each of the spines using the fabric links.
• Border leafs are also connected to the WAN edge devices. In this validated design, we have ebgp peering between the border leafs and the WAN edges. Some deployments may require ibgp or simple IGP peering with the WAN edge, depending on the scale. (The WAN edge may simply advertise a default route for external reachability.)
• Configure two peer groups for the IPv4 and IPv6 fabric peers (super-spine and super-spine-ipv6 in the sample configuration below). Add the directly connected neighbor addresses of the spines into these groups.
• For WAN edge connectivity, configure two peer groups, wan-group and wan-group-ipv6, for IPv4 and IPv6 peering. Add the directly connected neighbor addresses of the WAN edge devices into these groups.
• Enable MD5 authentication and BFD on the fabric-facing peer groups.
• Enable both IPv4 and IPv6 address families.
• Advertise the IPv4 and IPv6 addresses of the loopback interface into the respective address families.

Note that the fabric-facing peer groups are in the global VRF in the configuration below. They may be configured under a separate VRF, as discussed in MCT on SLX 9540 Edge Leafs.

router bgp
 local-as
 capability as4-enable
 neighbor super-spine peer-group
 neighbor super-spine remote-as
 neighbor super-spine description IPv4 peering to super-spines
 neighbor super-spine password 2 $MlVzZCFAbg==
 neighbor super-spine bfd
 neighbor peer-group super-spine
 neighbor peer-group super-spine
 neighbor super-spine-ipv6 peer-group
 neighbor super-spine-ipv6 remote-as
 neighbor super-spine-ipv6 description IPv6 peering to super-spines
 neighbor super-spine-ipv6 password 2 $MlVzZCFAbg==
 neighbor super-spine-ipv6 bfd
 neighbor fdf8:10:2:1::16 peer-group super-spine-ipv6
 neighbor fdf8:10:3:1::16 peer-group super-spine-ipv6
 neighbor wan-group peer-group
 neighbor wan-group remote-as
 neighbor wan-group description IPv4 peering to WAN Edge
 neighbor peer-group wan-group
 neighbor peer-group wan-group
 neighbor wan-group-ipv6 peer-group
 neighbor wan-group-ipv6 remote-as
 neighbor wan-group-ipv6 description IPv6 peering to WAN Edge
 neighbor fdf8:192:168:1::2 peer-group wan-group-ipv6
 neighbor fdf8:192:168:1::2 peer-group wan-group-ipv6
 address-family ipv4 unicast
  neighbor super-spine enable-peer-as-check
  neighbor wan-group enable-peer-as-check
  maximum-paths 8
  graceful-restart
 address-family ipv6 unicast
  neighbor wan-group-ipv6 activate
  neighbor wan-group-ipv6 enable-peer-as-check
  neighbor super-spine-ipv6 activate
  neighbor super-spine-ipv6 enable-peer-as-check
  maximum-paths 8
  graceful-restart

In this configuration, the super-spine peer groups point to the fabric side and the wan-group peer groups point to the WAN edges; both the IPv4 and IPv6 address families are enabled, and the wan-group-ipv6 and super-spine-ipv6 peer groups are activated for IPv6 route exchange.

Illustration Examples

In this section, we illustrate the use cases using sections of the validated design network topology, to help the reader further understand the deployment scenarios.

Network Reachability Between Racks and PoDs

Figure 9 shows a section of the topology used to illustrate the following with configuration and verification. For this example, two racks are shown in PoD1 and one rack is shown in PoD2:

• In PoD1, Rack1 has two redundant vlag pair ToRs, leaf1-1 and leaf1-2.
• In PoD1, Rack2 has two redundant vlag pair ToRs, leaf1-1 and leaf1-2.
• In PoD2, Rack6 has an individual or nonredundant ToR, Leaf6.

Under each leaf, one server VLAN is shown with the respective VE interface IP address and the FHRP/VIP address in the case of a
vlag pair. This example shows the various CLI output required to verify server subnet reachability between the leafs. Note that there is no manipulation of BGP paths, such as route policies or route aggregation. For additional information, refer to Design Considerations on page 51.

FIGURE 9 Connectivity Between the Racks and PoDs
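Beyond the device CLI output referenced above, a quick end-to-end check of the same reachability can be scripted from a host in one of the server VLANs. The sketch below is purely illustrative and not part of the validated design: the target addresses are placeholders, and it simply shells out to the system ping command (Linux-style flags assumed).

# Illustrative reachability sweep from a server in one rack toward the
# server-VLAN gateways/VIPs in other racks. Target addresses are placeholders.
import subprocess

TARGETS = {
    "PoD1 Rack1 VIP": "172.16.10.1",
    "PoD1 Rack2 VIP": "172.16.20.1",
    "PoD2 Rack6 gateway": "172.16.60.1",
}

def ping(addr: str, count: int = 2, timeout_s: int = 2) -> bool:
    """Return True if the address answers ICMP echo (Linux-style ping flags assumed)."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout_s), addr],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for name, addr in TARGETS.items():
        status = "reachable" if ping(addr) else "UNREACHABLE"
        print(f"{name:22} {addr:15} {status}")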
