EXTREME VALIDATED DESIGN: Extreme VCS Fabric with IP Storage


April 2018

© 2018, Extreme Networks, Inc. All Rights Reserved. Extreme Networks and the Extreme Networks logo are trademarks or registered trademarks of Extreme Networks, Inc. in the United States and/or other countries. All other names are the property of their respective owners. For additional information on Extreme Networks trademarks, please see the Extreme Networks trademarks page. Specifications and product availability are subject to change without notice.

© 2017, Brocade Communications Systems, Inc. All Rights Reserved. Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other countries. Other brands, product names, or service names mentioned of Brocade Communications Systems, Inc. are listed at brocade-legal-intellectual-property/brocade-legal-trademarks.html. Other marks may belong to third parties.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government. The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the accuracy of this document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that accompany it. The product described by this document may contain open source software covered by the GNU General Public License or other open source license agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and obtain a copy of the programming source code, please visit the Brocade website.

Contents

List of Figures
Preface
  Extreme Validated Designs
  Purpose of This Document
  Target Audience
  About the Author
  Document History
  About Extreme Networks
Introduction
Technology Overview
  Benefits
  Terminology
  Virtual Cluster Switching
    VCS in Brief
    VCS Fabric
    Data Frames Inside a VCS Fabric
    VCS Traffic Forwarding
    VCS Services
  VCS Deployment Models
    Single-POD VCS Fabric
    Multi-VCS Fabric
  IP Storage
    Data Center Bridging
    Auto-NAS
    CEE-Map
    Dynamic Packet Buffering
    iSCSI Initiators and Targets
  IP Storage Deployment Models
    Dedicated IP Storage
    Hybrid IP Storage
    Shared IP Storage
VCS Validated Designs
  Hardware and Software Matrix
  VCS Fabric Configuration
    Fabric Bringup
    Edge Ports
    First Hop Redundancy
    Multitenancy
  IP Storage Configuration
    Dynamic Shared Buffer
    DCBX Configuration for iSCSI
    Priority Flow Control and ETS
    Auto-NAS
    CEE-MAP Configuration
    Routed vs Layer 2 Switching for Storage Traffic
    Jumbo MTU
    Storage Initiator/Target Configuration
  Edge Services Configuration
  Deployment 1: Single-POD VCS Fabric with Attached Shared IP Storage
    Fabric Wide Configuration
    Leaf Configuration
    Spine Configuration
  Deployment 2: Single-POD VCS Fabric with Dedicated IP Storage VCS
    Storage VCS
    Server VCS
  Deployment 3: Multi-VCS Fabric with Shared IP Storage VCS
    Storage VCS
    Multi-VCS Converged Fabric
Illustration Examples
  Example 1: FVG in a 3-Stage Clos Fabric
    Configuration
    Verification
  Example 2: Virtual Fabric Across Disjoint VLANs on Two ToRs in a 3-Stage Clos
    Configuration
    Verification
  Example 3: Virtual Fabric per Interface VLAN-Scope on ToRs in a 3-Stage Clos
    Configuration
    Verification
  Example 4: VM-Aware Network Automation
    Configuration and Verification
    Virtual Machine Move
  Example 5: AMPP
    Configuration and Verification for VLAN Virtual Machine Moves
    Configuration and Verification for Virtual Fabric VLAN
  Example 6: Virtual Fabric Extension
    Configuration
    Verification
  Example 7: Auto-Fabric
    Configuration
    Verification
Design Considerations
References

List of Figures

  Figure 1   Components of a VCS Fabric
  Figure 2   vLAG Connectivity Options
  Figure 3   TRILL Data Format
  Figure 4   Layer 2 Unicast Forwarding
  Figure 5   Layer 3 Forwarding with VRRP-E
  Figure 6   Layer 3 Intersubnet Forwarding on VCS Fabric
  Figure 7   Multicast Tree for BUM Traffic
  Figure 8   VEB on Virtualized Server Environment
  Figure 9   SVF in a Cloud Service-Provider Environment
  Figure 10  SVF in a Highly Virtualized Environment
  Figure 11  VxLAN Packet Format
  Figure 12  Packet Forwarding over a VxLAN-Based DCI Implemented with VF-Extension
  Figure 13  Single-POD VCS Fabric
  Figure 14  Multi-VCS Fabric Interconnected Through Layer 2 vLAG
  Figure 15  Multi-VCS Fabric Interconnected Through VxLAN over L3 Links
  Figure 16  PFC and ETS in Action over a DCBX-Capable Edge Port on a VDX Switch
  Figure 17  Dedicated Storage Design with VCS
  Figure 18  Hybrid IP Storage with Single-Storage vLAG from ToR
  Figure 19  Hybrid IP Storage with Multiple-Storage vLAG from ToR
  Figure 20  Single-POD VCS with Attached IP Storage Device
  Figure 21  Single POD with Shared IP Storage VCS
  Figure 22  Multi-VCS Using vLAG with Shared IP Storage
  Figure 23  Multi-VCS Using VxLAN with Shared IP Storage
  Figure 24  VCS Fabric
  Figure 25  vLAG
  Figure 26  VRRP-E in 3-Stage Clos VCS Fabric
  Figure 27  Single-POD DC with Attached Storage
  Figure 28  Data Center with Dedicated Storage VCS
  Figure 29  Multi-VCS Fabric with Shared Storage VCS
  Figure 30  FVG Topology
  Figure 31  VF Across Disjoint VLANs
  Figure 32  VF Per-Interface VLAN Scope
  Figure 33  Virtual Fabric Extension

Preface

- Extreme Validated Designs
- Purpose of This Document
- Target Audience
- About the Author
- Document History

Extreme Validated Designs

Helping customers consider, select, and deploy network solutions for current and planned needs is our mission. Extreme Validated Designs offer a fast track to success by accelerating that process. Validated designs are repeatable reference network architectures that have been engineered and tested to address specific use cases and deployment scenarios. They document systematic steps and best practices that help administrators, architects, and engineers plan, design, and deploy physical and virtual network technologies. Leveraging these validated network architectures accelerates deployment speed, increases reliability and predictability, and reduces risk.

Extreme Validated Designs incorporate network and security principles and technologies across the ecosystem of service provider, data center, campus, and wireless networks. Each Extreme Validated Design provides a standardized network architecture for a specific use case, incorporating technologies and feature sets across Extreme products and partner offerings. All Extreme Validated Designs follow best-practice recommendations and allow for customer-specific network architecture variations that deliver additional benefits. The variations are documented and supported to provide ongoing value, and all Extreme Validated Designs are continuously maintained to ensure that every design remains supported as new products and software versions are introduced. By accelerating time-to-value, reducing risk, and offering the freedom to incorporate creative, supported variations, these validated network architectures provide a tremendous value-add for building and growing a flexible network infrastructure.

Purpose of This Document

This Extreme validated design provides guidance for designing and implementing an Extreme VCS fabric with IP storage in a data center network using Extreme hardware and software. It details the Extreme reference architecture for deploying VCS-based data centers with IP storage and VxLAN interconnectivity. Note that not all aspects of the Extreme VCS fabric solution, such as automation practices, zero-touch provisioning, and monitoring, are covered in this document; future versions of this document are planned to include them. The design practices documented here follow the best-practice recommendations, but variations to the design are supported as well.

Target Audience

This document is written for Extreme systems engineers, partners, and customers who design, implement, and support data center networks. It is intended for experienced data center architects and engineers and assumes that the reader has a good understanding of data center switching and routing features.

About the Author

Eldho Jacob, Principal Engineer, System and Solution Engineering.

The author would like to acknowledge the following team for their technical guidance in developing this validated design:

- Abdul Khader: Director
- Krish Padmanabhan: Sr Principal Engineer
- Daniel DeBacker: Sr Principal Systems Engineer
- Sadashiv Kudlamath: Sr Product Manager
- Vasanthi Adusumalli: Sr SW Engineer

Document History

Date          Description
December 21   Initial version.
January       Updated document to reflect Extreme's acquisition of Brocade's data center networking business.
April 2018    Rebranded document to Extreme Networks.

About Extreme Networks

Extreme Networks (NASDAQ: EXTR) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection. Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility. To help ensure a complete solution, Extreme Networks partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.extremenetworks.com)

Introduction

This document describes converged data center network designs for storage and Ethernet networks, integrating Extreme Networks VCS technology with various IP storage solutions. The configurations and design practices documented here are fully validated and conform to the Extreme data center fabric reference architectures. The intention of this Extreme validated design document is to provide reference configurations and document best practices for building converged data center networks with VCS fabric and IP storage using Extreme VDX switches.

This document describes the following architectures:

- Single-POD VCS data center with dedicated and shared IP storage designs
- Multi-POD VCS data center with shared IP storage

Apart from these converged data center architectures, the paper also covers the innovative features that VCS brings to the data center fabric.

Technology Overview

- Benefits
- Terminology
- Virtual Cluster Switching
- VCS Deployment Models
- IP Storage
- IP Storage Deployment Models

Data center networks evolved from a traditional three-tier architecture to the flat spine-leaf/Clos architecture to address the requirements of newer applications and fluid workloads. In addition to scalability, high availability, and greater bandwidth, other prime architectural requirements in a data center network are workload mobility, multitenancy, network automation, and CapEx/OpEx reduction. Using traditional Layer 2 or Layer 3 technologies to meet these requirements involves significant compromises in network architecture and a higher OpEx to manage these networks. What is needed is a fabric that is as easy to provision as a traditional Layer 2 network and as non-blocking as a Layer 3 network. VCS technology merges the best of both Layer 2 and Layer 3 networks, along with a host of other Ethernet fabric innovations and IP storage features, to provide a purpose-built fabric for a converged data and storage network.

This white paper covers VCS and IP storage technologies (iSCSI and NAS) and provides various design options for a converged Extreme data center fabric architecture. In addition to VCS and IP storage deployment models, this document covers the following VCS features:

- Virtual Fabric for building a scalable multitenant data center
- VxLAN-based DCI connectivity using Virtual Fabric extension
- VM-aware network automation
- Various FHRP and fabric bring-up options

Benefits

Some of the key benefits of using VCS technology in the data center:

- Bridge-aware routing: VCS technology conforms to the TRILL principles and brings the benefits of TRILL bridge-aware routing, such as non-blocking links, TTL for loop avoidance, faster failover, workload mobility, and scalability, to the fabric.
- Topology agnostic: Can be provisioned as mesh, star, Clos, or any other desired topology as per the network requirement. This document covers the Clos architecture, since it is the most prevalent in data centers.
- Self-forming fabric: VCS fabrics are self-forming, with no user intervention needed to set up the fabric other than physically cabling the switches.
- Zero-touch provisioning: VCS fabrics are ZTP capable, enabling users to bring up the fabric right out of the box. DHCP Automatic Deployment (DAD) and the auto-fabric feature enable this.
- Plug-and-play model: VCS enables a fluid, scalable, true plug-and-play fabric for attaching servers/workloads or provisioning new nodes to the fabric through ISL and vLAG technologies.
- Unified fabric management: The VCS fabric can be provisioned to give the user the visibility to configure, manage, and control the entire data center fabric from a single node.

- Virtual machine aware network automation: The VCS fabric enables hypervisor-agnostic auto-provisioning of server-connected ports through the AMPP feature.
- Efficient load balancing: Load balancing in a VCS fabric is available from Layer 1 through Layer 3. Per-packet load balancing is available at Layer 1, and VCS link-state routing provides unequal-cost load balancing for efficient usage of all links in the fabric.
- Scalable multitenant fabric: Supports L2 multitenancy beyond the traditional 12-bit VLAN space with the Virtual Fabric feature, based on the TRILL Fine-Grained Labeling standard.
- Storage support: AutoQoS, buffering, and DCBX support for IP storage technologies, along with end-to-end FCoE, enable a converged network for storage and server data traffic on the VCS fabric.
- Seamless integration with VxLAN: The VCS fabric seamlessly integrates with VxLAN technology for both inter-DC and intra-DC connections using the VF extension feature.

Terminology

Term      Description
ACL       Access Control List.
AMPP      Automatic Migration of Port Profiles.
ARP       Address Resolution Protocol.
BGP       Border Gateway Protocol.
BPDU      Bridge Protocol Data Unit.
BUM       Broadcast, Unknown unicast, and Multicast.
CNA       Converged Network Adapter.
CLI       Command-Line Interface.
CoS       Class of Service for Layer 2.
DCI       Data Center Interconnect.
ELDP      Extreme Link Discovery Protocol.
ELD       Edge Loop Detection protocol.
ECMP      Equal Cost Multi-Path.
EVPN      Ethernet Virtual Private Network.
IP        Internet Protocol.
ISL       Inter-Switch Link.
MAC       Media Access Control.
MPLS      Multi-Protocol Label Switching.
ND        Neighbor Discovery.
NLRI      Network Layer Reachability Information.
PoD       Point of Delivery.
RBridge   Routing Bridge.
STP       Spanning Tree Protocol.
ToR       Top of Rack switch.
UDP       User Datagram Protocol.
vLAG      Virtual Link Aggregation Group.
VLAN      Virtual Local Area Network.
VM        Virtual Machine.

VNI       VXLAN Network Identifier.
VPN       Virtual Private Network.
VRF       VPN Routing and Forwarding instance. An instance of the routing/forwarding table with a set of networks and hosts in a router.
VTEP      VXLAN Tunnel End Point.
VXLAN     Virtual Extensible Local Area Network.

Virtual Cluster Switching

Layer 2 switching networks are popular for their minimal configuration and seamless mobility, but they suffer from blocked links, higher network failover times, and inefficient bandwidth utilization due to STP, resulting in networks that do not scale well. At the same time, networks built on popular Layer 3 routing protocols address many of these concerns but are operationally intensive and not suited for Layer 2 multitenant networks. TRILL (RFC 5556) tries to address these concerns and combines the Layer 2 and Layer 3 features into a bridge system capable of network-style routing.

VCS is a TRILL-compliant Ethernet fabric formed between Extreme switches. In the data plane VCS uses TRILL framing, and in the control plane it uses proven Fibre Channel fabric protocols to form the Ethernet fabric and maintain link-state routing for the nodes. In addition to the TRILL benefits, VCS provides a host of other innovative features to deliver a next-generation data center fabric.

VCS in Brief

A VCS fabric is formed between Extreme switches, and the switches in the fabric are denoted as RBridges. Links connecting the RBridges in the fabric are called Inter-Switch Links, or ISLs. A VCS fabric connects other devices, such as servers, storage arrays, and non-VCS switches or routers, through L2 or L3 links. Based on the kind of device attached, the physical ports of an RBridge are classified as Edge ports or Fabric ports. Edge ports connect external devices to the VCS fabric, and Fabric ports connect RBridges over ISLs. The VCS fabric provides link aggregation for ISLs through ISL trunks and, at the edge ports, a multi-switch link aggregation called vLAG. More details on these are explored later in this section; a typical VCS fabric and its components are shown below.

FIGURE 1 Components of a VCS Fabric

In a typical Layer 2 network, STP is used to form a loop-free topology, and traffic gets forwarded at each node with a Layer 2 or MAC lookup. In a TRILL or VCS fabric, a loop-free topology is formed using a link-state routing protocol. The link-state routing protocol is used to exchange RBridge information in the fabric, and this information is used to efficiently forward packets between RBridges at Layer 2. The use of a link-state routing protocol is the primary reason the fabric scales better than a classic Ethernet network based on STP, and it enables a loop-free topology without blocking any paths, unlike STP. The TRILL standard recommends IS-IS, while the Extreme VCS fabric uses FSPF (Fabric Shortest Path First), a well-known link-state routing protocol from storage networking.

Switches/RBridges in a VCS fabric have two types of physical interfaces: Edge ports and Fabric ports. Fabric ports connect two switches in the same VCS fabric and forward TRILL frames. Edge ports are L2 or L3 ports that receive and transmit regular Ethernet frames. In a VCS fabric, a Classical Ethernet (CE) frame enters at the Edge port of a source RBridge and undergoes a Layer 2 hardware lookup. The Layer 2 lookup provides the information for TRILL encapsulation of the CE frame. The encapsulated CE frame is then forwarded out of the Fabric ports of the source RBridge based on the forwarding information provided by FSPF. The TRILL frame is forwarded hop by hop through the VCS fabric to the destination RBridge, where it is decapsulated and sent as a regular Ethernet frame out of an Edge port after a Layer 2 or Layer 3 hardware lookup.

While this briefly explains VCS operation and the components of the fabric, the next few sections discuss VCS fabric formation using FLDP (Fabric Link Discovery Protocol), RBridge routing using FSPF, the TRILL frame, and other VCS innovations in detail.

VCS Fabric

The very first step after wiring up a network is to configure the switches to form a fabric. The biggest strength of Layer 2 networks is their simple fabric formation using switch ports, with STP used to form a loop-free topology. VCS brings this same simplicity to fabric formation, but without STP's drawbacks of blocked links and lack of multipathing. In VCS, fabric formation happens automatically and is as simple as connecting two switches and a single line of configuration to identify each switch as part of a VCS fabric. This section describes how VCS fabric formation happens.

VCS-capable switches from Extreme can operate in two modes:

- VCS disabled mode, wherein the switch operates in traditional STP mode.
- VCS enabled mode, which is the mode of operation discussed in this white paper.

With VCS enabled, switches form a VCS fabric automatically across point-to-point links with minimal configuration. In a nutshell, the requirements for automatic fabric formation are:

- Each VCS fabric is identified by a VCS ID configuration. The VCS ID is the same across all switches in a fabric.
- A switch in a VCS fabric is identified as an RBridge and has a unique RBridge-ID configuration. An RBridge is a switch responsible for negotiating VCS fabric connections and forwarding both TRILL and classical Ethernet traffic.
- Ports in a switch are identified either as Fabric ports or Edge ports by the Extreme Link Discovery Protocol, alternatively called the Fabric Link Discovery Protocol. The switches discover neighbors automatically across fabric ports if the VCS IDs are the same and the RBridge IDs are unique.
- Fabric ports are responsible for TRILL forwarding and for fabric neighbor discovery. During fabric neighbor discovery across Fabric ports, RBridges in a VCS fabric form ISLs (inter-switch links) and trunks (groups of ISLs that are part of the same hardware/ASIC port group).
- Edge ports connect external devices to the VCS fabric; in essence, they provide L2 or L3 connectivity for servers or routers to the VCS fabric. Edge ports can be regular L2 switch ports or part of a multi-chassis LAG (vLAG) with a non-VCS switch (vSwitch, server, or regular STP switch).

Once fabric formation happens, FSPF builds the distributed fabric topology on each switch/RBridge.

VCS Identifiers

The VCS ID identifies the fabric as a whole, and all switches/RBridges that are part of a VCS fabric should have the same VCS ID. The RBridge ID is a unique ID configured to identify each RBridge in a VCS fabric. Apart from the VCS-ID and RBridge-ID configuration, the VCS mode configuration is needed for automatic fabric formation.

VCS Fabric Mode of Operation

Logical Chassis Mode is the flexible and popular VCS mode of operation. In this mode, all switches in the VCS fabric can be managed as if they were a single logical chassis, providing unified control of the fabric from a single principal switch/RBridge in the fabric.

Note: Fabric Cluster Mode is another VCS mode of operation, which is deprecated in the latest releases. In this mode, VCS fabric discovery and formation are automatic, but the user has to manage each switch individually. This was one of the earlier modes of operation in the VCS technology evolution and does not provide a unified configuration management capability.

Additional characteristics of Logical Chassis Mode:

- The user configures the fabric from the principal RBridge. This is a distributed configuration mode wherein the fabric configuration information is present on all nodes, providing higher availability in the fabric.
- Fabric-wide configuration management is performed from the principal RBridge, and changes are immediately updated on the other RBridges.
- Adding, rejoining, removing, or replacing switches in the VCS fabric is simplified, with the principal switch taking care of configuration management without user intervention.
- Operational simplicity is provided by a unified view of the fabric from every switch.
- The fabric can be accessed by a single virtual IP bound to the principal switch, which can also be used for fabric firmware upgrades.

Logical Chassis Mode is the recommended VCS fabric mode, and the deployment models in this document use this mode.

Once the VCS-ID, RBridge-ID, and VCS mode are known, the automatic fabric formation process begins. As part of it:

- All interfaces in the switch are brought up as Edge ports.
- The Extreme Link Discovery Protocol (ELDP) is run between physical interfaces to identify ports as Edge or Fabric.
- Inter-Switch Links and trunks are formed between VCS switches over fabric ports.
- After ISL and trunk formation, FSPF (Fabric Shortest Path First), a link-state routing protocol, is run over the ISLs to identify the shortest path to each switch in the VCS fabric.

A short bring-up sketch is shown after the ISL trunk discussion below.

Extreme Link Discovery Protocol

The Extreme Link Discovery Protocol (ELDP) attempts to discover whether an Extreme VCS Fabric-capable switch is connected to any of the edge ports. ELDP is alternatively called FLDP (Fabric Link Discovery Protocol). Ports on a VCS-capable switch first come up as Edge ports with ELDP enabled. Through ELDP PDU exchange between the ports, neighbors in the VCS fabric are discovered across the inter-switch links. Ports that discover neighbors in the same VCS cluster transition to fabric ports, while the others remain Edge ports. Based on the ELDP PDU exchange, a switch classifies a port as:

- An Edge port if the neighboring switch is not an Extreme switch.
- An Edge port if the neighbor is not running VCS mode.
- An Edge port if the VCS ID is not the same between the switches.

If the neighboring switch runs VCS and the VCS ID matches, the port transitions to a fabric port and an ISL (Inter-Switch Link) is established. ELDP is invoked only when a port comes online and is not sent periodically. Once the link type is determined, the protocol execution stops.

ISL Trunk

ISLs, or Inter-Switch Links, are formed automatically across fabric ports between two VCS-enabled switches if the VCS ID matches. ISL links forward TRILL frames in the VCS fabric and by default trunk all VLANs. When there are multiple ISL links between two switches and they are part of the same ASIC port group on both switches, these ISL links are grouped together to form an Extreme ISL trunk. An ISL trunk is comparable to a traditional LACP LAG in providing link aggregation, but Extreme ISL trunks don't run LACP; they use a proprietary protocol to maintain trunk membership. ISL trunks are formed across the same ASIC port group, and hence the maximum number of ISLs possible in a trunk group is eight. There can be multiple ISL trunks between two switches. Like ISL formation, the ISL trunk is self-forming and needs no user configuration, unlike LACP. The ISL trunk provides true per-packet load balancing across all member links, giving very high link utilization and an even distribution of traffic compared to a traditional LAG, which uses frame-header hashing to distribute traffic.
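As a minimal sketch of the bring-up described above (the VCS ID, RBridge IDs, and roles here are hypothetical, and the exact syntax can vary by Network OS release, so verify it against the command reference), each switch needs only its VCS ID, a unique RBridge ID, and the logical chassis mode; ISLs, trunks, and the FSPF topology then form on their own:

    ! Run once on each switch from the NOS CLI; the switch typically reloads into the fabric.
    ! Same VCS ID (10) on every switch, unique RBridge ID per switch.
    switch# vcs vcsid 10 rbridge-id 201 logical-chassis enable    ! spine 1
    switch# vcs vcsid 10 rbridge-id 202 logical-chassis enable    ! spine 2
    switch# vcs vcsid 10 rbridge-id 101 logical-chassis enable    ! leaf 1, and so on

Once the switches are cabled and reloaded, fabric formation can be checked with show commands such as the following (command names as recalled; availability and output vary by release):

    show vcs                      ! VCS ID, fabric mode, and principal RBridge
    show fabric all               ! RBridges that have joined the fabric
    show fabric isl               ! ISLs formed on this RBridge
    show fabric trunk             ! ISL trunk groups and member links
    show fabric route topology    ! FSPF routes to every RBridge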

Principal RBridge

After ISL formation, the VCS fabric elects a principal RBridge. The RBridge with the lowest configured principal priority, or with the lowest World Wide Name (WWN) in the fabric, is elected as the principal RBridge. A WWN is a unique identifier used in storage technologies; VCS-capable switches are shipped with a factory-programmed WWN. The principal RBridge is alternatively called the coordinator switch or fabric coordinator and performs the following functions in the fabric:

- Decides whether a newly joining RBridge has a unique RBridge-ID; in case of conflict, the new RBridge is segregated from the VCS fabric until the configuration is fixed.
- In logical chassis mode, all fabric-wide configuration is done from the principal switch.
- In addition, for the AMPP feature discussed later, the principal RBridge talks to vCenter to distribute port-profiles.

Fabric Shortest Path First (FSPF)

FSPF is the routing protocol used in VCS to create the fabric route topology for TRILL forwarding. FSPF is a well-known link-state routing protocol used in FC storage area network (SAN) fabrics. Since VCS shipped before the IETF standardized TRILL, the VCS fabric uses FSPF instead of IS-IS as specified for TRILL. The use of a link-state routing protocol in VCS enables a highly scalable fabric, avoids blocked links as in STP-based Layer 2 networks, and enables equal-cost multipath (ECMP) to a destination. After ISL and ISL trunk formation during VCS fabric bring-up, FSPF is run to create the fabric topology. FSPF is a link-state routing protocol like OSPF or IS-IS and has the following salient features:

- Neighborship is formed and maintained by FSPF hello packets. One neighborship is maintained per ISL.
- The cost to reach a given RBridge is the cumulative cost of all the links to reach that RBridge.
- Supports only point-to-point networks.
- Can have only one area.
- No stub areas and no summarization.

Edge Ports

Edge ports attach switches, servers, or routers to the VCS fabric over standard IEEE 802.1Q Layer 2 and Layer 3 ports. Edge ports support industry-standard Link Aggregation Groups (LAGs) via the Link Aggregation Control Protocol (LACP). Multi-Chassis Trunking (MCT) is the industry-accepted solution to provide redundancy and avoid links blocked by spanning tree when connecting servers to multiple upstream switches. LAG-based MCT is a special case of the LAG covered in IEEE 802.3ad, in which one end of a LAG can terminate on two separate switches.

Virtual LAG (vLAG)

Virtual LAG (vLAG) is an MCT solution included in VCS Fabric technology that extends the concept of a LAG to include edge ports on multiple VCS Fabric switches.

vLAGs can be formed in three different scenarios, the prerequisite being that the LAG control group must be the same on all the RBridges in the VCS fabric:

- A server multi-homed to multiple RBridges in a VCS fabric.
- A classical Ethernet switch multi-homed to RBridges in a VCS fabric.
- When connecting two VCS fabrics; since a VCS fabric behaves like a single switching domain, vLAGs are formed across the LAGs.

FIGURE 2 vLAG Connectivity Options

Using a vLAG, a single server, classical Ethernet switch, or another VCS fabric can connect to multiple RBridges in a VCS fabric, and the fabric acts as a single node toward that server, CE switch, or fabric. When a LAG spans multiple switches in a VCS fabric, it is automatically detected and becomes a vLAG. The port-channel number needs to be the same across the switches for the vLAG to form. LACP is not required but is recommended; when LACP is used, the LACP PDUs use a virtual RBridge MAC so the fabric appears as a single node to the other end. vLAG is comparable to Cisco's vPC technology but does not need a peer link or keepalive mechanism for active-active forwarding. A vLAG can span up to 8 VCS nodes and 64 links, providing higher node and link redundancy. Only ports of the same speed are aggregated. Edge ports in a vLAG support both classic Ethernet and DCB extensions; therefore, any edge port can forward IP, IP storage, and FCoE traffic over a vLAG. A configuration sketch follows the vLAG operation and load-balancing details below.

vLAG Operation

LACP System ID: For a vLAG with LACP to be active across multiple RBridges, a common LACP system ID is used in the LACP PDUs. Each VCS fabric has a common VCS bridge MAC address starting with 01:E0:52:, with the VCS ID appended to make the MAC unique. This VCS bridge MAC is used as the LACP system ID in the PDUs.

Virtual RBridge-ID: When transmitting packets received over a vLAG, the source RBridge-ID in the TRILL frames is set to a virtual RBridge-ID. The virtual RBridge-ID is constructed by adding the vLAG's port-channel ID to 0x400, so a vLAG for port-channel 101 has a virtual RBridge-ID of 0x465. By using the virtual RBridge-ID in the TRILL frames, member RBridges of a vLAG can efficiently perform source-port checks for loop detection and MAC moves, based on the port-channel ID embedded in the virtual RBridge-ID.

Primary Link: Another vital component of vLAG operation is determining the primary link. The primary link is the only vLAG link through which BUM traffic is transmitted: BUM traffic is transmitted out of an edge port only if it is a normal non-vLAG port or the primary link of a vLAG. Without this check, BUM traffic, being multi-destination, would result in duplicate packets at the receiver. The actual state machine for determining the primary link is Extreme specific. This protocol also ensures that only one of the links on an RBridge becomes the BUM transmitter, and it is responsible for electing a new primary link on link failure, RBridge failure, or other failure events.

Master RBridge: The node that owns the primary link is elected as the Master RBridge. The Master RBridge is also responsible for MAC address age-out. MAC addresses learned in the VCS fabric are distributed through eNS (Ethernet Name Service).

Traffic Load Balancing

Traffic in a VCS fabric gets load-balanced at multiple levels in the network: ISL trunks provide per-packet load balancing, at Layer 2 TRILL-based load balancing applies, and at Layer 3 regular IP route-based load balancing happens over IP ECMP paths.

ISL Trunk: When packets go over an Extreme ISL trunk, proprietary protocols ensure that no hashing is used and an even, per-packet distribution happens across all the links in the ISL trunk. This provides very high link utilization and an even distribution of traffic across the ISL trunk compared to a traditional LAG, which uses frame-header hashing to distribute traffic.

Layer 2: VCS builds a Layer 2 routed topology using the link-state routing protocol FSPF, and with load-balancing support in FSPF, load sharing for Layer 2 traffic is achieved in the VCS fabric. When doing TRILL forwarding, if a neighbor switch is reachable via several interfaces with different bandwidths, all of them are treated as equal-cost paths. Any interface with a bandwidth equal to or greater than 10 Gbps has a predetermined link cost of 500; thus, a 10 Gbps interface has the same link cost as a 40 Gbps interface. Simplicity is a key value of VCS Fabric technology, so an implementation was chosen that does not consider the bandwidth of the interface when selecting equal-cost paths. The distributed control plane is, however, aware of the bandwidth of each interface (ISL or Extreme ISL trunk). Given an ECMP route to a destination RBridge, it can load-balance the traffic across the next-hop ECMP interfaces according to the individual interface bandwidths, avoiding overloading lower-bandwidth interfaces.
So, effectively, equal-cost paths for TRILL forwarding between RBridges are determined based on hop count, and traffic gets distributed between the paths based on link bandwidths. This maximizes the utilization of available links in the network. In the traditional approach, a 40 Gbps interface, which has the lowest cost among all other 10 Gbps paths, is used as the only route to reach the destination; in effect, the lower-speed 10 GbE interfaces are not utilized, resulting in lower overall bandwidth. With VCS Fabric technology, lower-bandwidth interfaces can be used to improve network utilization and efficiency. While traffic gets proportionately distributed among ECMP paths, VCS forwarding uses the regular hash algorithms to select a link for a flow.

Layer 3: Layer 3 traffic in a VCS fabric is routed over VE interfaces as on a regular router, and traditional IP-based ECMP hashing applies to this traffic. The regular BGP and IGP routing protocols' IP ECMP techniques are available.
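As an illustrative sketch of the vLAG described above (the interface names, port-channel number, and VLAN are hypothetical, and the syntax is approximate; the deployment chapters later in this document contain the validated configuration), the same port-channel number is configured on server-facing ports of two RBridges, and the fabric automatically treats the spanning LAG as one vLAG:

    ! RBridge 103, member port toward the dual-homed server
    interface TenGigabitEthernet 103/0/10
     channel-group 101 mode active type standard
     no shutdown
    !
    ! RBridge 104, member port toward the same server
    interface TenGigabitEthernet 104/0/10
     channel-group 101 mode active type standard
     no shutdown
    !
    ! Port-channel 101 (same number on both RBridges) is detected as vLAG 101
    interface Port-channel 101
     switchport
     switchport mode trunk
     switchport trunk allowed vlan add 10
     no shutdown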

Data Frames Inside a VCS Fabric

A switch/RBridge in a VCS fabric receives a classical Ethernet frame on an Edge port and encapsulates it into a TRILL frame based on the destination MAC lookup. The encapsulated TRILL frame is forwarded out of a Fabric port into the VCS fabric. At each RBridge in the VCS fabric, the traffic undergoes hop-by-hop forwarding based on the egress RBridge ID and destination MAC. At the destination RBridge, the TRILL headers are stripped and the frame undergoes a traditional Layer 2 or Layer 3 lookup before being forwarded out of an Edge port.

FIGURE 3 TRILL Data Format

CE Frame

The CE frame in its entirety, without modification, becomes the payload of the TRILL frame. An exception is the Virtual Fabric or fine-grained TRILL scenario, where the inner 802.1Q tag in the CE frame can be modified; this is discussed in later sections.

Outer Ethernet Header

- Outer Destination MAC Address: Specifies the next-hop destination RBridge.
- Outer Source MAC Address: Specifies the transmitting RBridge.
- Outer 802.1Q VLAN Tag: Depends on the core port configuration.
- EtherType: 0x22F3, assigned by the IEEE for TRILL.

TRILL Header Fields

- Version: If not recognized, the packet is silently dropped at the ingress of the fabric.
- Reserved: Not currently in use; reserved for future expansion. Must be set to 0 at the moment.

- Multi-Destination: Set to 1 for BUM frames (Broadcast, Unknown-Unicast, Multicast). The frame is to be delivered to multiple destinations via a distribution tree; the egress RBridge nickname field then specifies the distribution tree to use.
- Options Length: Currently not used.
- Hop Count: A TTL to avoid infinite packet loops. The TTL is decremented at every hop; if TTL=0, an RBridge drops the frame. For unicast frames, the ingress RBridge should set the TTL to a value in excess of the number of hops it expects to use to reach the egress RBridge. For multi-destination frames, the ingress RBridge should set the TTL to at least the number of hops to reach the most distant RBridge. Multi-destination frames are most susceptible to loops and hence have strict RPF checks.
- Egress RB Nickname (RB ID): If the multi-destination bit is set to 0, the egress RB nickname is the egress RBridge ID. If the multi-destination bit is set to 1, the egress RB nickname is the RBridge ID of the root of the distribution tree.
- Ingress RB Nickname (RB ID): Set to the nickname/ID of the ingress RBridge of the fabric.
- Options: Present if the options length is non-zero.

VCS Traffic Forwarding

On the physical wire, the frame format is TRILL when forwarding between VCS nodes. After fabric formation and identification of the edge ports, traffic forwarding in the VCS fabric involves MAC learning, handling of unicast and multi-destination traffic, first-hop redundancy services, and ECMP handling.

MAC Learning

An Extreme VCS node performs hardware source-MAC learning at the Edge ports, similar to any standard IEEE 802.1Q bridge. An edge RBridge learns a MAC, its VLAN, and the interface on which the MAC was seen. This learned MAC information is distributed in the VCS fabric, so each node in the fabric knows to which RBridge to forward a frame for a particular MAC. The frame is forwarded into the fabric on a fabric port with TRILL encapsulation, based on whether the destination address in the frame is known (unicast) or unknown (multi-destination).

eNS

The VCS distributed control plane synchronizes aging and learning states across all fabric switches via the Ethernet Name Service (eNS), which is a MAC distribution service. By distributing learned MAC information, eNS avoids flooding in the fabric. eNS performs MAC synchronization across MCT/vLAG pairs and distributes multicast information learned at the edge ports through IGMP snooping. eNS is also responsible for MAC aging in the VCS fabric: when a MAC ages out on the RBridge where it was learned, eNS takes care of aging out the MAC from the other RBridges in the fabric.

Layer 2 Unicast Traffic Forwarding

The source MAC learned on the edge RBridge is distributed by eNS to every node in the fabric. When a remote RBridge needs to forward traffic to this MAC, it looks up its Layer 2 forwarding table to determine which RBridge to send the frame to, and based on this information TRILL forwarding happens to the destination RBridge.

The diagram below shows Layer 2 forwarding within a VLAN between two hosts across the VCS fabric. Traffic flows between two hosts in VLAN 101, so this is purely Layer 2 forwarding within a VLAN. It is assumed that ARP is resolved and MAC learning has already happened for the hosts. On RBridge 103, the traffic received on the edge port undergoes a MAC lookup. The MAC points to a remote RBridge port in the VCS fabric, so the traffic undergoes TRILL encapsulation. The TRILL-encapsulated packet follows the link-state routed fabric topology created by FSPF.

FIGURE 4 Layer 2 Unicast Forwarding

Layer 3 Unicast Forwarding

Extreme Network OS has Layer 3 routing enabled by default. VCS fabrics support VRF-lite and routing protocols such as OSPF and BGP on VE and physical interfaces. Routing and IP configurations in VCS fabrics are done in RBridge configuration mode: to enable IP configurations such as VRF-lite, IP addresses, and routing on an RBridge, the user enters the RBridge mode for that node and configures it there. In logical chassis mode, all of this configuration is done from the principal RBridge.

VE Interfaces

A VE interface provides routing functionality for a VLAN and is similar to a switched VLAN interface (SVI) on Cisco devices. A VE interface at Layer 3 maps to the corresponding VLAN at Layer 2; for example, VE interface 500 maps to VLAN 500. A VE interface is a Layer 3 interface on which an IP address is configured; it can be enabled with FHRPs and routing protocols to provide the L2/L3 boundary and other L3 router functionality.
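A minimal sketch of this RBridge-scoped Layer 3 configuration, entered from the principal switch in logical chassis mode (the VLAN, RBridge ID, and address are hypothetical, and the syntax is approximate; the deployment chapters later in this document show the validated configuration):

    ! Create the Layer 2 VLAN fabric-wide
    interface Vlan 101
    !
    ! Configure the corresponding VE interface on a specific RBridge
    rbridge-id 201
     interface Ve 101
      ip address 10.1.101.2/24
      no shutdown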

First Hop Redundancy Protocols

First-hop redundancy protocols protect the default gateway of a subnet by allowing multiple routers to respond on behalf of the default gateway IP. Extreme supports the following FHRPs:

- VRRP
- VRRP-E
- FVG

VRRP is standards based, while VRRP-E and FVG are supported only across Extreme devices.

VRRP Extended (VRRP-E)

IETF standards-based VRRP eliminates a single point of failure in a static-route environment. It is an election protocol that dynamically assigns the responsibilities of a virtual router to one of the VRRP-enabled routers on the LAN. VRRP thus provides a highly available default path without requiring the configuration of dynamic routing or router discovery protocols on every end host.

VRRP-E (VRRP Extended) is the Extreme proprietary extension to the standard VRRP protocol; it does not interoperate with VRRP. VRRP-E configuration and operation are very similar to VRRP: a Master and Standby election happens, and the Master is responsible for ARP replies and propagation of control packets. From a forwarding perspective, VRRP-E provides active-active forwarding through the "short path forwarding" feature, compared to VRRP where the Master is the only node responsible for routing. VRRP-E also supports up to 8 active/active Layer 3 gateways, and in conjunction with short-path forwarding (VRRP-E SPF) it yields higher redundancy, scalability, and better bandwidth utilization in the fabric. VRRP-E is supported in both the VRRPv2 and v3 protocol specifications and supports IPv4 and IPv6. VRRP-E can be configured only on VE interfaces.
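A hedged sketch of VRRP-E with short-path forwarding on a spine VE interface (the group number, addresses, and priority are hypothetical, and the exact keywords should be verified against the NOS configuration guide and the FHRP examples later in this document):

    rbridge-id 201
     protocol vrrp-extended
     interface Ve 101
      ip address 10.1.101.2/24
      vrrp-extended-group 1
       virtual-ip 10.1.101.1
       priority 110
       short-path-forwarding
      no shutdown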

FIGURE 5 Layer 3 Forwarding with VRRP-E

Figure 5 shows the efficient forwarding behavior of VRRP-E in a Clos design. The spines are configured with a VE interface and a VRRP-E virtual IP for the subnet. Each virtual IP has an associated virtual MAC. The virtual MACs are distributed to every RBridge in the fabric by VCS and are installed with special MAC programming to load-balance traffic to all VRRP-E nodes. As shown, all VRRP-E nodes configured for the virtual IP route the traffic for the subnet. VRRP-E does active/active forwarding across the Master and Standby routers, while in standard VRRP only the Master node routes the traffic. VRRP-E thus provides efficient load balancing and bandwidth utilization in the fabric.

Fabric Virtual Gateway

Fabric Virtual Gateway (FVG) is an Extreme proprietary implementation of a router redundancy protocol and works only in a VCS fabric. FVG is a highly scalable FHRP solution compared to VRRP or VRRP-E and does not have any control-plane PDU exchange between the nodes. Instead, it leverages the VCS Fabric services to exchange Fabric Virtual Gateway group information among the participating nodes. The Fabric Virtual Gateway feature allows multiple RBridges in a VCS fabric to form a group of gateway routers and share the same gateway IP address for a given subnet, like VRRP or VRRP-E. FVG is configured under the global VE interface mode. Configuration primarily involves configuring a gateway IP under the VE interface and specifying the participating RBridges.

A gateway MAC is allocated by default per VLAN/VE to reach the gateway IP. VCS services take care of distributing the gateway MAC information in the fabric. Nodes not participating in FVG also install the gateway MAC, and the special gateway MAC programming allows load balancing of traffic to the FVG members. Unlike VRRP, there is no need for an individual IP address under each RBridge's VE interface. FVG does not have the concept of Master and Standby nodes, but one of the nodes is elected as the ARP Responder, comparable to the Master node in VRRP-E; the ARP Responder handles ARP requests for the gateway IP. All FVG member RBridges do active/active forwarding like VRRP-E. Short-path forwarding is enabled by default; when SPF is disabled, the ARP Responder is responsible for traffic forwarding. The forwarding behavior for FVG is similar to that shown in the VRRP-E diagram.

Preventing Layer 2 Flooding for Router MAC Addresses

Every router MAC address associated with a Layer 3 interface is synced throughout the VCS fabric. The reason for this syncing is to ensure that every router MAC address is treated as a known MAC address within the fabric. This ensures that when any packet enters the VCS fabric destined toward a router, it is never flooded and is always unicast to its correct destination. Similarly, when routing is disabled, the router sends a message to withdraw that particular router MAC address from the VCS fabric. This behavior prevents the periodic issue of Layer 2 flooding caused by the router MAC address being aged out, and the administrator no longer has to ensure that the Layer 2 aging time is greater than the ARP aging time interval.

Layer 3 Inter-VLAN Packet Forwarding

Building on the VCS fabric constructs discussed so far, this section walks through an inter-subnet routing scenario. Figure 6 shows Layer 3 forwarding in the VCS fabric. Traffic is routed between subnets, from a host in VLAN 201 to a host in VLAN 101. VE interfaces 101 and 201 are configured with VRRP-E gateway IPs on the spines. It is assumed that the hosts have not learned the remote MACs initially.

When the host in VLAN 201 wants to talk to the host in VLAN 101, it ARPs for its gateway, obtains the gateway MAC, and forwards the traffic in VLAN 201. When the VLAN 201 traffic is received on a spine, and since the spine has VEs for both 101 and 201, the spine ARPs for the destination host. The spine originates this BUM traffic, which is received on all nodes, but only the destination host replies, and thus the spine learns the destination MAC. (BUM traffic forwarding is explained in the next section, "Multi-Destination Traffic.") Once the spine learns the destination MAC, VLAN 201 traffic received on the spine is routed into VLAN 101. The traffic between leaf and spine nodes is TRILL forwarded, while at the edge it is classical Ethernet.

The diagram explains this behavior: RBridge 103 receives a packet whose destination MAC is the VRRP-E virtual MAC of VLAN 201. This CE frame is TRILL encapsulated on RBridge 103 and forwarded based on the Layer 2 table information for VLAN 201. The packet is TRILL forwarded to one of the spines, which in the diagram is RBridge 201. At RBridge 201, the traffic has to be routed from VLAN 201 to the VLAN 101 subnet, so ARP is resolved for the destination IP at the spine.

The traffic is then routed on the spine, TRILL encapsulated for VLAN 101, and forwarded to RBridge 101. At RBridge 101, the frame is TRILL decapsulated and forwarded out as a regular CE frame to the destination host.

FIGURE 6 Layer 3 Inter-Subnet Forwarding on VCS Fabric

Multi-Destination Traffic

Broadcast, unknown-unicast, and multicast (BUM) traffic is multi-destination traffic that is flooded to all nodes in the fabric. To avoid duplicate traffic and loops for BUM traffic, a multi-destination tree is formed, rooted at the multicast root RBridge. The multi-destination tree includes all RBridges in the VCS. VCS uses FSPF to calculate a loop-free multicast tree rooted at the multicast root RBridge. The multicast tree root RBridge is elected based on the higher configured multicast root priority of the RBridges, or otherwise based on RBridge-ID. An alternate multicast root is also preselected to account for primary root failure.

Figure 7 shows BUM traffic flow over the multi-destination tree rooted at the multicast root RBridge. Traffic from the source is received on the edge port of RBridge 102. The traffic gets forwarded to the multicast root RBridge 201 on one of the links; when multiple links exist, only one of them is selected to forward toward the root. RBridges 202, 203, and 204 do not have a direct connection from the root, so FSPF enables links on RBridge 101 to forward the multicast traffic to RBridges 202, 203, and 204. When BUM traffic is received on the vLAG pairs 103 and 104 or 105 and 106, only the primary link of the vLAG forwards the BUM traffic.

FIGURE 7 Multicast Tree for BUM Traffic

IGMP Snooping

Layer 2 networks implement IGMP snooping to avoid multicast flooding. In a VCS fabric, IGMP snooping happens at the edge port, so the RBridge knows about the interested multicast receivers behind its edge ports. When multicast traffic is received from the VCS fabric on fabric ports, the RBridge prunes traffic out of the edge ports based on the snooping database. However, when multicast traffic is received on an edge port, it gets flooded to all other RBridges in the VCS fabric. If IGMP snooping is disabled, multicast traffic from the VCS fabric is flooded on the edge ports too. eNS is used to distribute the IGMP snooping database to all RBridges. This helps in the vLAG scenario, where an IGMP snooping entry could be learned on any of the vLAG member RBridges but multicast traffic flows out of only the primary link.

VCS Services

This section covers some of the VCS services: AMPP, VM-aware network automation, Virtual Fabric, Virtual Fabric extension, and auto-fabric.

Automatic Migration of Port Profiles

In a virtualized server environment such as VMware, virtual machines (VMs) are provided switching connectivity through a Virtual Ethernet Bridge (VEB), which in the VMware context is called a vSwitch. A VEB provides Layer 2 switch functionality and inter-VM communication, albeit in software.

A VEB port has a set of functions defined through a port-profile, such as:

- The types of frames that are allowed on a port (all frames, only VLAN-tagged frames, or untagged frames).
- The VLAN identifiers that are allowed to be used on egress.
- Rate-limiting attributes (such as port-based or access-control-based rate limiting).

FIGURE 8 VEB on Virtualized Server Environment

Through the port-profile feature, Extreme switches emulate the VEB port-profile from the hypervisor, and the Extreme switches have much more advanced policy controls that can be applied through port-profiles. A port-profile defines VLAN, QoS, and security configuration that can be applied to multiple physical ports on the switch. Port-profiles on VDX switches provide:

- A port-profile for VLAN and quality of service (QoS) policy profiles.
- A port-profile for an FCoE policy profile.
- A port-profile with FCoE, VLAN, and QoS policy profiles.

In addition, any of the above combinations can be mixed with a security policy profile. A port-profile does not contain some of the interface configuration attributes, including LLDP, SPAN, and LAG; these are associated only with the physical interface.

When workloads or virtual machines move, the hypervisor ensures that the associated VEB port-profile moves with them. On Extreme switches, this port-profile move based on a VM move is achieved using the AMPP (Automatic Migration of Port Profiles) feature. In short, AMPP provides fabric-wide configuration of Ethernet policies and enables network-level features to support VM mobility.
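The sketch below shows the general shape of an AMPP port-profile (the profile name, VLAN, MAC address, and interface are hypothetical, and the syntax is approximate; Example 5 later in this document contains the validated configuration): a profile is defined with a VLAN sub-profile, activated fabric-wide, associated with a VM MAC, and the server-facing edge port is placed in port-profile mode.

    ! Define the profile with a VLAN sub-profile
    port-profile vm-web
     vlan-profile
      switchport
      switchport mode trunk
      switchport trunk allowed vlan add 10
    !
    ! Activate it fabric-wide and associate the VM MAC with it
    port-profile vm-web activate
    port-profile vm-web static 0050.56aa.bb01
    !
    ! Put the server-facing edge port into port-profile mode
    interface TenGigabitEthernet 103/0/20
     port-profile-port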

In brief, AMPP configuration and operation involve the following:

- Create a port-profile and define the VLAN, QoS, and security configuration.
- Activate the port-profile.
- Associate the VM MACs with the port-profile.
- Enable port-profile mode on the VM-connected ports.

After the profiles are configured, when a VM MAC is detected on a port-profile-enabled port, the corresponding port-profile is downloaded onto that port. MAC detection is basically source-MAC learning on the port. When a VM move or MAC move happens, the port-profile migrates. Since VCS in logical chassis mode is a distributed fabric, AMPP configuration is done on the principal switch, and the AMPP profile gets activated on all RBridges in the fabric.

AMPP can operate in two ways:

- The manual configuration method described above. This is hypervisor agnostic, and AMPP activates the profiles on the corresponding interfaces based on MAC detection.
- AMPP integrated with vCenter, that is, VM-aware network automation.

VM-Aware Network Automation

With the VM-aware network automation feature, a VDX switch can dynamically discover virtual assets and provision the physical ports based on this discovery. Configuration and operation of this feature involve the following:

- The switch is preconfigured with the relevant vCenter that exists in its environment.
- The discovery process entails making appropriate queries to the vCenter.
- After discovery, the switch (Extreme Network OS) enters the port-profile creation phase. It creates port-profiles on the switch based on the discovered standard or distributed vSwitch port groups. This operation creates port-profiles and the associated VLANs in the running configuration of the switch.
- MAC address associations for each port-profile are also configured based on the vCenter information.
- Ports, LAGs, and vLAGs are put into port-profile mode automatically based on the ESX connectivity.
- When the virtual machine MAC is detected behind an edge port, the corresponding port-profile is activated.

Virtual Fabric

Virtual Fabric is Extreme's implementation of the TRILL fine-grained label standard in the VCS fabric. Traditional TRILL supports 802.1Q VLAN IDs, which in today's virtualized and highly scalable data centers pose operational issues, such as scaling beyond 4K VLANs, providing L2 adjacency between different server-side VLANs, or reusing the same server-side VLAN for different subnets. The Virtual Fabric feature solves these problems by mapping the customer-tag or server-side VLANs, which carry a 12-bit VLAN value, to a 24-bit VLAN value. The 24-bit VLAN space provided by Virtual Fabric theoretically allows 16 million broadcast domains, although the current Virtual Fabric implementation allows only 8K in total. In essence, Virtual Fabric can provide per-port VLAN-scope behavior: with VF classification of a customer tag to a VF VLAN, the port becomes part of the VF VLAN broadcast domain or subnet in the VCS fabric. From a forwarding perspective, the VF or TRILL fine-grained label is achieved by inserting the 24-bit VLAN into the inner payload.

Extreme provides two types of Virtual Fabrics:

- Service Virtual Fabric (SVF)

- Transport Virtual Fabric (TVF)

Service Virtual Fabric

SVF provides a one-to-one mapping of a customer tag to a VF VLAN (VFID). The VE associated with the VF VLAN can be configured with L2/L3 features; the VF VLAN supports all L2/L3 features like a regular 802.1Q VLAN. Through SVF, multitenancy on the VCS fabric can be extended beyond the 4K VLAN space.

Transport Virtual Fabric

TVF is a VLAN aggregation feature, wherein multiple customer VLANs are mapped to a single VF VLAN. The VF VLAN provides L2 functionality and cannot be enabled for L3; VEs cannot be defined for TVF VLANs.

SVF functionality in service-provider and virtualized environments is depicted below. When a cloud service provider provisions a virtual DC by replicating server rack PODs across server ports, different tenant domains exist with overlapping 802.1Q VLANs at the server ports. Achieving isolation of tenant domains in this scenario is not possible with regular 802.1Q VLANs, and Virtual Fabric technology can easily solve this issue. The tenant domain isolation is achieved by mapping the 802.1Q VLAN at each VDX switch interface to a different VF VLAN. This capability allows the VCS fabric to support more than the 4K VLANs permitted by the conventional 802.1Q address space.

FIGURE 9 SVF in a Cloud Service Provider Environment

Another use case for SVF is in virtualized environments, and the diagram below illustrates the Service VF deployment model for multitenancy and overlapping VLANs in such an environment. The data center has three PODs, and all three PODs (ESXi 1-3) have an identical pre-installed configuration. Each POD supports two tenants. Tenant 1 and Tenant 3 have two applications running on VLAN 10 and VLAN 20; Tenant 2 and Tenant 4 have one application each running on VLAN 30. Tenant 1 and Tenant 2 currently run on ESXi1 and ESXi2, while the Tenant 3 and Tenant 4 applications run on ESXi3. With Service VF, the same VLANs (VLAN 10 and 20) can be used for Tenant 1 and Tenant 3, yet their traffic is logically isolated into separate Service VFs (5010 and 5020 for Tenant 1, and 6010 and 6020 for Tenant 3). Similarly, the same VLAN for Tenant 2 and Tenant 4 is isolated into separate Service VFs (5030 for Tenant 2 and 6030 for Tenant 4).

FIGURE 10 SVF in a Highly Virtualized Environment Virtual Fabric Extension Virtual-fabric extension provides connectivity for Layer 2 domains across multiple VCS fabrics. VF extension achieves this by building a VxLAN-based overlay network to connect these disjoint Layer 2 domains. VxLAN is a Layer 2 overlay scheme over a Layer 3 network; in other words, VxLAN is a MAC-in-IP encapsulation technology that stretches Layer 2 domains over a Layer 3 infrastructure. Extension of Layer 2 domains (VLANs) is achieved by encapsulating the Ethernet frame within a VxLAN UDP packet, with the VLAN of the Ethernet frame mapped to the VNI in the VxLAN header. The VxLAN Network Identifier, or VNI, identifies a VxLAN segment and is comparable to a VLAN in the Ethernet world. Network elements on a VxLAN segment can talk only to each other, just as in the VLAN/Ethernet world. The VNI is a 24-bit value, and thus VxLAN extends the total Layer 2 domain count to 16 million, compared to the 12-bit space, or 4K Layer 2 networks, provided by VLANs. A typical VxLAN packet is shown below.

FIGURE 11 VxLAN Packet Format To extend Layer 2 domains using the VF-extension feature, point-to-point VxLAN tunnels are set up between the VCS fabrics, and VLANs are then extended over these VxLAN tunnels by mapping the VLANs to VNIs in the VxLAN header. In the VF-extension case, the VxLAN tunnel is set up through configuration instead of automatic VxLAN tunnel setup methods like "flood and learn" or EVPN. Both 802.1Q VLANs and virtual-fabric VLANs can be extended using the VF-extension feature. A simple VF-extension packet flow is shown below; details on the configuration are provided in the illustration section. In the diagram below, the Virtual-Fabric Gateway configuration is done on the spine. On the spine, as part of the Virtual-Fabric Gateway configuration, the VxLAN tunnel endpoint (VTEP) is defined. The VTEP is responsible for VxLAN encapsulation and decapsulation and requires a unique IP per VCS fabric. On the VF gateway, one would have to manually configure the remote data-center site's VTEP IP. Based on this, VxLAN tunnels are set up between the data centers if there is IP reachability for the remote VTEP IP. The VxLAN tunnel does not have any control-plane messaging and depends only on the IP reachability of the VTEP IPs. Along with the VxLAN tunnel configuration, the user would also indicate which VLANs are to be extended and the VLAN-to-VNI mapping. The packet flow sequence below is for Layer 2 forwarding from a server in Datacenter-1 to Datacenter-2. DC-1 and DC-2 are in two different VCS fabrics, with the fabrics having Layer 3 data-center connectivity through the edge router on each data center. 1. An Ethernet frame from the server on DC-1 will be received on a VLAN at the ToR/RBridge. MAC lookup at the ToR will indicate that the frame has to be TRILL-forwarded to the spine. 2. The Ethernet frame gets encapsulated into TRILL and TRILL-forwarded to the spine of DC-1. 3. At the spines of DC-1, the VTEP is configured, and hence on MAC lookup the TRILL frame will be decapped and the Ethernet frame from the server will be VxLAN-encapped. VxLAN encapsulation will result in an IP packet with the source IP of the packet being the VTEP IP of DC-1 and the destination IP being the VTEP IP of DC-2. 4. The packet will traverse from the spine to the edge router as an IP packet, and from the edge router on DC-1, the VxLAN IP packet will undergo regular IP forwarding until it reaches the destination IP, which is the VTEP end on DC-2. 5. The VTEP end on DC-2 will be the spine RBridges on that fabric. At the spine, the VxLAN packet will be decapped and the Ethernet frame will undergo Layer 2 lookup. 6. The Layer 2 lookup will show that the packet has to be TRILL-encapped and forwarded to one of the ToRs/RBridges on the DC-2 VCS fabric.

7. The TRILL frame will be forwarded to the destination ToR, where it will be decapped and the Ethernet frame will be forwarded to the server on Datacenter-2. FIGURE 12 Packet Forwarding over a VxLAN-Based DCI Implemented with VF-Extension Auto-fabric The auto-fabric feature allows a true plug-and-play model for the VDX platform in a VCS fabric. When a new switch comes up, it has the "bare-metal" flag enabled, meaning it is not part of any VCS and has the default "VCS ID 1" and "rbridge-id 1". If the switch is connected to a VCS fabric and the VCS fabric is pre-provisioned with a mapping of the new switch's WWN to an RBridge ID, the new switch will reboot and automatically add itself to the VCS fabric with the pre-provisioned RBridge ID. The working of this feature is shown in the illustration section. VCS Deployment Models This section provides various deployment models of a VCS fabric. The network architectures are based on the Clos or leaf-spine model, since it provides predictable latency and better load distribution in data centers, and VCS fabrics are inherently suited for a Clos design. Primarily, the VCS deployments can be categorized based on the scale requirement of the network into: 3-stage Clos or single-POD VCS fabric 5-stage Clos or multi-VCS fabric

The decision to go with a single-POD or a multi-VCS fabric would primarily be based on the scale requirements of the fabric. Single-POD VCS Fabric A typical data-center site will have a server VCS fabric and a DC edge services block, as shown below. FIGURE 13 Single-POD VCS Fabric The server VCS fabric follows a 2-tier leaf-spine architecture with spine and leaf tiers interconnecting the server racks. The server VCS fabric provides a Layer 2 interconnect for the server racks. The VCS fabric, as described earlier, is TRILL compliant, providing network-style routing in a bridged environment with a non-blocking, multipath-enabled, and load-balanced interconnection between RBridges. Leaf Tier The leaf tier provides connectivity for the compute or storage racks to the VCS fabric. A pair of switches, referred to as a vlag leaf pair, acts as dual or redundant top-of-rack (ToR) switches in each rack. These dual ToRs are interconnected with two ISL links. The compute or storage resources are connected to the dual ToRs using vlag port-channels for redundancy. The ISL links between leaf pairs provide a backup path if one of the RBridges loses all connectivity to the spines. Each RBridge in a vlag pair connects to 4 spines through 40-gig links, providing 320 Gbps of bandwidth per rack. Spine Tier The spine tier provides the L2-L3 boundary for the VCS fabric. The spines also attach to the edge routers or the edge-services block to route traffic to the Internet or other data centers.

Spine and leaf RBridges are interconnected by ISLs, and traffic forwarding on ISLs uses an FSPF-based routed topology to multipath the traffic. The spine tier has FHRP protocols enabled to provide routing within the fabric or to the edge routers. VCS technology allows the FHRP GWs to be advertised to the leaf tier for efficient load balancing of server traffic to the spine. Since each spine provides active-active forwarding using the FHRP short-path forwarding feature, it is also recommended to run Layer 3 routing between the spines. This will provide a backup routing path on a spine if it were to lose all uplinks. It is recommended to interconnect the spine switches in a ring, to avoid using a leaf switch as a transit in backup routing scenarios. Edge Services Edge services provide WAN, Internet, and DCI connectivity for the data center through the edge routers. This is also where firewall, load-balancer, and VPN services would be placed. To provide redundancy, two edge routers are recommended to connect from the spines. Traffic Flow VCS ensures that there is a loop-free topology, ECMP, and efficient load balancing across the multipaths in the leaf-spine fabric. Leaf nodes act as typical Layer 2 nodes doing MAC learning and Layer 2 switching within a VLAN, and in a VCS fabric they are responsible for TRILL encapsulation/decapsulation. Layer 3 routing in this fabric is performed at the spine layer, where L2 termination of VLANs on VE interfaces happens. Leaf-to-server connections are classical Ethernet links or vlags receiving and sending CE frames, while the spine-to-leaf or spine-to-spine links forward TRILL frames over ISLs or ISL trunks. Within the same subnet/VLAN, leaf RBridges perform a Layer 2 lookup and, based on the MAC table at the leaf RBridge, encapsulate classical Ethernet frames into TRILL frames and forward them to the destination leaf pair through the spine. The spine does TRILL switching to the leaf, where the TRILL frame will be decapsulated and sent out as a CE frame from the destination leaf's vlag. For inter-VLAN traffic within the data center, the leaf sends a TRILL frame destined to the GW on the spine, and the spine routes across the subnet, rewriting the inner CE payload's destination MAC and VLAN. After routing at the spine, the packet is forwarded as a TRILL frame to the destination leaf RBridge, where the TRILL header is decapsulated and the CE frame is sent to the server. For inter-DC or Internet traffic, traffic flows from leaf to spine as TRILL, gets routed at the spine, and is sent out as a CE packet to the DC edge services, or vice-versa. Multi-VCS Fabric For a highly scalable fabric, a multi-VCS design is recommended, where multiple VCS fabric PODs are interconnected by a super-spine layer. A single-POD VCS fabric can scale only up to 40 nodes, and for a highly scalable data center, several of these PODs must be interconnected; a super-spine layer is recommended for this. Extreme provides two interconnection designs: Multi-VCS fabric using vlag Multi-VCS fabric using VxLAN Multi-VCS Fabric Using vlag In this design, VCS PODs are interconnected through a vlag to the super-spine, as shown in Figure 14.

34 VCS Deployment Models FIGURE 14 Multi-VCS Fabric Interconnected Through Layer 2 vlag This is a 5-stage Clos design with three tiers: leaf, spine, and super-spine. It's a simpler network design with Layer 2 extended from the leaf through the spine with the L2/L3 boundary for the data center at the super-spines. The leaf and spine tiers for a POD are in the same VCS fabric. And in this architecture, there would be multiple such leaf-spine VCS fabrics in a data center. And multiple such leaf-spine PODs are interconnected at the super-spine through vlags from the spine. All nodes in the super-spine tier are in the same VCS fabric. The super-spine provides Internet, WAN, and DCI connectivity through the edge router in the edge service block. Leaf Tier Leaf pairs form the edge of the network connecting compute or storage racks through vlags to the leaf-spine VCS fabric. CE frames received from the server racks are TRILL-forwarded from leaf to spine and vice-versa. Spine Tier Compared to the single-pod design, the spine tier in this design does not provide L2-L3 gateway functionality. The leaf-spine nodes provide a Clos design with 4 spines supporting up to 40 leafs or 20 racks, assuming that each rack is serviced by a pair of leafs/tor RBridges. Intra-VLAN traffic within a POD gets TRILL-bridged within the POD by the spine. When intra-vlan traffic has to go across the PODs, traffic is sent to the super-spine tier to be switched. And all routing traffic is also sent to the super-spine, which includes inter-vlan within the same POD or inter-vlan across the POD or to edge routing. The spine connects to the super-spine over vlags, so from each leaf-spine tier, one vlag is formed between the spines to the super- spine. Super-Spine Tier 34

35 VCS Deployment Models In the 5-Stage Clos with vlag, the super-spine is configured as the L2/L3 boundary for all VLANs in the data center. Super-spines will be configured with VE interfaces, and VRRP-E would be enabled for FHRP. When traffic must go across a POD or is routed across a subnet/vlan or out of a data center, traffic is forwarded to the super-spine tier. All RBridges in the super-spine tier are in the same VCS fabric. Spines are connected to the super-spine through vlags, and essentially CE frames are forwarded across vlags, and between super-spine RBridges forwarding is TRILL. Spine to super-spines are 40-gig links, and 4 super-spine RBridges are recommended for the best performance and better oversubscription ratios. We recommend that you interconnect the super-spine switches in a ring and run Layer 3 routing between the spines. This will provide a backup routing path on the super-spine if it were to lose all uplinks. The super-spines would connect to the edge services through the edge router to provide Internet, WAN, and DCI connectivity and services like firewall, load balancers, and VPN. Pros and Cons A multi-vcs fabric provides a higher scalable data-center architecture. This is a simplistic design in terms of configuration. Seamless Layer 2 extension between the PODs is present in this architecture without the need for configuring other features. Routing traffic would trombone between spine and super-spine tiers since the L3 GW is placed at the super-spine. Multi-VCS Fabric Using VxLAN A multi-vcs fabric using VxLAN interconnectivity is another 5-Stage Clos design that is possible with a VCS fabric to build a highly scalable data-center fabric. Extreme supports VxLAN-based DC interconnectivity through Virtual Fabric Extension technology. Figure 15 shows the topology for multi-vcs using VxLAN. 35

FIGURE 15 Multi-VCS Fabric Interconnected Through VxLAN over L3 Links Multi-VCS using VxLAN leverages the Virtual Fabric extension technology to interconnect multiple PODs to build a scalable data center. This 5-stage Clos fabric consists of three tiers: leaf, spine, and super-spine. Each POD consists of leaf and spine RBridges that are part of a unique VCS. Every POD is connected to the super-spine by L3 links. The Virtual Fabric extension feature provides a VxLAN tunnel between two endpoints over an L3 infrastructure. For interconnecting multiple PODs, every POD is configured with a VTEP, and static VxLAN tunnels are set up between the PODs. The L3 links that connect the PODs to the super-spine form the underlay network to interconnect the PODs. Through the VF extension feature, Layer 2 extension is provided between each individual POD. Leaf Tier The leaf tier connects servers to the leaf-spine VCS fabric through vlags. A pair of leaf/top-of-rack RBridges services each server rack. The leaf RBridges are connected to the spine over ISLs, and leaf-spine traffic forwarding happens over TRILL. The leaf tier essentially does Layer 2 forwarding of CE frames to TRILL or vice-versa. Spine Tier In the multi-VCS using VxLAN design, the spine tier acts as the L2-L3 boundary and the VxLAN VTEP endpoint for the VF extension feature. Spines will be configured with VE interfaces and FHRPs to provide L3 GW functionality for the server VLANs/subnets. The spines of every POD localize the routing for the subnets under them; thus the ARP scale and routing scale are limited to each POD. Spines have L3 connectivity to the super-spine tier, and over this, routing to the Internet/WAN and VF extension happen. For routing to the Internet/WAN, each POD will receive a default route from the super-spine.

When Layer 2 domains must be extended across PODs, the VF extension feature is used. For VF extension, static VxLAN tunnels are set up through configuration between the spines of each POD. For this, every spine is a VTEP endpoint providing VxLAN encap/decap. To support BUM traffic, one of the VTEP spines is selected as the BUM forwarder on each POD to avoid duplicate traffic. VLANs that must be extended across PODs are enabled under the VTEPs. Apart from 802.1Q VLANs, the VF extension feature also provides seamless extension for virtual fabric VLANs. The virtual fabric feature is Extreme's fine-grained TRILL implementation to extend VLAN ranges beyond the traditional 4K VLAN space, and it allows reusability of VLANs by providing a per-interface VLAN scope. With VF extension, virtual-fabric VLANs are seamlessly extended between TRILL and VxLAN, thus providing higher multitenancy. The VLAN carried in the TRILL frame is converted to a VxLAN VNI, and hence the seamless integration of both these features is achieved. The spine tier is connected to the super-spine tier over L3 links, and BGP is recommended as the routing protocol. The spine tier on each POD will receive a default route for the Internet/WAN traffic and will also exchange the directly connected physical networks and the VTEP endpoint IPs. Super-Spine Tier The super-spine tier in this architecture provides interconnectivity through L3 links, with VxLAN traffic using it as the underlay network. The super-spine tier also connects the multi-VCS fabric to the edge routers. A routing protocol must be run between the super-spine nodes to exchange L3 networks and provide connectivity. Edge services provide WAN, Internet, and DCI connectivity and services like firewall, load balancer, and VPN for the data center. Pros and Cons This design provides L2/L3 GWs per POD, and thus routed traffic does not have to cross PODs. Hence the architecture is much more scalable and provides efficient usage of link bandwidth. Broadcast storms are limited to each POD in this multi-VCS design. It provides a higher multitenant architecture with the use of the VF extension and VF features. At the same time, this design involves much more configuration compared to the multi-VCS using vlag. IP Storage Over the past few years, server virtualization has become a de facto standard, data centers are lately moving to containerized work environments, and IP storage networks have been gaining a lot more mind-share from data-center professionals. Recent market studies have shown greater adoption of IP-based network attached storage (NAS) for file-based storage and iSCSI for block-based storage. The use of IP-based NAS or iSCSI generates a new set of interesting challenges for network and storage administrators in terms of performance and SLA guarantees for storage traffic across an Ethernet network that is agnostic of loss. Traditional FC storage uses a dedicated SAN network that is purpose-built for storage, and storage traffic is the only workload that runs over the infrastructure. IP storage protocols, such as NAS and iSCSI, are often deployed over the general-purpose LAN infrastructure, sharing bandwidth with non-storage traffic. This helps to drive efficiencies by leveraging the existing IP network infrastructure to carry storage traffic, but it also creates challenges such as the inability to guarantee the stringent SLAs that mission-critical workloads require.
With VCS-based fabric technology, the simplicity and performance of FC-based SAN network can be achieved for an IP storage network. VCS provides to the Ethernet world an automated and simplified fabric bring-up, along with load-balanced multipathing of traffic and nonblocked efficient usage of fabric bandwidth. VCS fabric also supports the Data Center Bridging features and other enhancements to support the storage network. 37

38 IP Storage Over this section, we will go over the enhancements (DCB, AutoQoS, buffer management) provided by Extreme's VDX switch platform for the IP storage network and designs that can be used in data-center build-ups. Data Center Bridging Data Center Bridging (DCB) enhances Ethernet to allow consolidation of traffic types on a single link. DCB covers a set of protocols to achieve the consolidation of traffic. The Extreme VDX platform supports the following DCB protocols: Data Center Bridging Capabilities Exchange Protocol (DCBX) Priority Flow Control (PFC) Enhanced Transmission Selection (ETS) iscsi application TLV Data Center Bridging Capabilities Exchange Protocol DCBX is a discovery and capability exchange protocol that is used for conveying capabilities and configuration of PFC, ETS, iscsi priority and FCoE priority information across Ethernet links to ensure lossless forwarding for storage. DCBX leverages functionality provided by LLDP (IEEE 802.1AB) or LLDP TLV's to exchange the capability. DCBX is used primarily with FCoE and iscsi. Priority Flow Control Priority-based flow control or PFC provides a link-level flow control mechanism that can be controlled independently for each Class of Service (CoS) in the dot1q Ethernet header. PFC strives to ensure zero loss under congestion in data center bridging (DCB) networks. This is an enhancement to the Ethernet flow control mechanism. Ethernet flow control uses special PAUSE frames to pause traffic from a sender under congestion. PFC extends this concept to pausing of traffic individually for the eight 802.1Q class of service or priorities. So while Ethernet flow control pauses all traffic, PFC can pause traffic per priority. PFC achieves this by adding extra information to the PAUSE frame to identify the COS traffic. Enhanced Transmission Selection In a converged network, different traffic types affect the network bandwidth differently. The purpose of Enhanced Transmission Selection (ETS) is to allocate bandwidth based on the different priority settings of the converged traffic. When the offered load in a traffic class doesn't use its allocated bandwidth, enhanced transmission selection will allow other traffic classes to use the available bandwidth. In VDX switches the PFC and ETS are configured through cee-map policy. iscsi Application TLV DCBX Application Protocol Feature TLV (iscsi TLV) is implemented in the Extreme switches to announce the iscsi support and priority map for iscsi protocol traffic over the DCB link. Extreme VDX switches can be configured globally or on interface level for advertising iscsi Application TLV. Administrator should also edit the CEE Map to configure PFC for desired iscsi priority and to allocate required bandwidth to iscsi priority. Once a storage initiator or target receives the switch side DCBX with iscsi App TLV configured, it can do iscsi data traffic with switch dictated priority (default Priority=4). If the links between storage initiator-to-vdx switch or storage target-to-vdx switch is congested, the switch will help avoid discards by sending Pause frames specific to iscsi priority(4) with a time set on how long the pause needs to be done. On receiving a pause frame, initiator or target should pause for the required time so that traffic is not dropped since the links are congested. 38

39 IP Storage Auto-NAS While DCBX can be used for exchanging ISCSI and FCoE COS parameters, Extreme supports Auto-NAS feature to allocate fabric wide priority for NAS traffic. Extreme provides configuration to classify NAS traffic to a particular COS and bandwidth guarantee for this COS. Thus, Auto QoS for NAS creates a minimum bandwidth guarantee for Network Attached Storage traffic. Auto QoS for NAS is disabled by default; it must be enabled to allow NAS packets to have the correct service levels. CEE-Map As we saw above, DCB is a set of enhancements (DCBX, PFC, and ETS) to Ethernet to provide better guaranteed service in the LAN. DCB is marketed by different vendors under names like Data Center Ethernet or Converged Enhanced Ethernet (CEE). Extreme supports these standards and provides additional support for NAS traffic in the VCS fabric through Auto-NAS. At the edge of VCS fabric, RBridge edge ports and network storage/server NICs negotiate the iscsi COS (4 by default), PFC, and ETS using DCBX. Similarly Auto-NAS classifies NAS traffic to a COS (2 by default). But to ensure iscsi and NAS traffic has a lossless behavior throughout the fabric, the same QoS policy must be applied to the entire VCS fabric or essentially all the ISL links in the fabric. For this VCS provides simplified QoS provisioning through CEE-map configuration. The CEE-map combines priority remapping, queuing, congestion control, and scheduling into a single profile called CEE-MAP. This simplifies the user provisioning by avoiding many invalid QoS combinations. CEE-map is enabled by default on all VCS ISL ports. But CEE-map must be modified to provide proper BW reservations for iscsi and NAS traffic. CEE-map is also used to enable the PFC and ETS configuration at the edge ports for DCB. To enumerate the above-discussed technologies, Figure 16 shows the IP-storage traffic classification and BW reservation using PFC and ETS. The default CEE-map has been modified to have NAS traffic mapped to COS-2 and iscsi to COS-4. The LAN gets 40 percent, NAS and iscsi get 25 percent each, and FCoE is allocated 10 percent of the BW as per the policy. 39

FIGURE 16 PFC and ETS in Action over a DCBX-Capable Edge Port on a VDX Switch Dynamic Packet Buffering Dynamic packet buffering allows the user to set the per-port Rx and Tx packet buffers to a recommended limit. Dynamic packet buffering absorbs micro-bursts that happen on the ToR side when there are multiple initiators on the server side generating storage traffic. Extreme recommends 2 MB of buffer per port in converged networks. iSCSI Initiators and Targets A storage network edge consists of two types of equipment: initiators and targets. Initiators, such as hosts or servers, are data consumers. Targets, such as disk arrays or tape libraries, are data providers. An important consideration for the storage administrator with respect to enabling iSCSI is the choice of initiator and target devices, that is, which kind of HBA (Host Bus Adapter) to use. iSCSI initiators fall into three categories: software, dependent hardware, or independent hardware. Software Initiators Software initiators are supported by hypervisors and the Windows or Linux operating systems. They can use standard Ethernet NICs for iSCSI, with the initiator built into the OS, using the server's CPU and resources to initiate iSCSI TCP sessions and to send and receive SCSI commands. Independent Hardware iSCSI Adaptor

The independent hardware adapter is a separate iSCSI storage HBA that has its own TCP/IP stack, separate from the host (server or VM). It works as an HBA with an IP interface that is configured on the iSCSI NIC (it does not use the host's routing table). Dependent Hardware iSCSI Adaptor The dependent hardware adapter relies on the host to provide the iSCSI and SCSI command layer and uses the same routing table as the host, while the adapter provides a TCP offload engine to offload SCSI-command-to-TCP encapsulation onto the adapter. Of the three NIC types, the independent iSCSI NIC offers the best performance, followed by the NIC with TOE (TCP offload engine), and then the software iSCSI NIC. Another consideration for administrators is that software initiators (ESXi, Windows, and so on) mostly don't support DCBX, while hardware-based iSCSI initiators generally do; this needs to be confirmed against the hardware vendor data sheets. The validation section will cover how CoS classification can be achieved both for initiators at the edge ports that support DCBX and for those that don't. Similarly, not all software target or storage array vendors support DCBX, and in those scenarios manual CoS and QoS configuration of iSCSI traffic is the way to ensure bandwidth guarantees for storage traffic. IP Storage Deployment Models Until now we briefly went over some of the supported features in Extreme VDX platforms and other considerations to support IP storage traffic, namely iSCSI and NAS, in a VCS fabric. The validated design section and illustration section will cover these features and their configuration in more detail. Along with the storage features, another key focus of this paper is to outline the various IP storage network design options that are possible with VCS. Based on the storage requirements of an organization, storage networks could be: Dedicated Hybrid Shared As the names imply, a dedicated storage network means having separate networks for storage and for compute servers. Shared storage means having a common network infrastructure for both storage and server traffic. Hybrid is a middle ground to get the best of dedicated and shared by bifurcating the storage and server traffic at the ToRs, so the network infrastructures are not fully shared. The following sections go into more detail on each of these. Dedicated IP Storage A dedicated IP-storage network provides the best performance by maintaining reliable and predictable performance for storage in data centers. Figure 17 shows the dedicated storage design with VCS.

42 IP Storage Deployment Models FIGURE 17 Dedicated Storage Design With VCS Storage I/O require a network with guaranteed delivery. But with using Ethernet and IP as the transport for storage traffic, they are "best effort" delivery transports and hence not guaranteed. This means that TCP has to handle the guaranteed delivery, which can be very difficult if congestion occurs. By isolating the storage network, high-traffic non-storage events that could impact mission-critical workloads can be eliminated, providing more predictability for workload bandwidth, along with higher performance and availability. A dedicated network for storage can ensure that there is no congestion due to LAN traffic without QoS tuning and other congestion avoidance techniques. A dedicated storage network thus ensures Low Deterministic Latency, Guaranteed Delivery, Smaller Administrative Domain which is easier to troubleshoot and fewer configuration compromises. In this design storage traffic is send out of dedicated NIC's from the Servers to the storage VCS fabric and LAN traffic gets send out to Server VCS fabric. Servers can use independent hardware iscsi NIC's to achieve the best storage performance. VCS is suited for both Storage and LAN fabrics with its simplified fabric bring-up of Layer 2 networks and RBridge Routing to provide multipathing, load balancing, and efficient usage of link bandwidth in the fabric. Dedicated network for storage ensures reliable performance but cost would be one of biggest impediments. Since there would be a higher hardware foot print with need for separate dedicated Storage TOR's on every server rack and also the servers would need to have dedicated NIC's or HBA's for storage and LAN traffic. 42

43 Hybrid IP Storage For implementing Hybrid IP storage design the LAN and Storage traffic are diverted at the TOR level. Storage and LAN traffic are sent from the server on the same link to the TOR/Leaf layer and so are essentially going to be affected only by congestion at TOR. The Storage traffic would be unaffected by over-subscription towards the Spine. Having DCBX ISCSI TLV exchange, Auto-NAS and Buffer management at TOR can alleviate the congestion that happens at the TOR. Storage traffic that bifurcates from TOR gets forwarded to the Storage VCS and then to the Storage Arrays. Two different designs for Hybrid IP storage are shown below. FIGURE 18 Hybrid IP-Storage with Single Storage vlag from TOR In the above deployment model, storage VCS are connected to the TORs by a vlag. Storage and Server VCS are two separate VCS's which are interconnected through vlag and hence this design would be affected by the limitation that a vlag can span across only eight RBridges. There cannot be multiple vlag pairs between the two VCS's to avoid loops. Thus scalability of only 8 RBridges in a vlag would be a big concern with this design. But for smaller data centers, this is an ideal design by reducing the need of TORs for storage VCS fabric and dedicated HBAs for storage. These hybrid designs at the server side for the iscsi initiators can use the converged network adaptors (CNAs) or use software initiators. 43

44 IP Storage Deployment Models FIGURE 19 Hybrid IP Storage with Multiple Storage vlag from TOR The deployment model above doesn't really utilize the benefits of VCS. Every ToR is in a separate VCS fabric, spines are in one VCS, and storage is in another VCS. This design won't have the limitation of scale, but it will be configuration intensive due to the number of VCS fabrics per ToR. And you will need to have many vlag pairs configured between these VCS fabrics, thus losing the management and fabric bring-up simplicity provided by VCS. Shared IP Storage A shared deployment model is where the same network infrastructure is shared by the LAN and IP storage network in a data center. The deployment models below provide shared storage connectivity for single-pod VCS and multi-vcs server fabrics. A converged network is a tricky proposition since storage traffic requires a lossless medium. And IP storage running on top of Ethernet, which uses only the TCP mechanism to guarantee service level for storage, would pose serious issues for storage traffic. Extreme VDX platform has some nifty features to alleviate problems in this converged environment. Details on these are covered in Technology Overview and validated Design section. Here we will focus on the various design possibilities for a converged network for LAN and IP storage traffic using VCS. 44

45 FIGURE 20 Single-POD VCS with Attached IP Storage Device In the above shared storage network design, the LAN and the IP storage devices attach to different places in the network and both LAN and storage traffic are carried in a shared VCS fabric. Leaf or the TOR layer in this topology connects the servers where the storage initiators and compute are located. So the servers originate both LAN and storage traffic to the same top-of-rack switches. While connecting the servers to TORs, the network administrator could go with either: Dedicated NICs for storage and LAN network Having the storage and LAN share the same Ethernet NIC But in either case, the LAN and IP storage traffic will share the same VCS infrastructure and would need special features to support service guarantees for IP storage traffic. The storage target or the storage arrays would connect to the spine directly, and IP storage traffic would always flow from leaf to spine or vice-versa. The storage traffic could be routed/layer 3 or switched/layer 2. The DCBx and other QoS features would guarantee bandwidth for storage. In the validated design, we would go over details on how to achieve lossless behavior for IP storage traffic in this network. The design below is a different take on the same single-pod design, difference being the storage devices instead of being directly connected to the common VCS fabric is connected through a storage VCS. The storage and the shared VCS fabric are connected by a vlag spanning all the RBridges on the spine. The need of such a design would be to support greater number of Storage Devices and in scenarios where the storage devices needs to be managed separately. 45

In this design the shared VCS infrastructure would need the IP-storage-related features turned on. Since the storage VCS carries only IP-storage traffic, it does not really need IP storage features like BW reservation, but having PFC and DCBX on it ensures end-to-end lossless behavior for IP-storage traffic. FIGURE 21 Single-POD with Shared IP Storage VCS

47 FIGURE 22 Multi-VCS Using vlag with Shared IP Storage 47

FIGURE 23 Multi-VCS Using VxLAN With Shared IP Storage Shared IP-storage connectivity options for a multi-VCS fabric are shown in Figure 22 and Figure 23. Figure 22 shows the shared storage design with a multi-VCS fabric connected through vlags, and Figure 23 shows the storage design with a multi-VCS fabric connected via VxLAN over L3 links. Storage devices are connected through a storage VCS, and the storage VCS is attached at the super-spine. In this design, DCBX, PFC, and ETS need to be enabled on the server VCS fabric, the super-spine VCS fabric, and the storage VCS fabric to attain guaranteed bandwidth performance. The storage initiator and storage target could be connected to the server and storage VCS fabrics as discussed in the single-POD shared IP storage designs in Figure 20 and Figure 21.

VCS Validated Designs Hardware and Software Matrix VCS Fabric Configuration IP Storage Configuration Edge Services Configuration Deployment 1: Single-POD VCS Fabric With Attached Shared IP Storage Deployment 2: Single-POD VCS Fabric with Dedicated IP Storage VCS Deployment 3: Multi-VCS Fabric with Shared IP Storage VCS This section provides details on the validated data center fabrics with VCS and IP storage and their configuration. The following are the validated designs for data center deployments, based on 3-stage Clos and 5-stage Clos VCS fabrics with various IP storage design options: Single-POD VCS fabric with attached shared IP storage Single-POD VCS fabric with dedicated IP storage VCS Multi-VCS fabric with shared IP storage VCS For these three validated designs, most of the configuration options are common and repetitive, so these common configurations are explained in detail first, and the deployment model sub-sections provide the configuration templates for the three designs. The topology used for describing the common configuration options is "Single-POD VCS fabric with attached shared IP storage", as shown in Figure 20 on page 47. The common paradigms covered here as part of the validated design are: VCS fabric and associated configuration for server LAN traffic IP storage configuration Edge services configuration Along with this, the illustration section dwells into specifics of other features that have been validated and can be enabled in a VCS fabric or for IP storage. Hardware and Software Matrix For the validated topology, the Extreme VDX platform was used in the VCS fabric, and the Extreme MLXe platform was used as the edge router. On the server side, VMs on ESXi hosts were connected through converged network adaptors. The storage initiators configured were software initiators, dependent hardware HBAs, and independent hardware HBAs. Both iSCSI and NAS storage arrays were used as storage targets. The hardware matrix below provides more details on the actual hardware. TABLE 1 Platforms Used in this Validated Design Places in the Network Platform Software Version Leaf Nodes VDX 6740 VDX S Network OS 7.1.0b Spine Nodes VDX Q Network OS 7.1.0b Super-spine Nodes VDX Q Network OS 7.1.0b
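Before moving on to the fabric configuration, it can be useful to confirm that each node is running the Network OS release listed in the matrix above and has the expected port-upgrade licenses installed. A minimal sketch of the verification commands is shown below; the output is omitted here, and the exact fields displayed may vary by release.
v # show version
v # show license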

VCS Fabric Configuration This section details the configuration and validation steps to build a VCS fabric, involving: Fabric bringup Edge port configuration First hop redundancy Multitenancy Fabric Bringup VCS fabric bringup can be envisioned as configuring a new fabric or adding a new RBridge to an existing fabric. In either scenario, the fabric bringup can happen with minimal or no user intervention. The prerequisites for VCS fabric bringup are: VCS ID, which identifies a VCS fabric. RBridge ID, which uniquely identifies every switch/RBridge in the fabric. VCS fabric mode, which by default in the latest releases is Logical Chassis mode. Fabric bringup can be achieved by: Manually configuring the individual switch with the VCS ID and RBridge ID. Using the auto-fabric feature to provide the information. ZTP (zero touch provisioning) using DHCP-based DAD. This section covers the manual way of bringing up a node in a VCS fabric; auto-fabric is covered in the illustration section.

51 VCS Fabric Configuration Before enabling VCS the VDX switch has to be brought up in the minimum supported NOS software version. The new firmware could be installed either using USB boot or net boot. Also at this point one would also go for installing 10-gig and 40-gig port upgrade license if more 10-gig and 40-gig ports are needed in the fabric. After firmware upgrade and license installation the next steps would be for the VCS configuration. By default switches come up with VCS ID 1 and logical chassis mode enabled. With the following config a switch is configured with vcsid 100 and RBridgeId 101 with vcs mode by default being logical-chassis. v # vcs vcsid 100 rbridge-id 101 After the VCS configuration, switch will prompt for reboot and after reboot all ports of the switch will come up as Edge port with "fabric ISL" configuration. With fabric ISL enabled ports will forward BLDP PDU's out to discover the neighbors, form ISL or ISL trunks and form the VCS cluster for VCS id 100. Ports which discover neighbors in the same VCS cluster would transition to fabric ports while others remain as Edge port. Ethernet interfaces in the VCS fabric have RBridge-ID prefixed as shown below, this is different from physical interface representation in other Ethernet fabrics. A typical VCS fabric can be brought up by a single-line of command on every node and VCS technology takes care of the rest of fabric formation. Similarly adding a new node to the VCS fabric follows the same. The below illustration shows the amount of configuration needed per-switch to bring-up a loop-free, multi-path enabled, scalable, TRILL compliant 3-stage CLOS Ethernet fabric with VCS. 51

52 VCS Fabric Configuration FIGURE 24 VCS Fabric Here we will go over the common show commands to verify the fabric that was built by this single-line of VCS configuration. "Show vcs" displays information about each RBridge in the fabric. Principal RBridge as we discussed plays an important role in logical chassis mode for verifying the uniqueness of RBridge in the fabric and also for configuration distribution. After a switch becomes part of VCS, all fabric wide configuration can be performed only from the Principal RBridge. 52

The user can configure an RBridge priority to select the principal RBridge in a fabric; we recommend having one of the spine nodes as the principal RBridge. A lower number means a higher priority. v (config)# rbridge-id 201 v (config-rbridge-id-201)# logical-chassis principal-priority 5 v (config-rbridge-id-201)# Note: For switchover to a new principal RBridge, execute logical-chassis principal switchover after configuring the priority. v # logical-chassis principal switchover v # v # show vcs Config Mode : Distributed VCS Mode : Logical Chassis VCS ID : 100 VCS GUID : a853ccdf-5ff9-4a32-8b21-eecad Total Number of Nodes : 8 Rbridge-Id WWN Management IP VCS Status Fabric Status HostName :00:50:EB:1A:F1:2D:7C Online Online v :00:50:EB:1A:DE:7D: Online Online v :00:50:EB:1A:DE:46: Online Online v :00:50:EB:1A:F5:81: Online Online v :00:50:EB:1A:95:23: Online Online sw0 201 >10:00:50:EB:1A:AA:68:D4* Online Online v :00:00:27:F8:F4:95: Online Online v :00:00:27:F8:F1:3D: Online Online v v # As discussed in the Technology Overview section, for BUM traffic a multicast tree rooted at one of the RBridges is formed by the VCS fabric. To force this tree to be rooted at one of the spines for optimal performance, the following multicast priority configuration on a spine RBridge is recommended. Note that the multicast root is different from the principal RBridge.
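A minimal sketch of the multicast-root priority setting on a spine RBridge is shown below. The exact keyword form and the priority value are assumptions for illustration and should be verified against the Network OS command reference for the release in use; the intent is simply to give the spine a higher multicast priority so that the BUM tree is rooted there.
v (config)# rbridge-id 201
v (config-rbridge-id-201)# fabric route mcast priority 255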

54 VCS Fabric Configuration VCS fabric can be managed from a single IP by configuring virtual IP, which will be associated with the principal RBridge. 54

55 VCS Fabric Configuration v (config)# vcs virtual ip address /24 v (config)# v (config)# do show vcs virtual-ip Virtual IP : /24 Associated rbridge-id : 201 Oper-Status : up v (config)# ISL aka Inter Switch Links are formed as part of VCS fabric bringup using BLDP, the following show command displays the fabric ISL's on RBridge-ID 102. ISL ports does TRILL forwarding in the VCS fabric. From the src-interface and nbr-interface in the o/p below one can infer that RBridge 102 is attached to four other RBridges namely 101, 201, 202 and 203. Between two VCS RBridges, if there are multiple ISLs and the ISLs are part of the same VDX switch port-groups, then link aggregation happens automatically across the links to form ISL trunk. The link-aggregation is Extreme innovation which brings per-packet load balancing across the ISL trunks. v # show fabric trunk Rbridge-id: 102 Trunk Group Src Index Source Interface Nbr Index Nbr Interface Nbr-WWN Te 102/0/23 86 Te 101/0/23 10:00:50:EB:1A:F1:2D:7C 1 87 Te 102/0/24 87 Te 101/0/24 10:00:50:EB:1A:F1:2D:7C Show fabric route topology below, displays the RBridge routes from the source switch to destination switches. FSPF builds the route topology which will be used for VCS hardware forwarding. v # show fabric route topology Total Path Count: 9 Src Dst Out Out ECMP Nbr Nbr RB-ID RB-ID Index Interface Grp Hops Cost Index Interface BW Trunk Te 101/0/ Te 102/0/23 20G Yes Fo 101/0/ Fo 202/0/97 40G Yes Fo 101/0/ Fo 203/0/97 40G Yes Fo 101/0/ Fo 201/0/1 40G Yes Fo 101/0/ Fo 202/0/97 40G Yes Fo 101/0/ Fo 203/0/97 40G Yes Fo 101/0/ Fo 201/0/1 40G Yes 55

56 VCS Fabric Configuration Fo 101/0/ Fo 202/0/97 40G Yes Fo 101/0/ Fo 203/0/97 40G Yes v # Edge Ports In VCS edge ports connect non-vcs devices or different VCS-ID switches to the fabric. Edge ports typically would have switchport configuration when connected to server racks or L3/IP configuration when connected to Edge Service nodes. Extreme switches implement industry standard multi-chassis trunking through vlag. Regular LAG's on Extreme switches would automatically transition to a vlag's based on whether a particular port-channel control group is receiving same LACP PDU's on multiple-nodes of a VCS fabric. Static vlags are formed when the same port-channel control-group is used on multiple RBridges in a VCS fabric without LACP. Extreme vlag solution can span across eight RBridges providing better redundancy compared to typical two node redundancy. Though ISL links between vlag RBridge pairs is not needed to form vlag, it's recommended to have 2 ISL links between vlag pair RBridges for back-up path and higher redundancy. In a typical data center, two ToRs would service a rack of compute or storage devices. And these devices would form a vlag to the dual ToRs. Before configuring the vlags, enable the VLANs needed in the fabric; all this configuration is done from the principal RBridge. interface Vlan 101 Configures a vlag on RBridge 101 and 102 with LACP. v # show run int po 1 interface Port-channel 1 switchport switchport mode trunk switchport trunk allowed vlan add 101 no shutdown v # v # show run int te 101/0/1 interface TenGigabitEthernet 101/0/1 channel-group 1 mode active type standard no fabric isl enable no fabric trunk enable no shutdown v # v # show run int te 102/0/1 interface TenGigabitEthernet 102/0/1 channel-group 1 mode active type standard no fabric isl enable no fabric trunk enable no shutdown v # A typical vlag formed across a pair of RBridges is shown below. 56

57 VCS Fabric Configuration FIGURE 25 vlag vlag show commands 57

58 VCS Fabric Configuration show lacp counter 1 Traffic statistics Port LACPDUs Marker Sent Recv Sent Recv Aggregator Po 1 Te 101/0/ Aggregator Po 1 Te 102/0/ v # v # show port-channel 1 LACP Aggregator: Po 1 (vlag) Aggregator type: Standard Ignore-split is enabled Member rbridges: rbridge-id: 101 (1) rbridge-id: 102 (1) Admin Key: Oper Key 0001 Partner System ID - 0x8000, e-88-c6 Partner Oper Key 0001 Member ports on rbridge-id 101: Link: Te 101/0/1 (0x650C002000) sync: 1 Member ports on rbridge-id 102: Link: Te 102/0/1 (0x660C002000) sync: 1 * Pckt err Sent Recv v # vlag Configuration Defaults and Other Considerations The following highlighted config shows the default config that is enabled on Port-channel 1. vlag ignore-split The vlag ignore-split command is enabled by default and is recommended to avoid traffic loss when the vlag member RBridge is reloaded. As discussed earlier, the end devices behind the vlag perceive two cluster switches as one switch because they have the same system ID advertised in LACP. Under rare conditions, when all ISLs between the two cluster switches are broken and both the cluster switches continue to advertise the same system ID to their LACP partner, a "segmented fabric" or "split-brain" condition exists, where the end host or edge switch might not detect this segmentation and could continue to treat both the vlag switches as one switch. To recover gracefully from the split-brain scenario. When all ISLs between the VDX cluster switches go down, the switch with the lower RBridge ID uses LACP to inform the edge switch partner that it has segmented out of the port-channel. It does this by changing its advertised system ID. When the edge switch learns a different system ID on one of its members, it removes this member from that portchannel and continues to function with only one vlag member the switch with the higher RBridge ID. 58

59 VCS Fabric Configuration The graceful handling of the split-brain scenario will cause a longer duration of packet loss in another scenario when the vlag member with higher RBridgeID is reloaded. Though this is not a real split-brain scenario, the switch with the lower RBridge ID may not be able to differentiate and thus would inform the partner about a changed system ID. In such a case, LACP will renegotiate and form the portchannel, which could flap the port-channel, impacting traffic momentarily. The same effect could occur when the switch boots up and rejoins the fabric. The vlag ignore-split option prevents the switch with lower RBridge ID from changing its system ID, so both switches will continue to advertise the same system ID. This action prevents the partner edge switch from detecting a change when one of the member switches is reloaded and the traffic is handled gracefully. The chance of split-brain occurring in VCS is low due to the higher redundancy in a fabric built with ISLs. But if a vlag pair does not have many redundant paths to reach each other, the vlag ignore-split command should be disabled. To use the vlag ignore-split option, redundancy should be built around ISLs to prevent a situation in which all ISLs are broken at the same time. Extreme recommends using multiple ISLs and routing those ISLs through different physical paths or conduits to eliminate the possibility of accidental damage to all links at the same time. spanning-tree shutdown The spanning-tree shutdown option is on by default and is recommended at the edge ports since by default xstp is tunneled through the VCS fabric and could lead to inconsistent states in the STP network unless properly configured with VCS fabric in the middle. Since STP is not enabled, we recommend using the "Edge Loop Detection" feature for loop detection on edge ports if the customer environment has many classical Ethernet access switches connected to the edge ports. Edge Loop Detection details can be found in the illustration section. lacp default-up This configuration is recommended only on edge ports connected to servers in PXE-boot environments or where the server expects the switch to send LACP PDUs first. This option forces the port-channel to be up even if there are no LACP PDUs received on the interface port. interface TenGigabitEthernet 101/0/1 channel-group 1 mode active type standard no fabric isl enable no fabric trunk enable lacp timeout long lacp default-up no shutdown Static vlag The following configuration is to enable a static vlag in environments where LACP is not supported, like servers that use a standard VMware vswitch where LACP is not supported. Only the distributed vswitch supports LACP. interface Port-channel 2 no vlag ignore-split switchport switchport mode trunk switchport trunk allowed vlan add 201 switchport trunk tag native-vlan spanning-tree shutdown no shutdown interface TenGigabitEthernet 101/0/2 channel-group 2 mode on type standard fabric isl enable fabric trunk enable 59

60 VCS Fabric Configuration no shutdown interface TenGigabitEthernet 102/0/2 channel-group 2 mode on type standard fabric isl enable fabric trunk enable no shutdown v # show port-channel 2 Static Aggregator: Po 2 (vlag) Aggregator type: Standard Member rbridges: rbridge-id: 101 (1) rbridge-id: 102 (1) Member ports on rbridge-id 101 Te 101/0/2 Member ports on rbridge-id 102 Te 102/0/2 * v # L3 Edge Port An edge port could be an L2 port with switchport configuration as seen in the previous sections or be a L3 port with IP configuration. Here, we look at an L3 edge-port configuration on the spine which connects to the edge router. The configuration below shows an L3 port on the spine connected to the edge port. interface TenGigabitEthernet 201/0/33 no fabric isl enable no fabric trunk enable ip address /24 no shutdown First Hop Redundancy As covered in the "Technology Overview" section, FHRP provides protection for the default gateway of a subnet. VCS fabric provides three first hop redundancy options: VRRP VRRP-E FVG VRRP-E, which is based on VRRP, is the most popular of these three and is supported only across Extreme switches. VRRP-E configs are provided below, while FVG configuration is provided in the illustration section. Details on the protocol operations can be found in the "Technology Overview" section. In the 3-stage Clos fabric, the VRRP-E config is done on the spine RBridges. VRRP-E configurations are done on the VE interface, which is comparable to SVI (switched VLAN interface) in Cisco parlance. The leaf pairs would have the VRRP-E virtual MAC pointing to the RBridges as a virtual-router port. Internally in the VCS forwarding plane, the virtual router points to all RBridges that are configured for VRRP-E. Thus the traffic from the servers would get load-balanced from the leaf to the spine RBridges with VRRP-E. As depicted in Figure 26, the VRRP-E and VE configuration is done on the spine. 60

FIGURE 26 VRRP-E in a 3-Stage Clos VCS Fabric VRRP-E configuration is done under the RBridge mode of the spines; the following config enables the VRRP-E feature. rbridge-id 201 protocol vrrp-extended rbridge-id 202 protocol vrrp-extended Next, the IPv4 and IPv6 configuration is enabled for VE 101 under the spine RBridges 201 and 202. VE interface 101 on each RBridge needs a unique IPv4 and IPv6 address. The VRRP-E group number configured is used to calculate the virtual MAC for the group, so the group number must be the same across the RBridges. The virtual IP is the gateway used by the servers for the subnet under VLAN 101. Short-path forwarding enables active-active forwarding on the VRRP-E GWs.
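A minimal sketch of the VE and VRRP-E group configuration on one of the spines is shown below; the IP addresses are assumptions for illustration, and an equivalent block (with its own unique VE address) would be applied on RBridge 202. A similar IPv6 address and IPv6 VRRP-E group would be added for IPv6 gateway redundancy.
rbridge-id 201
 interface Ve 101
  ip address 10.1.101.2/24
  vrrp-extended-group 1
   virtual-ip 10.1.101.1
   short-path-forwarding
  no shutdown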

Tracking VRRP-E provides route and port tracking, allowing the virtual router to lower its priority if the exit-path interface goes down, so that another virtual router in the same VRRP (or VRRP-E) group can take over. In this design, we use backup routing between the spine RBridges enabled for VRRP-E to handle uplink failure instead of tracking. With short-path forwarding, tracking is not beneficial; backup routing provides better path redundancy and link utilization compared to tracking. This is covered in the "Edge Services Configuration" section. VRRP-E show Commands The VRRP-E session master is responsible for responding to ARP requests, but VCS has an internal mechanism for the other VRRP-E peers to sync up the ARP entries. It is recommended to enable SPF, or short-path forwarding, for active-active forwarding behavior and better load balancing of traffic across the spines.

63 VCS Fabric Configuration v # show vrrp interface ve 101 ============Rbridge-id:201============ Total number of VRRP session(s) : 1 VRID 1 Interface: Ve 101; Ifindex: Mode: VRRPE Admin Status: Enabled Description : Address family: IPv4 Version: 2 Authentication type: No Authentication State: Backup Session Master IP Address: Virtual IP(s): Configured Priority: unset (default: 100); Current Priority: 100 Advertisement interval: 1 sec (default: 1 sec) Preempt mode: DISABLE (default: DISABLED) Advertise-backup: DISABLE (default: DISABLED) Backup Advertisement interval: 60 sec (default: 60 sec) Short-path-forwarding: Enabled Revert Priority: unset; SPF reverted: No Hold time: 0 sec (default: 0 sec) Trackport: Port(s) Priority Port Status ======= ======== =========== Statistics: Advertisements: Rx: , Tx: Gratuitous ARP: Tx: v # The MAC address table from RBridge 201 and 102 is shown below. The highlighted MACs point to the virtual MAC of the VRRP-E session. In RBridge-201 the MAC is shown as a local system MAC. While in RBridge-102, it's a remote MAC pointing to a virtual router (VR1), which internally maps to RBridge 201 and 202. Thus any traffic received on the edge port of RBridge-102 for VRRP-E virtual-mac gets load-balanced to RBridge 201 and 202. And on RBridge 201 and 202 since SPF is enabled the traffic gets routed. 63

Multitenancy A VCS fabric offers three different multitenancy options in the data center. The simplest is Layer 2 multitenancy with standard 802.1Q VLANs; the VCS fabric provides a loop-free, multipath, and non-blocking Layer 2 network. The second option is to use VRF, or virtual routing and forwarding, functionality. VCS has full routing protocol support for VRFs across OSPF, BGP, and static routes. The third option is to use the Virtual Fabric feature, which can scale the VCS fabric up to 8K VLANs. Various configuration options for this feature are discussed in the illustration section. The virtual-fabric extension feature using VxLAN can seamlessly connect multiple data center sites through an L3 cloud, providing DCI connectivity. The VF-extension feature can extend both 802.1Q VLANs and Virtual Fabric VLANs, thus providing multitenancy across multiple DC sites. This feature is also discussed in the technology and illustration sections. IP Storage Configuration This section covers validated configurations for iSCSI, NAS, buffer management, multipathing, and storage initiator/target to support a shared network for LAN and storage. Fabric-wide configuration for IP storage is particularly important in a converged network where storage and LAN traffic share the same infrastructure. The Extreme features supporting IP storage ensure lossless storage and bandwidth guarantees for storage traffic. The following configuration is done to ensure guaranteed service levels for storage: At the port level, dynamic buffers can be tuned up to 8 MB of shared buffer per port. DCBX exchange is enabled for the iSCSI TLV exchange to announce the priority, PFC, and ETS configuration. A fabric-wide CoS value is set for NAS traffic using the Auto-NAS feature. PFC and ETS settings are enabled on interfaces using the CEE-map feature and get exchanged with storage initiators and targets using DCBX. The CEE map applies to the entire VCS fabric, so any changes made for iSCSI and NAS get reflected on all links in the VCS fabric. Dynamic Shared Buffer Dynamic shared buffering can be used to handle bursty traffic. Up to 8 MB of shared buffer across a port group can be used for handling bursts. The recommended configuration is 2 MB per queue, which is enabled by default in the latest NOS software releases. This can be changed according to the network requirements; one thing to note is that too much buffering increases latency, so it should be used with caution. Below is the configuration done for RBridge-ID 101; this config is needed only on releases below NOS 7.x and is enabled by default from the NOS 7.0 release. rbridge-id 101 qos tx-queue limit 1024 qos rcv-queue limit 2048 The CLI output of the qos queues shows that the max-buffer limit for each queue has gone up to the configured value.

65 IP Storage Configuration v # show qos rcv-queue interface tengigabitethernet 101/0/32 Interface TenGigabitEthernet 101/0/32 Receive Queues In-use 0 bytes, Max buffer bytes 0 packets dropped In-use Max TC Bytes Bytes v # v # show qos tx-queue interface tengigabitethernet 101/0/32 Interface TenGigabitEthernet 101/0/32 Transmit Queues In-use 0 bytes, Max buffer bytes 0 packets dropped In-use Max TC Bytes Bytes v # This shared buffer is set for all traffic type but the behavior changes when PFC is enabled on some queues like iscsi and FCOE for lossless traffic. For lossless traffic, since we cannot ensure that buffers in the shared pool are always available, no borrowing is enabled. Instead, ASIC driver would carve out buffers and reserve them for any lossless ingress queue. Below is the output when PFC is enabled. 65

DCBX Configuration for iSCSI

The DCB capability exchange must be enabled on the edge ports connecting to the server NICs and on the edge ports connecting to the storage arrays. The corresponding NICs on the storage and server side must also support DCB and DCBX. LLDP and DCBX are enabled by default on the VDX platform. DCBX uses the LLDP iSCSI application TLV to exchange capability information, so the DCBX configuration is done under the LLDP protocol configuration. The LLDP configuration is fabric-wide across the VCS.

67 IP Storage Configuration By default "iscsi-app-tlv" exchange is not enabled, the following config enables it. v # show running-config protocol lldp protocol lldp advertise dcbx-iscsi-app-tlv advertise dcbx-tlv v # With iscsi TLV, DCBx exchange the COS priority to be used for iscsi traffic and the PFC/ETS setting on switch. By default COS value is 4, the value can be changed using the "iscsi-priority" command under "protocol lldp". The highlighted section in the following output shows the configured LLDP parameters for iscsi by DCBX. v # show lldp LLDP Global Information system-name: v system-description: Extreme-VDX-VCS 100 description: State: Enabled Mode: Receive/Transmit Advertise transmitted: 30 seconds Hold time for advertise: 120 seconds Tx Delay Timer: Transmit TLVs: 1 seconds Chassis ID TTL IEEE DCBx DCBx FCoE Logical Link DCBx FCoE Priority Values: 3 DCBx iscsi Priority Values: 4 v # Port ID System Name DCBx FCoE App DCBx iscsi App Following output would show the lldp neighbor exchange. v # show lldp neighbors int te 101/0/32 Local Intf Dead Interval Remaining Life Remote Intf Chassis ID Tx Rx System Name Te 101/0/ v # e.1e2f.a e.1e2f.a
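If a CoS value other than the default of 4 is required for iSCSI, the "iscsi-priority" command mentioned above sets it under "protocol lldp". The sketch below simply combines the TLV advertisement shown above with that command, restating the default value of 4 for illustration; confirm the value against your design before changing it.

protocol lldp
 advertise dcbx-tlv
 advertise dcbx-iscsi-app-tlv
 iscsi-priority 4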

Priority Flow Control and ETS

By default, PFC is not enabled on the switch interfaces and is not exchanged with neighbors. The following configuration enables PFC and bandwidth reservation for NAS and iSCSI traffic, which is then announced as ETS to the neighboring device (initiator or target), in this case a server converged network adapter. The iSCSI application TLV is used for the exchange in LLDP packets. PFC and bandwidth reservation are enabled through the "cee-map" configuration. Here the CEE map defaults are changed: because FCoE traffic is not expected in this network, its bandwidth reservation is lowered and the freed bandwidth is shared between NAS and iSCSI traffic. The CEE map is sketched below and shown in full in the CEE-MAP Configuration section.
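The CEE map used in this design (repeated from the CEE-MAP Configuration section) reserves 10% for the FCoE priority group with PFC on, 40% and 25% for lossy LAN/NAS groups, and 25% for the iSCSI group with PFC on. The priority-table line that maps CoS values to these priority groups is part of the same policy, but its exact values are not reproduced here and should be taken from the running configuration.

cee-map default
 priority-group-table 1 weight 10 pfc on
 priority-group-table 2 weight 40 pfc off
 priority-group-table 3 weight 25 pfc off
 priority-group-table 4 weight 25 pfc on
 priority-table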

69 IP Storage Configuration Verifying the PFC and QoS settings after applying cee-map. v # show cee maps CEE Map 'default' Precedence: 1 Remap Fabric-Priority to Priority 0 Remap Lossless-Priority to Priority 0 Priority Group Table 1: Weight 10, PFC Enabled, BW% 10 2: Weight 40, PFC Disabled, BW% 40 3: Weight 25, PFC Disabled, BW% 25 4: Weight 25, PFC Enabled, BW% : PFC Disabled 15.1 : PFC Disabled 15.2 : PFC Disabled 15.3 : PFC Disabled 15.4 : PFC Disabled 15.5 : PFC Disabled 15.6 : PFC Disabled 15.7 : PFC Disabled Priority Table CoS: PGID: Enabled on the following interfaces: Te 101/0/32 v # 69


Auto-NAS

An iSCSI priority of 4 is announced through DCBX, but NAS does not use DCBX. For NAS, Extreme provides the "nas auto-qos" and "nas server-ip" configuration. Enabling auto-QoS for NAS on the VCS auto-configures the CEE map with a bandwidth reservation for CoS-2 traffic, and the "nas server-ip" command classifies all NAS traffic to and from the NAS server as CoS-2. NAS traffic therefore gets a bandwidth reservation in the VCS fabric to ensure reliability. The Auto-NAS configuration and the corresponding RBridge-side configuration for NAS device connectivity are sketched below. A vLAG with LACP is enabled for redundancy when connecting to the NAS storage device. Many storage vendors provide link-aggregation options other than LACP; because these are vendor specific, they are not covered here.
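A minimal sketch of the Auto-NAS and edge-port configuration described above, assuming the vLAG toward the NAS array is Port-channel 9 on RBridges 202 and 203 as in the verification output that follows. The NAS server address is a placeholder, and whether "nas auto-qos" and "nas server-ip" are entered globally or under rbridge-id depends on the NOS release; check the command reference before applying.

nas auto-qos
nas server-ip 10.10.30.10/32 vrf default-vrf
! 10.10.30.10/32 is a placeholder for the real NAS server address
!
interface Port-channel 9
 vlag ignore-split
 switchport
 switchport mode access
 switchport access vlan 3
 spanning-tree shutdown
 no shutdown
!
interface TenGigabitEthernet 202/0/18
 channel-group 9 mode active type standard
 no shutdown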


73 IP Storage Configuration Verifying NAS traffic classification v (config)# do show port-channel 9 LACP Aggregator: Po 9 (vlag) Aggregator type: Standard Ignore-split is enabled Member rbridges: rbridge-id: 202 (1) rbridge-id: 203 (1) Admin Key: Oper Key 0009 Partner System ID - 0x0014, c9-2e Partner Oper Key 4135 Member ports on rbridge-id 202: Link: Te 202/0/18 (0xCA0C024000) sync: 1 * Member ports on rbridge-id 203: Link: Te 203/0/18 (0xCB0C024000) sync: 1 v (config)# v (config)# do show port-channel 9 LACP Aggregator: Po 9 (vlag) Aggregator type: Standard Ignore-split is enabled Member rbridges: rbridge-id: 202 (1) rbridge-id: 203 (1) Admin Key: Oper Key 0009 Partner System ID - 0x0014, c9-2e Partner Oper Key 4135 Member ports on rbridge-id 202: Link: Te 202/0/18 (0xCA0C024000) sync: 1 * Member ports on rbridge-id 203: Link: Te 203/0/18 (0xCB0C024000) sync: 1 v (config)# v (config)# do show qos int po 9 Interface TenGigabitEthernet 202/0/18 Provisioning mode cee CEE Map default Default CoS 0 Interface COS trust cos In-CoS: Out-CoS/TrafficClass: 0/6 1/6 2/5 0/6 0/6 5/6 6/6 0/6 Interface DSCP trust untrusted DSCP-to-DSCP Mutation map 'default' (dscp= d1d2) d1 : d : : : : : : : Per-Traffic Class Tail Drop Threshold (bytes) TC: Threshold: Flow control mode PFC CoS3 TX on, RX on CoS4 TX on, RX on Traffic Class Scheduler configured for 1 Strict Priority queues 73

74 IP Storage Configuration TrafficClass: DWRRWeight: Interface TenGigabitEthernet 203/0/18 Provisioning mode cee CEE Map default Default CoS 0 Interface COS trust cos In-CoS: Out-CoS/TrafficClass: 0/6 1/6 2/5 0/6 0/6 5/6 6/6 0/6 Interface DSCP trust untrusted DSCP-to-DSCP Mutation map 'default' (dscp= d1d2) d1 : d : : : : : : : Per-Traffic Class Tail Drop Threshold (bytes) TC: Threshold: Flow control mode PFC CoS3 TX on, RX on CoS4 TX on, RX on Traffic Class Scheduler configured for 1 Strict Priority queues TrafficClass: DWRRWeight: v (config)# v # show system internal nas Rbridge Auto-NAS : Enabled Cos : 2 Dscp : Not set Traffic Class : 5 Server ip /32 vrf default-vrf v # v (config)# v (config)# do show nas statistics all Rbridge Server ip /32 vrf default-vrf matches 0 packets Rbridge Server ip /32 vrf default-vrf matches 0 packets Rbridge Server ip /32 vrf default-vrf matches 0 packets Rbridge Server ip /32 vrf default-vrf 74

matches 1632 packets
Rbridge Server ip /32 vrf default-vrf
matches 0 packets
Rbridge Server ip /32 vrf default-vrf
matches 0 packets
Rbridge Server ip /32 vrf default-vrf
matches 0 packets
Rbridge Server ip /32 vrf default-vrf
matches 1634 packets
v (config)#

CEE-MAP Configuration

Extreme allows only one CEE map policy, named "default", and it can be modified to match the QoS requirements of the VCS fabric. The CEE map is enabled by default on all ISL links and, as shown earlier, is used to enable PFC for iSCSI. The CEE map used in this validated design is included below. Note that by default "nas auto-qos" enables a 20% bandwidth reservation for CoS-2 traffic, whereas this policy allocates 25% to CoS-2.

cee-map default
 priority-group-table 1 weight 10 pfc on
 priority-group-table 2 weight 40 pfc off
 priority-group-table 3 weight 25 pfc off
 priority-group-table 4 weight 25 pfc on
 priority-table

Routed vs Layer 2 Switching for Storage Traffic

In this validated design the server side (iSCSI initiators and NAS clients) is connected to the Leaf RBridges, and the iSCSI targets or NAS storage devices are connected directly to the Spine RBridges. Storage traffic from the servers to the storage arrays can either be switched (Layer 2 forwarded) or routed (Layer 3 forwarded).

Switched Storage Traffic

Storage endpoints on the same VLAN are the straightforward case, since both endpoints are in the same subnet. Based on the MAC address table, VCS Layer 2 forwarding ensures that storage traffic is TRILL unicast forwarded between the correct RBridge edge ports. A minimal sketch of the switched case is shown below.
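This sketch assumes the leaf edge port toward the initiator and the spine edge port toward the storage array are both placed in the same storage VLAN; VLAN 11 and the port numbers are placeholders, while the switchport and cee keywords follow the edge-port configurations used elsewhere in this design.

interface Vlan 11
!
! Leaf edge port toward the iSCSI initiator
interface TenGigabitEthernet 102/0/32
 cee default
 switchport
 switchport mode access
 switchport access vlan 11
 spanning-tree shutdown
 no shutdown
!
! Spine edge port toward the iSCSI target
interface TenGigabitEthernet 203/0/2
 cee default
 switchport
 switchport mode access
 switchport access vlan 11
 spanning-tree shutdown
 no shutdown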

76 IP Storage Configuration Routed Storage Traffic When storage end points are in two different vlans, routing is needed. For routed scenario its recommend that the Edge port connecting to iscsi or NAS Storage Array be configured as a switchport access. And the corresponding VE interface for access vlan be enabled with VRRP-E config on all Spine RBridges. By configuring VE interface & VRRP-E on all Spines the need for a routing protocol is avoided for routed storage traffic. For routed scenario always ensure that the gateways are configured on the servers and Storage array correctly and ensure these gateways are configured on the VE-interface with VRRP-E or FHRP configuration on Spine RBridges. Ensure connectivity through a simple ping test from the Storage Array or VDX switches. Configuration for routed scenario is shown below. v (config)# do show run int po 9 interface Port-channel 9 vlag ignore-split switchport switchport mode access switchport access vlan 3 spanning-tree shutdown no shutdown v (config)# do show run rb int ve 3 rbridge-id 201 interface Ve 3 no ip proxy-arp ip address /24 vrrp-extended-group 1 virtual-ip advertisement-interval 1 enable no preempt-mode short-path-forwarding no shutdown rbridge-id 202 interface Ve 3 no ip proxy-arp ip address /24 vrrp-extended-group 1 virtual-ip advertisement-interval 1 enable no preempt-mode short-path-forwarding no shutdown rbridge-id 203 interface Ve 3 no ip proxy-arp ip address /24 vrrp-extended-group 1 virtual-ip enable no preempt-mode short-path-forwarding no shutdown v (config)# 76

77 IP Storage Configuration Jumbo MTU For optimal storage performance ensure jumbo mtu is enabled on the VCS fabric, server side and on storage arrays. Storage and server side configurations are vendor specific and generally the mtu is set to For VCS default-mtu is 2500 and for best performance of storage & server traffic it's recommended to use jumbo mtu. When configuring jumbo mtu ensure that all participating storage Targets and Initiators can support Jumbo mtu and the configuration needs to be applied on these devices correctly in addition to configuring jumbo mtu on the VCS fabric. Jumbo-mtu configuration on VCS is highly simplified, in the latest NOS releases (7.0 and up) configuring jumbo-mtu is a single-line config on the VCS fabric. There is no need to configure MTU on every RBridge and interfaces, it's a global configuration applied from principal RBridge. mtu 9216 ipv6 mtu 9018 ip mtu 9018 Verifying the applied mtu configuration on a VE interface and fabric port. v # show run rb int ve 101 rbridge-id 201 interface Ve 101 ipv6 address 10:0:65::201/96 ipv6 vrrp-extended-group 2 virtual-ip 10:0:65::1 enable no preempt-mode short-path-forwarding no ip proxy-arp ip address /24 vrrp-extended-group 1 virtual-ip advertisement-interval 1 enable no preempt-mode short-path-forwarding no shutdown v # show ipv6 interface ve 101 Ve 101 is up protocol is up IPv6 Address: 10:0:65::201/96 Primary Confirmed IPv6 Address: 10:0:65::1/128 Primary Confirmed IPv6 Address: fe80::52eb:1aff:feaa:68d7/128 Link Local Confirmed IPv6 multicast groups locally joined: ff02::1 ff02::2 ff02::1:ff00:1 ff02::1:ff00:201 ff02::1:ffaa:68d7 IPv6 MTU: 9018 Vrf : default-vrf v # v # show ip interface ve 101 Backed vlaues 0 Ve 101 is up protocol is up Primary Internet Address is /24 broadcast is IP MTU is 9018 Proxy Arp is not Enabled IP fast switching is enabled Vrf : default-vrf v # 77

v # show run int fo 201/0/1
interface FortyGigabitEthernet 201/0/1
 fabric isl enable
 fabric trunk enable
 no shutdown

v # show int fo 201/0/1
FortyGigabitEthernet 201/0/1 is up, line protocol is up (connected)
Hardware is Ethernet, address is 50eb.1aaa.68db
 Current address is 50eb.1aaa.68db
Pluggable media present
Interface index (ifindex) is
MTU 9216 bytes
IP MTU 9018 bytes
LineSpeed Actual : Mbit
LineSpeed Configured : Auto, Duplex: Full

Storage Initiator/Target Configuration

The main networking and storage considerations when converging storage and LAN traffic are:

Storage connectivity from the servers
Multipathing options for storage traffic
Support for DCBX

Storage Connectivity

Storage connectivity from the servers to the VCS fabric can be provided in two ways. The first is dedicated NICs or HBAs carrying storage traffic to the ToRs, so that the ToRs receive storage and LAN traffic on different edge ports. The second is a single physical port carrying both LAN and storage traffic. In the second case the server NIC can be one of two kinds: an Ethernet NIC that does not support iSCSI offload and relies on the operating system to handle iSCSI, or a converged network adapter that can handle both LAN and iSCSI traffic. Connectivity from the storage arrays to the Spine can be provisioned through a LAG or another multi-link option provided by the storage vendor.

Multi-Pathing for Storage Traffic

Storage traffic can have multipathing enabled when attaching the storage target and the storage initiator to the VCS fabric.

Connecting Storage Arrays

When connecting the storage arrays, it is recommended to provide redundancy through multihomed network connectivity to the VCS fabric. Use vLAGs (LACP or static) on the VDX side along with the link-aggregation technique recommended by the storage array vendor. Link aggregation provides both redundancy and bandwidth aggregation. Storage vendors offer different link-aggregation options, including LACP, static EtherChannel, and other proprietary methods. Configure the vLAG with DCBX and "cee default":

interface Port-channel 9
 cee default
 vlag ignore-split
 switchport
 switchport mode access
 switchport access vlan 3
 spanning-tree shutdown
 no shutdown

Connectivity from Server to ToR for Storage

When connecting servers to the ToR RBridges in the VCS fabric, there are two options for network redundancy:

The iSCSI host uplinks are configured as an LACP teamed/bonded interface forming a vLAG across the VCS fabric. If there is only one flow to the iSCSI target, a vLAG pair provides only link redundancy: link aggregation does not improve the throughput of a single I/O flow, and a single flow always traverses only one path.

The iSCSI host uplinks are configured as individual links, with the host/hypervisor OS multipath software managing failover and load balancing. Because the host/hypervisor manages the multipathing, this option can provide better bandwidth utilization and true multipathing for storage I/O traffic, but it often does not support DCBX and iSCSI CoS marking.

In this validated design, vLAG/NIC teaming was used for multipathing.

DCBX Support

When using iSCSI for storage, there are two possibilities from a network perspective, and careful consideration must be given to whether the storage devices support DCBX for lossless iSCSI behavior:

Storage targets/initiators supporting DCBX
Storage targets/initiators that do not support DCBX

The configurations for both scenarios are covered in this section and apply only to iSCSI traffic. NAS does not have NIC-level support comparable to iSCSI and therefore depends on the Auto-NAS feature to provide guaranteed bandwidth for NAS traffic.

Storage Target/Initiators Supporting DCBX

Hardware-dependent iSCSI initiators usually support DCBX. In such scenarios the DCBX and CEE-map configuration on the edge ports ensures that iSCSI traffic is marked with the proper CoS, and PFC on the iSCSI CoS ensures lossless behavior. The earlier IP Storage Configuration sections cover how to enable DCBX, PFC, and ETS; the configuration is as follows.

cee-map default
 priority-group-table 1 weight 10 pfc on
 priority-group-table 2 weight 40 pfc off
 priority-group-table 3 weight 25 pfc off
 priority-group-table 4 weight 25 pfc on
 priority-table

80 IP Storage Configuration interface TenGigabitEthernet 102/0/32 cee default switchport switchport mode trunk switchport trunk allowed vlan all switchport trunk tag native-vlan spanning-tree shutdown no fabric isl enable no fabric trunk enable no shutdown interface port-channel 1 cee default switchport switchport mode trunk switchport trunk allowed vlan all switchport trunk tag native-vlan spanning-tree shutdown no fabric isl enable no fabric trunk enable no shutdown The details on this configuration is already covered in previous section, we won't go into details of the configuration here. The following configuration was used with a Converged Network adaptor (CNA) which supported DCBX. The iscsi traffic received was tagged. And the edge port 102/0/32 received both LAN and iscsi traffic. While using CNA's one would have to make the appropriate DCBX configuration on the NIC side if the Server management tools don't support it. In this case VLAN configuration for the Initiator was configured on the CNA to enable VLAN. When port is an access port the iscsi priority is exchanged using tagged-priority or VLAN 0. For PFC to work for iscsi COS priority tagging has to be enabled on switch-side. With priority-tag enabled switch will sent untagged packets with VLAN 0 and proper COS priority settings. Included is the edge-port config on switch side. interface TenGigabitEthernet 102/0/32 cee default priority-tag switchport switchport mode access switchport access vlan 105 spanning-tree shutdown no fabric isl enable no fabric trunk enable no shutdown Storage Target/Initiators not supporting DCBX When Storage devices do not support DCBX, COS for iscsi traffic must be marked by configuration at the Edge port. With COS marking and BW reservation for iscsi traffic COS through cee-map, QoS bandwidth guarantees can be ensured for IP storage traffic. Thus a VCS network could have mixed IP storage environment with devices at edge ports where DCBX is supported and COS/PFC negotiation can be achieved and devices where DCBX is not supported on the initiator/target devices but bandwidth guarantee is ensured in the fabric through COS & QoS configuration at the edge-port. The configuration discussed here is applicable for all initiators/targets including hardware dependent ones which don't support DCBX. In VDX COS marking can be achieved using "qos cos" configuration on interface. Using the "qos cos 4" config untagged iscsi traffic is marked with a cos value of 4. And cee-map ensures that COS-4 has a guaranteed bandwidth reservation in the fabric. COS Config on a Trunk Port 80

The following configuration was used with an initiator that does not support DCBX for iSCSI. The "qos cos" command acts only on untagged packets; for tagged packets the VDX honors the CoS carried in the frame and the command has no effect. Here the storage traffic is expected untagged and carried on VLAN 103, while LAN traffic arrives tagged on VLAN 104. The iSCSI initiator was configured untagged, and on the switch side VLAN 103 is classified as the native VLAN on the port. With the "qos cos 4" configuration, all untagged traffic is marked with the iSCSI CoS of 4. A sketch of this edge-port configuration follows.
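This sketch reconstructs the trunk-port configuration from the description above, assuming VLAN 103 as the untagged/native storage VLAN and VLAN 104 as the tagged LAN VLAN on edge port 102/0/32; the exact native-VLAN keyword should be confirmed against the running configuration for the NOS release in use.

interface TenGigabitEthernet 102/0/32
 switchport
 switchport mode trunk
 switchport trunk native-vlan 103
 switchport trunk allowed vlan add 104
 qos cos 4
 spanning-tree shutdown
 no fabric isl enable
 no fabric trunk enable
 no shutdown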

82 IP Storage Configuration COS Config on an Access Port Following config was applied on a Spine RBridge Edge port connected to an iscsi storage array which didn't support DCBX. The iscsi Target was on an L3 port, so the port was configured as access for vlan 11 and "qos cos 4" was used to mark all incoming traffic. interface TenGigabitEthernet 203/0/2 switchport switchport mode access switchport access vlan 11 qos cos 4 spanning-tree shutdown no fabric isl enable no fabric trunk enable no shutdown rbridge-id 201 interface Ve 101 no ip proxy-arp ip address /24 vrrp-extended-group 1 virtual-ip enable 82

 no preempt-mode
 short-path-forwarding
 no shutdown
rbridge-id 202
 interface Ve 101
 no ip proxy-arp
 ip address /24
 vrrp-extended-group 1
 virtual-ip
 enable
 no preempt-mode
 short-path-forwarding
 no shutdown
rbridge-id 203
 interface Ve 11
 no ip proxy-arp
 ip address /24
 vrrp-extended-group 1
 virtual-ip
 enable
 no preempt-mode
 short-path-forwarding
 no shutdown

Edge Services Configuration

The Spine RBridges in the "Single-POD VCS fabric with attached shared IP storage" design (Figure 20) connect to the Edge Routers over L3 ports. The edge ports on the Spine RBridges are configured with IP addresses and eBGP peering to the Edge Routers. The Edge Routers announce a default route for Internet/WAN access, and the Spines redistribute their connected routes to the Edge Routers.

Spine node configuration for edge services includes:
IP address on the L3 port toward the Edge Router.
Router-ID configuration on all Spine RBridges for routing stability.
BGP neighbor configuration toward the Edge Router, with the connected interfaces redistributed. The Edge Router advertises the default route.
A VE interface and OSPF on a dedicated VLAN for backup routing between the Spines, with the default BGP route re-originated into OSPF. This ensures that when the uplinks to an Edge Router fail, traffic is routed to the other Spine RBridge over this routing VLAN and forwarded to the Edge Router from there.

Edge Router configuration includes:
BGP neighborship toward the Spines.
Announcing the default route to the Spines to attract traffic leaving the data center.
The Edge Routers also peer with the Internet service provider and WAN and provide connectivity to firewalls and load balancers; that part of the configuration is not covered here because it is independent of the Extreme devices.

Spine Configuration

interface TenGigabitEthernet 201/0/33:1
 no fabric isl enable

84 Edge Services Configuration no fabric trunk enable ip address /24 ipv6 address 10:201:1::201/96 no shutdown rbridge-id 201 ip router-id router bgp local-as neighbor 10:201:1::1 remote-as 100 neighbor remote-as 100 address-family ipv4 unicast redistribute connected address-family ipv6 unicast redistribute connected neighbor 10:201:1::1 activate router ospf default-information-originate redistribute connected area 0 ipv6 router ospf area 0 default-information-originate redistribute connected interface Ve 1001 ipv6 address 10:3:233::201/96 ipv6 ospf area 0 ip ospf area 0 ip address /24 no shutdown Edge Router Configuration interface ethernet 1/1 enable ip address /24 ipv6 address 10:201:1::1/96 ip route /0 null0 ipv6 route ::/0 null0 router bgp local-as 100 neighbor remote-as neighbor remote-as 200 neighbor remove-private-as neighbor 50:1:1::2 remote-as 200 neighbor 50:1:1::2 remove-private-as address-family ipv4 unicast maximum-paths 8 maximum-paths ibgp 32 maximum-paths ebgp 32 next-hop-recursion neighbor default-originate exit-address-family address-family ipv6 unicast default-information-originate maximum-paths 8 84

85 Edge Services Configuration maximum-paths ibgp 32 maximum-paths ebgp 32 next-hop-recursion neighbor 10:201:1::201 activate neighbor 50:1:1::2 activate Verification of Config 85


Deployment 1: Single-POD VCS Fabric With Attached Shared IP Storage

The single-POD VCS data center site shown below is built from a single-POD VCS fabric that carries both LAN and storage traffic, with the storage devices attached directly to the VCS fabric at the Spines. Edge services include Internet, WAN, and DCI connectivity provided by the Edge Routers, plus services such as VPN, firewall, and load balancers.

FIGURE 27 Single-POD DC With Attached Storage

Key points for the data center build-out in this design:
The VCS fabric is essentially a two-tier leaf-spine architecture with a pair of Leaf (top-of-rack) RBridges servicing each rack. The shared VCS fabric carries both storage and LAN traffic.
Each pair of Leafs forms vLAGs to the servers. Leaf RBridges have only VLANs enabled; no routing functions are enabled on them.
Spines interconnect the Leafs and act as the L2-L3 gateway to route traffic between subnets or out of the data center through the Edge Routers. The Spines run VRRP-E to provide the L2-L3 gateway function.
Spines have BGP peering to the Edge Routers and announce the connected subnets; the Edge Routers announce the default route to the Spines through the BGP peering.
The Spine nodes run an OSPF session between themselves; this ensures that traffic converges when uplinks to an Edge Router fail. The default route is re-announced by the Spines through OSPF.

Storage arrays are connected over vLAGs and can serve hosts using both routed (L3) and Layer 2 connectivity. The vLAGs are configured as switchports, and VE interfaces were enabled for the storage VLAN with VRRP-E on all Spine nodes. This avoids having to announce the storage subnets through a routing protocol. The server NICs were connected to the Leaf RBridges either carrying storage and LAN traffic on a single link or using separate dedicated NICs for storage and LAN. Storage traffic had bandwidth reservations for iSCSI and NAS in the VCS fabric, and DCBX was used to negotiate the iSCSI CoS, PFC, and ETS with devices that support it. Auto-NAS ensures that NAS traffic is marked to CoS-2 and has a bandwidth reservation in the fabric.

Included below are the configurations from a Leaf and a Spine used in the 3-stage Clos validation. All of this configuration was applied from the principal RBridge with Logical Chassis mode enabled in the VCS fabric; the separation into Leaf and Spine configuration is only for readability.

Fabric Wide Configuration

vcs vcsid 100 rbridge-id 101
vcs vcsid 100 rbridge-id 201

Leaf Configuration

Auto-Enabled ISL Configuration on Leaf

Manual Configuration on Leaf
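A representative sketch of the leaf-side manual configuration described in this deployment, assembled from the edge-port and vLAG forms used elsewhere in this guide; VLAN 101, Port-channel 1, and the member interface are placeholders rather than the exact values used in the validation. ISL and trunk formation between leaf and spine is auto-enabled and needs no explicit configuration.

! Tenant/storage VLAN (placeholder)
interface Vlan 101
!
! Server-facing vLAG on the leaf pair
interface Port-channel 1
 vlag ignore-split
 cee default
 switchport
 switchport mode trunk
 switchport trunk allowed vlan all
 switchport trunk tag native-vlan
 spanning-tree shutdown
 no shutdown
!
interface TenGigabitEthernet 101/0/1
 channel-group 1 mode active type standard
 no shutdown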


Spine Configuration
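A representative sketch of the spine-side elements described in this deployment: a VE interface with VRRP-E for the first-hop gateway and eBGP toward the edge router. The router ID, subnet, virtual IP, and local AS number are placeholders; the full VRRP-E and BGP forms appear in the First Hop Redundancy and Edge Services Configuration sections.

rbridge-id 201
 ip router-id 10.0.0.201
 interface Ve 101
  ip address 10.0.101.201/24
  vrrp-extended-group 1
   virtual-ip 10.0.101.1
   advertisement-interval 1
   short-path-forwarding
   enable
  no shutdown
 router bgp
  local-as 65000
  neighbor 10.201.1.1 remote-as 100
  address-family ipv4 unicast
   redistribute connected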




95 Deployment 2: Single-POD VCS Fabric with Dedicated IP Storage VCS Deployment 2: Single-POD VCS Fabric with Dedicated IP Storage VCS Single-POD VCS datacenter site shown below involves buildout of a Server VCS fabric to carry LAN traffic and a dedicated Storage VCS fabric to which the Initiators and Storage Targets physically connect. Edge Services connect to the Spine RBridges in Server VCS fabric and provide internet, WAN and DCI connectivity and services like VPN, Firewall and load balancers. 95

FIGURE 28 Datacenter With Dedicated Storage VCS

The advantages of, and need for, a dedicated Storage VCS are covered in the Technology Overview section. Building out the data center in this design involves:
Separate dedicated VCS fabrics for storage and server traffic: server LAN traffic is handled by the Server VCS fabric, while iSCSI and NAS traffic are handled by the Storage VCS fabric.
Servers are connected by vLAGs to the Server VCS fabric. Servers can connect to the Storage VCS fabric over multiple storage uplinks, using vLAG or non-vLAG redundancy options for the IP storage traffic.
In this design, bandwidth reservation for IP storage traffic is less critical, but enabling a minimum bandwidth reservation for NAS and iSCSI still provides service guarantees during periods of congestion. If the storage device supports DCBX, enabling CoS marking and PFC for iSCSI traffic provides lossless behavior.
Edge Routers peer with BGP to the Spine tier of the Server VCS fabric and announce the default route to the VCS Spine nodes for Internet/WAN/DCI traffic.

Storage VCS

The Storage VCS can follow the spine-leaf Clos design shown in the figure above or can be a fully meshed VCS fabric; the choice depends on the scale of the Storage VCS fabric. If the storage fabric only needs a couple of nodes for storage connectivity and there is no routing requirement, a flat Layer 2 network without VE interfaces and routing is sufficient.

In the validated design, a flat Layer 2 network of two VDX switches was used, with routing enabled on all RBridges. Because it is a dedicated VCS for storage traffic, congestion should not be a major concern, but DCB is still enabled to guarantee bandwidth under unexpected traffic conditions in the fabric. The same configuration model can be followed for the Clos design as well, with routing restricted to VE interfaces at the Spine.



Server VCS

The Server VCS follows a 3-stage Clos design with four Spine RBridges and multiple Leaf RBridges in vLAG pairs, as shown in the diagram above. The configuration for one Leaf and one Spine RBridge is shown below; the RBridge ID of the Leaf is 101 and of the Spine is 201.

Fabric Wide Configuration

vcs vcsid 100 rbridge-id 101
vcs vcsid 100 rbridge-id 201


Configuration Pertaining to Leaf

Configuration Pertaining to Spine

Deployment 3: Multi-VCS Fabric with Shared IP Storage VCS

The multi-VCS data center site shown below is built from a multi-VCS fabric that carries both LAN and IP storage traffic. The multi-VCS fabric is formed by connecting multiple VCS fabrics through vLAGs to a Super-Spine VCS. Storage devices (targets) attach to a Storage VCS, and the Storage VCS forms a vLAG to the Super-Spine. Edge services connect to the Super-Spine RBridges, providing Internet, WAN, and DCI connectivity and services such as VPN, firewall, and load balancers.

FIGURE 29 Multi-VCS Fabric With Shared Storage VCS

The multi-VCS design is a 5-stage Clos fabric with the individual spine-leaf PODs connected by vLAGs to the Super-Spine layer. Building out the data center in this design involves:
Each DC POD consists of a spine-leaf VCS fabric. Servers attach to the Leaf RBridges through vLAGs, and the Spines connect to the Super-Spine layer through vLAGs. Each DC POD is a converged fabric carrying both IP storage and server LAN traffic.
A DC POD performs only Layer 2 and TRILL forwarding; no routing functions are enabled on the PODs. Similarly, the Storage VCS fabric performs only Layer 2 TRILL forwarding. Routing in this design happens at the Super-Spine layer, so all VE interfaces and routing protocol functions are enabled on the Super-Spine.
Storage initiators are on servers connected to the DC POD VCS fabrics, and storage targets connect to the Storage VCS fabric. The Storage VCS fabric attaches to the Super-Spine layer through vLAGs.
In this design, the IP storage techniques must be enabled on the individual POD VCS fabrics, the Super-Spine VCS, and the Storage VCS fabric.
Servers connect to the DC POD VCS fabric; for multipathing, either vLAG or non-vLAG redundancy options can be used for the IP storage traffic. Storage devices/targets attach to the Storage VCS fabric through vLAGs or individual ports.
Routing for IP storage traffic happens on the Super-Spine.
CEE-map based bandwidth reservation is configured on all VCS fabrics for iSCSI and NAS traffic. Auto-NAS is enabled on all VCS fabrics for NAS traffic marking and bandwidth reservation.

For iSCSI, if the device supports DCBX, DCBX is used for CoS marking and PFC/ETS. Otherwise the traffic is marked to CoS-4 at the edge port and the bandwidth reservation is still ensured.
Edge Routers connect to the Super-Spine RBridges and peer with BGP to the VCS fabric. The Edge Routers announce the default route to the Super-Spine nodes for Internet/WAN/DCI traffic. Between the Super-Spine RBridges, backup routing runs on a reserved VLAN using OSPF.

Storage VCS

Storage targets are connected to the Storage VCS, which in turn connects to the data center site through a vLAG to the Super-Spine tier. The Storage VCS is enabled with DCB functions inside the VCS and across the vLAG connecting to the Super-Spine. The Storage VCS performs only Layer 2 functions for the storage VLANs; routing for storage traffic happens at the Super-Spine tier. Storage targets can attach through vLAGs or individual switchport links; some form of link aggregation is recommended for redundancy. Storage vendors provide different link-aggregation options; LACP was used in this validated design.




Multi-VCS Converged Fabric

The multi-VCS converged fabric follows a 5-stage Clos design with Super-Spine, Spine, and Leaf tiers, as shown in Figure 29. Each spine-leaf POD is connected through vLAGs to the Super-Spine tier. The configuration for one Leaf and one Spine RBridge in a POD, and for one Super-Spine RBridge, is shown below; the RBridge ID of the Leaf is 101, of the Spine is 201, and of the Super-Spine is 221.

POD Configuration

Fabric Wide Configuration on POD

vcs vcsid 100 rbridge-id 101
vcs vcsid 100 rbridge-id 201

POD Leaf Configuration



POD Spine Configuration

A Spine in the multi-VCS fabric has only VLANs configured and performs Layer 2 forwarding; it connects to the Super-Spine tier through a vLAG. A representative sketch of the Spine-to-Super-Spine vLAG follows.
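In this sketch, Port-channel 20 and the 40G member interface are placeholders for the uplink vLAG toward the Super-Spine; the trunk, cee, and vlag keywords follow the port-channel configurations used elsewhere in this design.

interface Port-channel 20
 vlag ignore-split
 cee default
 switchport
 switchport mode trunk
 switchport trunk allowed vlan all
 switchport trunk tag native-vlan
 spanning-tree shutdown
 no shutdown
!
interface FortyGigabitEthernet 201/0/49
 channel-group 20 mode active type standard
 no shutdown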

Super-Spine Configuration

The Super-Spine tier connects to the Storage VCS and the DC POD VCS fabrics through vLAGs. The Super-Spine is where routing happens in this data center, and it attaches to the edge services through L3 links.

vcs vcsid 220 rbridge-id 221





Illustration Examples

Example 1: FVG in a 3-Stage Clos Fabric
Example 2: Virtual Fabric Across Disjoint VLANs on Two ToRs in a 3-Stage Clos
Example 3: Virtual Fabric per Interface VLAN-Scope on ToRs in a 3-Stage Clos
Example 4: VM-Aware Network Automation
Example 5: AMPP
Example 6: Virtual Fabric Extension
Example 7: Auto-Fabric

This section illustrates the use cases using sections of the validated design network topology as appropriate, to help the reader further understand the deployment scenarios.

Example 1: FVG in a 3-Stage Clos Fabric

FVG is a more scalable first-hop redundancy protocol than VRRP-E, so when more than 1K FHRP gateways are required, FVG is recommended. FVG works only within a single VCS fabric and cannot be used in a multi-VCS topology built with vLAGs. The topology is shown below. The VCS fabric, VLAN, and vLAG configuration remain the same for FVG as for VRRP-E; only the VE configuration differs. VRRP-E is configured on the VE interface under the RBridge configuration mode, whereas FVG is configured on the VE interface under the global configuration mode, with the FVG member RBridges added to that configuration. The FVG members in a Clos fabric are the Spines. There is no VE interface configuration under the Spine RBridges, and hence the VE interfaces do not carry individual IP addresses.

FIGURE 30 FVG Topology

Configuration

A configuration example is shown for an IPv4 and IPv6 FVG gateway on VLAN 401. The VLAN 401, vLAG, and underlying VCS fabric configuration are already present. The router-level FVG configuration enables the feature and sets defaults that apply to interfaces, such as accept-unicast-arp-request for ARP refresh scenarios and periodic gratuitous ARP origination to refresh the gateway MAC on the switches below the VCS fabric. The interface configuration for VE 401 is sketched below. No IP address configuration is required under the VE for FVG, and short-path forwarding is enabled by default.
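The following is a rough outline of the router-level and VE-level FVG configuration described above, using an assumed IPv4 gateway address (10.0.145.1/24), the IPv6 gateway address 10:0:191::1/96 seen in the verification output, and an assumed member range of RBridges 201-203. The exact command keywords and modes vary by NOS release and should be verified against the NOS command reference; treat this as an outline of the elements involved rather than a verified configuration.

router fabric-virtual-gateway
 address-family ipv4
  enable
  accept-unicast-arp-request
 address-family ipv6
  enable
!
interface Ve 401
 attach rbridge-id add 201-203
 ip fabric-virtual-gateway
  gateway-address 10.0.145.1/24
  enable
 ipv6 fabric-virtual-gateway
  gateway-address 10:0:191::1/96
  enable
 no shutdown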

Tracking is supported in FVG, but instead of tracking, backup routing between the Spines is recommended.

Verification

Verifying IPv4 FVG

The FVG state is checked from the gateway RBridge 202 and the non-gateway RBridge 102, as outlined below.
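Assuming the same naming as the IPv6 example that follows, the IPv4 checks take this form (the command mirrors the "show ipv6 fabric-virtual-gateway" output shown in the next section):

v# show ip fabric-virtual-gateway
v# show ip fabric-virtual-gateway interface ve 401 detail
v# show mac-address-table vlan 401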

123 Example 1: FVG in a 3-Stage Clos Fabric Verifying IPv6 FVG IPv6 FVG output from RBridge 202 for VE 401. v # show ipv6 fabric-virtual-gateway interface ve 401 det ============Rbridge-id:202============ Interface: Ve 401; Ifindex: Admin Status: Enabled Description : Address family: IPV6 State: Active ARP responder Rbridge-id:

124 Example 2: Virtual Fabric Across Disjoint VLANs on Two ToRs in a 3-Stage Clos Gateway IP: 10:0:191::1/96 Gateway MAC Address: 02e fe Load balancing configuration: Enabled Load balancing current status: Enabled Load balancing threshold priority: unset Gratuitous ARP Timer: 240 sec Hold time: 0 sec (default: 0 sec) Total no. of state changes: 1 ND Advertisements Sent: 521 Last state change: 14d.6h.57m.34s ago Track Priority: 0 v # Verify MAC Address Table Load-balancing behavior of FVG can be verified from the mac-table. The MAC address-table on FVG gateways 201, 202, and 203 will show the FVG gw-mac will be shown as local system mac. While on a non-gw RBridge (102) it will show as a remote virtual-router (VR) mac. Internally under VCS forwarding perspective the VR will point to the three FVG gateway RBridge's, thus achieving load-balancing of traffic. Example 2: Virtual Fabric Across Disjoint VLANs on Two ToRs in a 3-Stage Clos The virtual-fabric feature is explained in detail in "Technology Overview" section. Virtual-fabric allows customers to connect disjoint VLANs on the same network or reuse VLANs on per-interface VLAN-scope to have more than 4K VLANs in the fabric. 124

In this example, we illustrate providing L2 adjacency between two different VLANs on different ToRs, that is, how to achieve VLAN translation on a VCS fabric using virtual fabric. A sectional view of the fabric is shown below, with a single spine and two ToR pairs of leafs. VLAN 306 on Rack1 and VLAN 311 on Rack2 are mapped to VF-VLAN 5311. The VE IP address and VRRP-E configuration are done on VF-VLAN 5311. Thus, two disjoint VLANs can be brought together on the same subnet.

FIGURE 31 VF Across Disjoint VLANs

Configuration

VLAN and Port-Channel Configuration

Enable the virtual-fabric feature and configure the VLAN and port-channels; a sketch follows below. Under Po3 on RBridge 101 and 102, VLAN 306 is mapped to VF-VLAN 5311. Under Po7 on RBridge 103 and 104, VLAN 311 is mapped to VF-VLAN 5311. After classification, ctags 306 and 311 belong to a single broadcast domain, virtual-fabric VLAN 5311.
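This sketch reconstructs the classification configuration from the verification output that follows ("Po 3(t) ctag 306" and "Po 7(t) ctag 311" under VLAN 5311), using the same classification syntax shown in Example 3. The feature-enable command is given here as "vcs virtual-fabric enable" and should be checked against the NOS release in use.

vcs virtual-fabric enable
!
interface Vlan 5311
!
interface Port-channel 3
 vlag ignore-split
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 5311 ctag 306
 switchport trunk tag native-vlan
 spanning-tree shutdown
 no shutdown
!
interface Port-channel 7
 vlag ignore-split
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 5311 ctag 311
 switchport trunk tag native-vlan
 spanning-tree shutdown
 no shutdown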


127 Example 2: Virtual Fabric Across Disjoint VLANs on Two ToRs in a 3-Stage Clos VE Interface Configuration VE interface configuration is done on the RBridge mode on the spines. VE interface is configured for VLAN 5311 and not 306 or 311. v # show run rb int ve 5311 rbridge-id 201 interface Ve 5311 ipv6 address 10:0:137::201/96 ipv6 vrrp-extended-group 2 virtual-ip 10:0:137::1 enable short-path-forwarding no ip proxy-arp ip address /24 vrrp-extended-group 1 virtual-ip enable short-path-forwarding no shutdown v # Verification Verify VLAN and MAC Table The VLAN table will show that customer-tags 306 and 311 are mapped to VF-VLAN v # show vlan 5311 VLAN Name (F)-FCoE (R)-RSPAN State Ports (u)-untagged (c)-converged Classification (T)-TRANSPARENT (t)-tagged ================ =============== ========================== =============== 5311 VLAN5311 ACTIVE Po 3(t) ctag 306 Po 7(t) ctag 311 v # Verify ARP and MAC Table ARPs for the hosts are resolved under virtual-fabric VLAN The MAC table is populated for VLAN 5311 on RBridge 101,102, 103, and

128 Example 3: Virtual Fabric per Interface VLAN-Scope on ToRs in a 3-Stage Clos Example 3: Virtual Fabric per Interface VLAN-Scope on ToRs in a 3-Stage Clos In a highly virtualized data center, the same 802.1Q VLANs must be reused across multiple interfaces on the same rack. So from a ToR perspective, the 802.1Q VLAN must have a VLAN-scope per interface. In this example, we illustrate per-interface VLAN-scope on a ToR using virtual-fabric in VCS. A sectional view of the fabric is shown below with a single spine and two ToR pair of leafs. VLAN 501 on Port-channel51 and Po52 on Rack1 is mapped to VF-VLAN 5501 and VF-VLAN 5502 respectively. Similarly VLAN 501 on Po151 and Po152 on Rack2 is mapped to VF-VLAN 5501 and VF-VLAN 5502 respectively. With this config, VLAN 501 on port-channel51 and Po151 will belong to the same broadcast domain or subnet. While VLAN 501 on Po52 and Po152 will be on the same subnet. This is possible with virtual-fabric, which provides a per-interface VLAN-scope. Thus providing a higher multitenancy option in the data center by reusing the 802.1Q VLAN range (2 4096) across multiple customers. 128

129 Example 3: Virtual Fabric per Interface VLAN-Scope on ToRs in a 3-Stage Clos FIGURE 32 VF Per-Interface VLAN-Scope Configuration VLAN and Port-Channel Configuration Enable virtual-fabric feature and configure vlan and port-channel. Under Po51 on RBridge 101 and 102 vlan 501 is mapped to VF-vlan Under Po52 on RBridge 101 and 102 vlan 501 is mapped to VF-vlan Under Po151 on RBridge 103 and 104 vlan 501 is mapped to VF-vlan Under Po152 on RBridge 103 and 104 vlan 501 is mapped to VF-vlan The per-interface vlan-scope provided by VF-fabric allows reusing of vlan 501 for different broadcast domains. interface vlan 5502 interface vlan 5502 interface Port-channel 51 vlag ignore-split switchport switchport mode trunk switchport trunk allowed vlan add 5501 ctag 501 switchport trunk tag native-vlan spanning-tree shutdown no shutdown interface TenGigabitEthernet 101/0/21 channel-group 51 mode active type standard 129

 no shutdown

interface Port-channel 52
 vlag ignore-split
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 5502 ctag 501
 switchport trunk tag native-vlan
 spanning-tree shutdown
 no shutdown

interface TenGigabitEthernet 101/0/22
 channel-group 52 mode active type standard
 no shutdown

VE Interface Configuration

VE interface configuration is done in RBridge mode on the Spines. The VE interfaces are configured for VF-VLANs 5501 and 5502, not for the reused customer tag 501.

rbridge-id 201
 interface Ve 5501
 ipv6 address 10:0:1F5::201/96
 ipv6 vrrp-extended-group 2
 virtual-ip 10:0:1F5::1
 enable
 no preempt-mode
 short-path-forwarding
 no ip proxy-arp
 ip address /24
 vrrp-extended-group 1
 virtual-ip
 enable
 no preempt-mode
 short-path-forwarding
 no shutdown

rbridge-id 201
 interface Ve 5502
 ipv6 address 10:0:1F6::202/96
 ipv6 vrrp-extended-group 2
 virtual-ip 10:0:1F6::1
 enable
 no preempt-mode
 short-path-forwarding
 no ip proxy-arp
 ip address /24
 vrrp-extended-group 1
 virtual-ip
 enable
 no preempt-mode
 short-path-forwarding
 no shutdown

Verification

Verify VLAN and MAC Table

The VLAN table shows that customer tag 501 is mapped to VF-VLAN 5501 or 5502, depending on the port-channel.

Verify ARP and MAC Table

ARPs for the hosts are resolved under the virtual-fabric VLANs, and the MAC table is populated for VLANs 5501 and 5502 on RBridges 101, 102, 103, and 104.

Example 4: VM-Aware Network Automation

VM-aware network automation, as discussed in the Technology Overview section, integrates AMPP with vCenter and allows network automation using port-profiles in a VMware environment. The configuration needed to achieve VM-aware network automation in a VCS fabric is:
Configure the vCenter information on the VCS fabric.
Enable LLDP on the ESXi standard vSwitch or distributed vSwitch.
The remaining steps, enabling the port-profiles, the associated MAC configuration, and their discovery, are automatic. The configuration and verification below walk through this procedure.

133 Example 4: VM-Aware Network Automation Configuration and Verification For VM-aware network the VCS fabric is configured with the vcenter information. Below config does that and this allows discovery process to start by initiating dialogue over the vcenter SOAP-api. vcenter myvcenter url username administrator password "xxxxxx" use-vrf mgmt-vrf vcenter myvcenter activate vcenter myvcenter discovery ignore-delete-all-response 10 After the vcenter discovery succeeds the RBridges on the VCS fabric needs to identify the ESXi host which are behind RBridge edgeports. For this the standard vswitch is enabled for LLDP meanwhile RBridge edge-ports are by default enabled for LLDP. Note: This config is done on esxi shell of ESXi host to enable lldp on vswitch ~]# ssh ~ # ~ # esxcfg-vswitch vswitch3 -b listen ~ # ~ # esxcfg-vswitch vswitch3 -B both ~ # ~ # esxcli network vswitch standard list -v vswitch3 vswitch3 Name: vswitch3 Class: etherswitch Num Ports: 5632 Used Ports: 3 Configured Ports: 128 MTU: 1500 CDP Status: both Beacon Enabled: false Beacon Interval: 1 Beacon Threshold: 3 Beacon Required By: Uplinks: vmnic5 Portgroups: ejacob ~ # v # show lldp nei int te 104/0/32 Local Intf Dead Interval Remaining Life Remote Intf Chassis ID Tx Rx System Name Te 104/0/ e.1e17.cb68 000e.1e17.cb v # Once LLDP discovery of the vswitch happens across a switch port the corresponding ports are automatically moved into port-profile mode. Also the RBridge will poll the VMware vcenter to get the port-profile's for the ESXi host and the associated mac for each profile. The associated mac would be the VM mac. At this point the MAC is not associated to an Edge port. Once the VM sends traffic, mac table on switch will learn this mac and the associated profile gets applied to the edge-port. Below shows the port-profile mode that gets automatically enabled when RBridge Edge port form LLDP neighborship with the vswitch. 133

134 Example 4: VM-Aware Network Automation Shows the port-profile is activated on the RBridge and the VM mac is associated. But port-profile is not yet applied on the interface. v # show mac-address-table port-profile Total MAC addresses : 0 v # v # show port-profile interface ten 104/0/32 Interface Port-Profile Te 104/0/32 None v # v # Once some traffic from the VM hits the edge-port 104/0/32, source-mac is learned and if the source-mac matches to the port-profile. The port-profile gets applied on the Edge-port as seen below. v # show mac-address-table port-profile Legend: Untagged(U), Tagged (T), Not Forwardable(NF) and Conflict(C) VlanId Mac-address Type State Port-Profile Ports a4.2fa7 Dynamic Active Profiled(T) Te 104/0/32 Total MAC addresses : 1 v # v # show port-profile interface ten 104/0/32 134

135 Example 4: VM-Aware Network Automation Interface Te 104/0/32 Port-Profile auto_myvcenter_datacenter-2_ejacob v # v # show port-profile status applied Port-Profile PPID Activated Associated MAC Interface auto_myvcenter_datacenter-2_ejacob Yes a4.2fa7 Te 104/0/32 v # Virtual Machine Move Virtual machine move is automatically detected and port-profile would get applied on the new edge port based on MAC move detection by AMPP. Shows the initial state before the VM move. The profile is applied on RBridge-104 port 104/0/32. v # show port-profile interface ten 104/0/32 Interface Port-Profile Te 104/0/32 auto_myvcenter_datacenter-2_ejacob v # v # show port-profile status applied Port-Profile PPID Activated Associated MAC Interface auto_myvcenter_datacenter-2_ejacob Yes a4.2fa7 Te 104/0/32 v # v # show mac-address-table port-profile Legend: Untagged(U), Tagged (T), Not Forwardable(NF) and Conflict(C) VlanId Mac-address Type State Port-Profile Ports a4.2fa7 Dynamic Active Profiled(T) Te 104/0/32 Total MAC addresses : 1 v # v (conf-if-te-102/0/32)# do show port-profile interface ten 102/0/32 Interface Port-Profile Te 102/0/32 None v (conf-if-te-102/0/32)# do show port-profile status inc auto_myvcenter_datacenter-2_ejacob /0/32 v (conf-if-te-102/0/32)# Yes a4.2fa7 Te VM move was done using VSphere and after that when the mac is learnt on RBridge 102 port 102/0/32, the profile is deleted from 104 and applied on /10/18-06:41:26, [NSM-2006], , SW/0 Active DCE, INFO, v , Port-profile auto_myvcenter_datacenter-2_ejacob removed successfully on TenGigabitEthernet 104/0/ /10/18-06:41:26, [NSM-2004], , SW/0 Active DCE, INFO, v , Port-profile auto_myvcenter_datacenter-2_ejacob application succeeded on TenGigabitEthernet 102/0/32. v (config)# do show port-profile int ten 104/0/32 Interface Port-Profile Te 104/0/32 None v (config)# v (config)# do show port-profile int ten 102/0/32 Interface Port-Profile Te 102/0/32 auto_myvcenter_datacenter-2_ejacob v (config)# v (config)# do show port-profile status applied Port-Profile PPID Activated Associated MAC auto_myvcenter_datacenter-2_ejacob /0/32 Yes a4.2fa7 v (config)# v (config)# do show mac-address-table port-profile Legend: Untagged(U), Tagged (T), Not Forwardable(NF) and Conflict(C) Interface Te 135

136 Example 5: AMPP VlanId Mac-address Type State Port-Profile Ports a4.2fa7 Dynamic Active Profiled(T) Te 102/0/32 Total MAC addresses : 1 v (config)# Example 5: AMPP AMPP or Automatic migration of port profile feature is explained in the Technology Overview. Here we will look at how AMPP can be configured and verified on a vlag and what are the various options. While here the focus is on AMPP on vlag, for individual Ethernet port the configuration and verification is same as vlag. AMPP configuration and operation involves the following in brief. Creating a port-profile and defining the vlan, qos and security configuration. Activate the port-profile. Associate the VM-mac's associated with the port-profile. Enable port-profile mode on the virtual-machine connected ports. After configuring profiles when a VM-mac is detected at port-profile enabled port the corresponding port-profile gets downloaded on to that port. MAC detection is basically the source-mac learning on the port. When a VM move or mac move happens the port-profile migrate. AMPP could be enabled for regular vlans as well as for virtual-fabric vlans, both these configuration will be covered. Configuration and Verification for VLAN The following configuration shows creation of a port-profile for vlan, activating it, associates mac's with port-profile and enabling portprofile mode on vlag. Configuration are done from the principal RBridge and port-profile is a VCS fabric wide configuration. interface vlan port-profile vm11-1 vlan-profile switchport switchport mode trunk switchport trunk allowed vlan add v (config)# port-profile vm11-1 activate v (config)# port-profile vm11-1 static port-profile vm11-1 static port-profile vm11-1 static port-profile vm11-1 static port-profile vm11-1 static v # show run int po 4 interface Port-channel 4 vlag ignore-split port-profile-port no shutdown 136

137 Example 5: AMPP v # The profile is activated after the above configuration but hasn't yet associated with a port. The association of the port-profile to a port happens only after the mac associated with the profile gets learned on the port. v # show port-profile status Port-Profile PPID Activated Associated MAC vm Yes v # show port-profile interface po 4 Interface Port-Profile Po 4 None v # Interface None None None None None Once mac associated with a port-profile gets learned on any of the VLANs, corresponding port-profile gets activated on the port. v # show mac-address-table vlan 331 VlanId Mac-address Type State Ports f8f1.3d03 System Remote XX 203/X/X f8f System Remote XX 202/X/X e0.52e System Remote XX VR1/X/X eb.1aaa.68d7 System Remote XX 201/X/X Total MAC addresses : 4 v # show port-profile interface po 4 Interface Port-Profile Po 4 None v # show mac-address-table vlan 331 VlanId Mac-address Type State Ports f8f1.3d03 System Remote XX 203/X/X f8f System Remote XX 202/X/X e0.52e System Remote XX VR1/X/X eb.1aaa.68d7 System Remote XX 201/X/X Total MAC addresses : 4 v # 2016/10/13-22:43:18, [NSM-2004], , SW/0 Active DCE, INFO, v , Port-profile vm11-1 application succeeded on Portchannel 4. v # show mac-address-table vlan 331 VlanId Mac-address Type State Ports Dynamic Active Po f8f1.3d03 System Remote XX 203/X/X f8f System Remote XX 202/X/X e0.52e System Remote XX VR1/X/X eb.1aaa.68d7 System Remote XX 201/X/X Total MAC addresses : 5 v # show port-profile interface po 4 Interface Port-Profile Po 4 vm11-1 v # v # show mac-address-table port-profile Legend: Untagged(U), Tagged (T), Not Forwardable(NF) and Conflict(C) VlanId Mac-address Type State Port-Profile Ports Dynamic Active Profiled(T) Po 4 Total MAC addresses : 1 v # The port-profile is activated as long as the MAC is present on the mac-table, once the mac expires port-profile association from the port is removed. Below show o/p will show this behavior. v # 2016/10/13-22:53:36, [NSM-2006], , SW/0 Active DCE, INFO, v , Port-profile vm11-1 removed successfully on Portchannel 4. v # v # v # show mac-address-table port-profile Total MAC addresses : 0 v # v # show mac-address-table vlan 331 VlanId Mac-address Type State f8f1.3d03 System Remote f8f System Remote e0.52e System Remote eb.1aaa.68d7 System Remote Ports XX 203/X/X XX 202/X/X XX VR1/X/X XX 201/X/X 137

138 Example 5: AMPP Total MAC addresses : 4 v # v # show port-profile interface port-channel 4 Interface Port-Profile Po 4 None v # 2016/10/13-23:05 Also an interface could be associated with multiple port-profiles as long as there is no conflict in the port-profile properties. 2016/10/13-23:05:55, [NSM-2004], , SW/0 Active DCE, INFO, v , Port-profile vm11-1 application succeeded on Port-channel /10/13-23:09:06, [NSM-2004], , SW/0 Active DCE, INFO, v , Port-profile vm11-2 application succeeded on Portchannel 4. v # show mac-address-table port-profile Legend: Untagged(U), Tagged (T), Not Forwardable(NF) and Conflict(C) VlanId Mac-address Type State Port-Profile Ports Dynamic Active Profiled(T) Po Dynamic Active Profiled(T) Po Dynamic Active Profiled(T) Po 4 Total MAC addresses : 3 v # show port-profile interface po 4 Interface Port-Profile Po 4 vm11-1 vm11-2 v # Virtual Machine Moves VM move will be detected by learning of new mac at a new node on a different vlag. AMPP will take care of removing the applied portprofile in the case of VM move based on MAC move detection. In the example below Po4 learns mac on RBridge-101 (hostname: v ) initially and when VM move's to RBridge-103 (v ). AMPP takes care of removing the port-profile on Po4 and applying the port-profile on Po8. v # 2016/10/14-18:13:35, [NSM-2004], , SW/0 Active DCE, INFO, v , Port-profile vm11-1 application succeeded on Portchannel 4. v # show mac-address-table port-profile Legend: Untagged(U), Tagged (T), Not Forwardable(NF) and Conflict(C) VlanId Mac-address Type State Port-Profile Ports Dynamic Active Profiled(T) Po 4 Total MAC addresses : 1 v # v # show port-profile interface po 4 Interface Port-Profile Po 4 vm11-1 v # show port-profile interface po 8 Interface Port-Profile Po 8 None v # v # show port-profile status applied Port-Profile PPID Activated Associated MAC Interface vm Yes Po 4 v # v # 2016/10/14-18:15:46, [NSM-2006], , SW/0 Active DCE, INFO, v , Port-profile vm11-1 removed successfully on Portchannel 4. v # v # show port-profile interface po 4 Interface Port-Profile Po 4 None v # v # v # 2016/10/14-18:15:46, [NSM-2004], , SW/0 Active DCE, INFO, v , Port-profile vm11-1 application succeeded on Portchannel 8. v # show mac-address-table port-profile Legend: Untagged(U), Tagged (T), Not Forwardable(NF) and Conflict(C) VlanId Mac-address Type State Port-Profile Ports Dynamic Active Profiled(T) Po 8 Total MAC addresses : 1 v # v # show port-profile interface po 4 138

v # show port-profile interface po 4
Interface   Port-Profile
Po 4        None
v #

v # show port-profile interface po 8
Interface   Port-Profile
Po 8        vm11-1
v #
v # show port-profile status applied
Port-Profile        PPID   Activated   Associated MAC   Interface
vm11-1              18     Yes                          Po 8
v #

Configuration and Verification for Virtual Fabric VLAN

The following configuration creates a port-profile for a Virtual Fabric VLAN. The ctag 341 is mapped to VF VLAN 5341.

interface vlan 5341
port-profile ampp-svf-1
 vlan-profile
  switchport
  switchport mode trunk
  switchport trunk allowed vlan add 5341 ctag 341
port-profile ampp-svf-1 activate
port-profile ampp-svf-1 static b
port-profile ampp-svf-1 static c
port-profile ampp-svf-1 static b
port-profile ampp-svf-1 static c

v # show run int po 4
interface Port-channel 4
 vlag ignore-split
 port-profile-port
 no shutdown
v #

As with AMPP for an 802.1Q VLAN, the profile is activated for the VF VLAN after the above configuration, and once a MAC address associated with the port-profile is learned, the profile is applied on the port. In this case the MAC is learned on VF VLAN 5341 with ctag 341.

2016/10/14-21:27:05, [NSM-2004], , SW/0 Active DCE, INFO, v , Port-profile ampp-svf-1 application succeeded on Port-channel 4.

v # show mac-address-table vlan 5341
VlanId   Mac-address    Type      State    Ports
5341     b              Dynamic   Active   Po 4
5341     f8f1.3d03      System    Remote   XX 203/X/X
5341     f8f            System    Remote   XX 202/X/X
5341     e              System    Remote   XX VR1/X/X
5341     eb.1aaa.68d7   System    Remote   XX 201/X/X
Total MAC addresses : 5
v #
v # show port-profile status applied
Port-Profile        PPID   Activated   Associated MAC   Interface
ampp-svf-1          24     Yes         b                Po 4

v # show mac-address-table port-profile
Legend: Untagged(U), Tagged (T), Not Forwardable(NF) and Conflict(C)
VlanId   Mac-address    Type      State    Port-Profile   Ports
5341     b              Dynamic   Active   Profiled(T)    Po 4
Total MAC addresses : 1

v #
v # show port-profile interface po 4
Interface   Port-Profile
Po 4        ampp-svf-1
v #
v # show vlan 341
% Error: Interface vlan 341 doesn't exist
v # show vlan 5341
VLAN     Name        State    Ports                Classification
(F)-FCoE (R)-RSPAN (u)-untagged (c)-converged (T)-TRANSPARENT (t)-tagged
================ ======== ====== ==================== =====================
5341     VLAN5341    ACTIVE   Po 4(c)              ctag 341
                              Po 8(c)              ctag 341
                              Te 104/0/32(c)       ctag 341
v #

Example 6: Virtual Fabric Extension

Virtual Fabric (VF) Extension extends Layer 2 domains between VCS fabrics over Layer 3 networks. With VF Extension, Layer 2 connectivity can be provided between multiple VCS fabrics within a single data center or across data centers. As covered in the technology overview section, VF Extension uses Virtual Extensible LAN (VXLAN) as an overlay to tunnel traffic between separate sites interconnected by Layer 3 networks. This example shows how to configure VF Extension to connect two data center sites; the topology used is shown below.

FIGURE 33 Virtual Fabric Extension

Configuration

The configuration is done on the spine nodes and involves the following:

1. Define a source loopback IP address for the VXLAN VTEP (VXLAN Tunnel Endpoint) of the VCS fabric. Each VCS fabric uses one unique VTEP IP, configured identically on all RBridges participating in the VF Extension.
2. Configure the overlay gateway, which includes the gateway type, the VLAN-to-VNI mapping, the VLANs to be extended, and the remote data center site's VTEP IP address.

For the VXLAN tunnel to come up, the VTEP IPs must be mutually reachable. To achieve this, the loopback IP used as the VTEP is advertised through BGP to the edge router from each RBridge. The VTEP IP is unique per data center site, so both spine RBridges in a site are configured with the same Loopback 1 interface and IP address.

The configuration shown is for Datacenter-1; a similar configuration must be done on Datacenter-2 for the VXLAN tunnels to come up. After configuring the VTEP IP and verifying that both data centers have route reachability to it, enable the overlay-gateway configuration. This configuration is done in global configuration mode; a sketch of both steps is shown below.
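The Datacenter-1 configuration figures referenced above are not reproduced in this text, so the sketch below illustrates the two configuration steps as they would look on the Datacenter-1 spines. It mirrors the Datacenter-2 configuration listed at the end of this example; the local VTEP address 10.254.254.1/32, the remote VTEP address 10.254.254.2, the RBridge IDs 201-202, and the extended VLAN list are placeholders and must match the actual deployment.

rbridge-id 201
 interface Loopback 1
  no shutdown
  ip address 10.254.254.1/32
rbridge-id 202
 interface Loopback 1
  no shutdown
  ip address 10.254.254.1/32
!
overlay-gateway VxLAN-Ext-1
 type layer2-extension
 ip interface Loopback 1
 attach rbridge-id add 201-202
 map vlan vni auto
 site DC-2
  ip address 10.254.254.2
  extend vlan add 101,5341
 activate

Both spine RBridges in Datacenter-1 share the same Loopback 1 address, since the VTEP IP is unique per site rather than per switch.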

Because the Data Center Interconnect (DCI) in the validated design is directly connected, private IP addressing is used for the VTEP IPs. If the DCI connectivity is through the Internet, a globally routable address must be used for the VTEP IP.

Verification

The VTEP IP is advertised through BGP to the Internet router and on to the other data center; similarly, the route to the remote data center's VTEP IP is learned.

Internet-router#show ip route
Type Codes - B:BGP D:Connected I:ISIS O:OSPF R:RIP S:Static; Cost - Dist/Metric
BGP Codes - i:ibgp e:ebgp
ISIS Codes - L1:Level-1 L2:Level-2
OSPF Codes - i:inter Area 1:External Type 1 2:External Type 2 s:sham Link
STATIC Codes - d:dhcpv6
        Destination   Gateway   Port      Cost    Type   Uptime   src-vrf
        /                       eth 1/1   200/0   Bi     0m19s    -
        /                       eth 1/2   200/0   Bi     0m19s    -

Internet-router#show ip bgp
Number of BGP Routes matching display condition : 3
Status codes: s suppressed, d damped, h history, * valid, > best, i internal, x:best-external
Origin codes: i - IGP, e - EGP, ? - incomplete
     Network      Next Hop      MED    LocPrf    Weight    Path
*>i  /                                                     ?
*i   /                                                     ?
Last update to IP routing table: 0h0m39s, 3 path(s) installed:
  Route is to be sent to 1 peers: (200)
Internet-router#

A default route is advertised from the MLXe to the spine nodes for reachability to the remote VTEP IP.

v # show ip route
Total number of IP routes: 886
Type Codes - B:BGP D:Connected O:OSPF S:Static U:Unnumbered +:Leaked route; Cost - Dist/Metric
BGP Codes - i:iBGP e:eBGP
OSPF Codes - i:inter Area 1:External Type 1 2:External Type 2 s:sham Link
        Destination   Gateway   Port             Cost    Type   Uptime
        /                       Te 201/0/33:1    200/0   Bi     7h55m
v #

VXLAN tunnels do not have a control plane of their own; tunnel setup depends on route reachability to the destination VTEP IP, which in this case is the remote site's VTEP loopback address.
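The BGP configuration that advertises the VTEP loopback to the edge router is not shown in this example. A minimal sketch of what it could look like on one Datacenter-1 spine RBridge follows; the AS numbers (65001 local, 65000 on the MLXe), the neighbor address 172.16.1.1, and the 10.254.254.1/32 VTEP prefix are placeholders, and the sketch assumes a simple eBGP peering to the edge router over a directly connected interface.

rbridge-id 201
 router bgp
  local-as 65001
  neighbor 172.16.1.1 remote-as 65000
  address-family ipv4 unicast
   network 10.254.254.1/32

Equivalent configuration is needed on the second spine and, with the appropriate addresses, on the Datacenter-2 spines, so that each site learns a route to the other site's VTEP IP, as the verification above shows.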

v # show overlay-gateway
Overlay Gateway "VxLAN-Ext-1", ID 1, rbridge-ids
Type Layer 2-Extension, Tunnel mode VXLAN
IP address ( Loopback 1 ), Vrf default-vrf
Admin state up
Number of tunnels 1
Packet count: RX          TX
Byte count  : RX (NA)     TX
v #

The output below shows ARP resolution and MAC learning for hosts in the remote data center; one such host is learned over the VXLAN tunnel (Tu).

v # show arp ve 101
Address   Mac-address    Interface   MacResolved   Age        Type
          eb.1ade.4657   Ve 101      yes           01:04:32   Dynamic
          eb.1af1.2d7f   Ve 101      yes           00:49:13   Dynamic
          a4.2fa7        Ve 101      yes           01:01:04   Dynamic
          f8f            Ve 101      yes           00:59:41   Dynamic
          f8f1.3d03      Ve 101      yes           00:52:25   Dynamic
          eb.1a95.5a03   Ve 101      yes           00:01:12   Dynamic

v # show mac-address-table vlan 101
VlanId   Mac-address    Type      State    Ports
101      f8f1.3d03      System    Remote   XX 203/X/X
101      f8f            System    Remote   XX 202/X/X
101      a4.2fa7        Dynamic   Remote   Te 104/0/
101      e              System    Active   XX 201/X/X
101      e0.52a         System    Active   XX 201/X/X
101      eb.1a95.5a03   Dynamic   Active   Tu
101      eb.1ade.4657   System    Remote   XX 103/X/X
101      eb.1af1.2d7f   System    Remote   XX 101/X/X
Total MAC addresses : 8
v #

Below is the configuration from Datacenter-2 for reference.

DC2-Spine1# show run rb int lo 1
rbridge-id 222
 interface Loopback 1
  no shutdown
  ip address /32
rbridge-id 223
 interface Loopback 1
  no shutdown
  ip address /32

DC2-Spine1# show running-config overlay-gateway
overlay-gateway VxLAN-Ext-1
 type layer2-extension
 ip interface Loopback 1
 attach rbridge-id add
 map vlan vni auto
 site DC-1
  ip address
  extend vlan add ,5301
 activate

DC2-Spine1# show tunnel br
Number of tunnels: 1
Tunnel 61441, mode VXLAN, rbridge-ids
Admin state up, Oper state up
Source IP , Vrf default-vrf
Destination IP
DC2-Spine1#

DC2-Spine1# show overlay-gateway
Overlay Gateway "VxLAN-Ext-1", ID 1, rbridge-ids
Type Layer 2-Extension, Tunnel mode VXLAN
IP address ( Loopback 1 ), Vrf default-vrf
Admin state up
Number of tunnels 1
Packet count: RX          TX
Byte count  : RX (NA)     TX
DC2-Spine1#

Example 7: Auto-Fabric

The Auto-Fabric feature allows true plug-and-play addition of new switches to a VCS fabric. As mentioned in the VCS fabric bring-up section, Auto-Fabric is an alternative to manually entering the VCS ID and RBridge ID at the switch console when adding a new node to the fabric.

For Auto-Fabric, the principal RBridge is preprovisioned with the new switch's WWN-to-RBridge-ID mapping. When the new switch comes up, it discovers the VCS fabric over its connected links using Extreme FLDP, downloads the VCS information from the principal RBridge, reboots, and comes up in the VCS fabric.

Configuration

The new switch has the bare-metal flag enabled and initially comes up with the default configuration of VCS ID 1 and RBridge ID 1. Once the new switch discovers that a VCS fabric is attached and that the fabric has its WWN mapped to RBridge ID 105, it downloads the VCS fabric configuration from the principal RBridge and comes up as RBridge 105 in the VCS fabric.

v # show vcs
Config Mode      : Distributed
VCS Mode         : Logical Chassis
VCS ID           : 100
VCS GUID         : a853ccdf-5ff9-4a32-8b21-eecad
Total Number of Nodes : 7
Rbridge-Id   WWN                          Management IP   VCS Status   Fabric Status   HostName
             :00:50:EB:1A:F1:2D:7C                        Online       Online          v
             :00:50:EB:1A:DE:7D:                          Online       Online          v
             :00:50:EB:1A:DE:46:                          Online       Online          v
             :00:50:EB:1A:F5:81:                          Online       Online          v
             >10:00:50:EB:1A:AA:68:D4*                    Online       Online          v
             :00:00:27:F8:F4:95:                          Online       Online          v
             :00:00:27:F8:F1:3D:                          Online       Online          v
v #
v (config)#
v (config)# preprovision rbridge-id 105 wwn 10:00:50:EB:1A:95:23:84
v (config-preprovision-rbridge-id-105)#

Verification

The CLI output below shows that the new switch first comes up with the default VCS ID 1 and RBridge ID 1 and has the bare-metal flag enabled. On enabling the links connecting to VCS ID 100, the switch detects that it is already preprovisioned on VCS fabric 100 and reboots to come up in the VCS fabric with RBridge ID 105.

After the new switch (sw0) reboots and comes up in VCS fabric 100 as RBridge ID 105, it synchronizes all fabric and forwarding information. The show output below confirms that the switch is up in the VCS fabric with its configuration and hardware forwarding information synchronized.

v (config)# do show vcs
Config Mode      : Distributed
VCS Mode         : Logical Chassis
VCS ID           : 100
VCS GUID         : a853ccdf-5ff9-4a32-8b21-eecad
Total Number of Nodes : 8
Rbridge-Id   WWN                          Management IP   VCS Status   Fabric Status   HostName
             :00:50:EB:1A:F1:2D:7C                        Online       Online          v
             :00:50:EB:1A:DE:7D:                          Online       Online          v
             :00:50:EB:1A:DE:46:                          Online       Online          v
             :00:50:EB:1A:F5:81:                          Online       Online          v
105          :00:50:EB:1A:95:23:                          Online       Online          sw0
201          >10:00:50:EB:1A:AA:68:D4*                    Online       Online          v
             :00:00:27:F8:F4:95:                          Online       Online          v
             :00:00:27:F8:F1:3D:                          Online       Online          v
v (config)#

sw0# show vcs
Config Mode      : Distributed
VCS Mode         : Logical Chassis
VCS ID           : 100
VCS GUID         : a853ccdf-5ff9-4a32-8b21-eecad
Total Number of Nodes : 8
Rbridge-Id   WWN                          Management IP   VCS Status   Fabric Status   HostName
             :00:50:EB:1A:F1:2D:7C                        Online       Online          v
             :00:50:EB:1A:DE:7D:                          Online       Online          v
