Scale-Out Architectures with Brocade DCX 8510 UltraScale Inter-Chassis Links


The Brocade DCX 8510 Backbone with Gen 5 Fibre Channel offers unique optical UltraScale Inter-Chassis Link (ICL) connectivity, enabling massive fabric scalability while simplifying network topologies. This paper provides guidelines for the proper configuration and implementation of Brocade QSFP-based optical UltraScale ICLs.

CONTENTS

UltraScale ICL Overview
UltraScale ICL Licensing
Supported Topologies
    Core/Edge Topology
    Mesh Topology
QSFP-Based UltraScale ICL Connection Requirements
UltraScale ICL Trunking and Trunk Groups
Core Blade (Brocade CR16-8) Port Numbering Layout
Core Blade (Brocade CR16-4) Port Numbering Layout
UltraScale ICL Diagnostics
UltraScale ICL Routing
Summary

ULTRASCALE ICL OVERVIEW

Brocade UltraScale Inter-Chassis Links (ICLs) are high-performance ports for interconnecting multiple Brocade DCX Backbones, enabling industry-leading scalability while preserving ports for server and storage connections. Brocade optical UltraScale ICLs, based on Quad Small Form-factor Pluggable (QSFP) technology, connect the core routing blades of two Brocade DCX 8510 Backbone chassis. Each QSFP-based UltraScale ICL port combines four 16-Gbps links, providing up to 64 Gbps of throughput within a single cable. Available with Brocade Fabric OS (FOS) version 7.0 and later, the feature provides up to 32 QSFP UltraScale ICL ports on the Brocade DCX 8510-8 and up to 16 QSFP UltraScale ICL ports on the Brocade DCX 8510-4.

The optical form factor of the Brocade QSFP-based UltraScale ICL technology offers several advantages over the copper-based ICL design in the original Brocade DCX platforms. First, Brocade has increased the supported ICL cable distance from 2 meters to 50 meters (or 100 meters with Brocade FOS v7.1, select QSFPs, and OM4 fiber), providing greater architectural design flexibility. Second, combining four links in a single QSFP provides incredible flexibility for deploying a variety of topologies, including a massive nine-chassis full-mesh design with only a single hop between any two points in the fabric.

In addition to these significant advances in ICL technology, the Brocade DCX 8510 UltraScale ICL capability still provides a dramatic reduction in the number of Inter-Switch Link (ISL) cables required: a four-to-one reduction compared to traditional ISLs with the same amount of interconnect bandwidth. And because the QSFP-based UltraScale ICL connections reside on the core routing blades instead of consuming traditional ports on the port blades, up to 33 percent more FC ports are available for server and storage connectivity.

ULTRASCALE ICL LICENSING

An ICL POD (Ports on Demand) license is applicable to both the Brocade DCX 8510-8 and DCX 8510-4. Descriptions of the applicable licensing for the Brocade DCX 8510 with Brocade FOS v7.0 are noted below. (Please note that the licensing of copper-based ICLs on the original Brocade DCX platforms is different; the following information does not apply to the Brocade DCX or DCX-4S Backbones.)

ICL POD License: Brocade DCX 8510-8 with Gen 5 Fibre Channel

One ICL POD license on the Brocade DCX 8510-8 enables the first 16 QSFP UltraScale ICL ports (ICL ports 0-7 on each core blade). This is equivalent to 16 × 64 Gbps, or 1 Tbps, of bandwidth. Two ICL POD licenses enable the remaining 16 QSFP UltraScale ICL ports (ICL ports 8-15 on each core blade), so that all 32 QSFP ports across both core routing blades are enabled. This is equivalent to 32 × 64 Gbps, or 2 Tbps, of bandwidth.

ICL POD License: Brocade DCX 8510-4 with Gen 5 Fibre Channel

Only one ICL POD license is required to enable all 16 QSFP UltraScale ICL ports available on the two core blades of the Brocade DCX 8510-4. This is equivalent to 16 × 64 Gbps, or 1 Tbps, of bandwidth.

Enterprise ICL (EICL) License: Brocade DCX 8510-8 and Brocade DCX 8510-4 with Gen 5 Fibre Channel

The EICL license is required on each Brocade DCX 8510 chassis that connects to four or more Brocade DCX 8510 chassis via UltraScale ICLs. This license requirement does not depend on the total number of Brocade DCX 8510 chassis in a fabric, only on how many chassis are directly connected via ICLs.
This license is in addition to the ICL POD license requirements noted above, which enable the actual ICL ports.
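Before the ICL ports can be brought online, the applicable ICL POD (and, where required, EICL) licenses must be installed on each chassis. Below is a minimal sketch using the standard FOS license commands; the key string is a placeholder, and output details vary by FOS version:

    switch:admin> licenseshow                          (list currently installed licenses)
    switch:admin> licenseadd "<ICL-POD-license-key>"   (install the ICL POD license key)
    switch:admin> licenseshow                          (confirm the new license appears)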

SUPPORTED TOPOLOGIES

Two network topologies are supported with the Brocade DCX 8510 Backbone platforms and optical UltraScale ICLs: core/edge and mesh. Both topologies deliver unprecedented scalability while dramatically reducing ISL cables.

Note: Always refer to the Brocade SAN Scalability Guidelines for Brocade FOS v7.x for the currently supported UltraScale ICL topology scalability limits.

Core/Edge Topology

A core/edge (CE) topology is an evolution of the well-established and popular star topology often used in data networks. CE designs have dominated Storage Area Network (SAN) architecture for many reasons, including the fact that they are well tested, well balanced, and economical. Figure 1 shows how a customer could deploy two Brocade DCX 8510s at the core and eight at the edge for a highly scalable, cost-effective topology. In most environments, servers are attached to the edge chassis and storage to the core. By connecting each edge chassis to each core chassis, all hosts and targets are separated by a maximum of one hop, regardless of where they are attached to the Brocade DCX 8510s. (A variety of CE designs can be implemented, with varying ratios of core to edge chassis to meet the needs of any environment.)

Figure 1. Ten-chassis core/edge topology supported with Brocade DCX 8510 and FOS v7.0.1 and higher. (Figure labels: Fabric Edge, Fabric Core.)

Mesh Topology

Mesh was a common design philosophy when SAN fabrics were first being built, because such fabrics were simple and easy to manage. But as larger fabrics became more common, the cabling infrastructure needed to support this topology became impossible to manage. Without direct connections between every pair of chassis, knowing where each storage and server port is located in order to provide ideal fabric routes can quickly become an operational nightmare. Brocade optical UltraScale ICL technology solves these issues by allowing each Brocade DCX 8510 to connect directly to every other DCX 8510 in the fabric, drastically simplifying the design and the operational issues associated with deployment. Figure 2 shows a nine-chassis active-active mesh topology using UltraScale ICLs.

Figure 2. Nine-chassis mesh topology supported with Brocade DCX 8510 and FOS v7.0.1 and higher.
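The port budget behind this nine-chassis mesh follows directly from the connection requirements covered in the next section: a minimum of four UltraScale ICLs per chassis pair, against the 32 QSFP ICL ports of a fully licensed Brocade DCX 8510-8. A quick worked check:

    8 peer chassis × 4 ICLs per pair (minimum) = 32 ICL ports used per chassis
    32 ICLs × 64 Gbps per ICL                  = 2 Tbps of ICL bandwidth per chassis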

QSFP-BASED ULTRASCALE ICL CONNECTION REQUIREMENTS

To connect multiple Brocade DCX 8510 chassis via UltraScale ICLs, a minimum of four ICL ports (two on each core blade) must be connected between each chassis pair. With 32 UltraScale ICL ports available on the Brocade DCX 8510-8 (with both ICL POD licenses installed), a single chassis supports ICL connectivity to up to eight other chassis, with at least 256 Gbps of bandwidth to each connected Brocade DCX 8510. Figure 3 shows the minimum connectivity between a pair of Brocade DCX 8510-8 chassis. (Note: The physical location of the UltraScale ICL connections may differ from what is shown here, as long as there are at least two connections per core blade.)

Figure 3. Minimum connections needed between a pair of Brocade DCX 8510-8 chassis (Domain 1 and Domain 2, with two UltraScale ICLs per core blade).

The dual connections on each core blade must reside within the same UltraScale ICL trunk boundary on the core blades. UltraScale ICL trunk boundaries are described in detail in the next section. If more than four UltraScale ICL connections are required between a pair of Brocade DCX 8510 chassis, additional connections should be added in pairs (one on each core blade).

A maximum of 16 UltraScale ICL connections or ICL trunk groups is supported between any pair of Brocade DCX 8510 chassis, unless Virtual Fabrics is deployed, in which case a maximum of 16 UltraScale ICL connections or trunks can be assigned to a single Logical Switch. This limitation is due to the maximum supported number of connections for Fabric Shortest Path First (FSPF) routing. Effectively, this means that there should never be more than 16 UltraScale ICL connections or trunks between a pair of Brocade DCX 8510 chassis unless Virtual Fabrics is enabled and the ICLs are assigned to two or more Logical Switches. The exception is when eight-link trunks are created between a pair of Brocade DCX 8510-8 chassis; details on this configuration are described in the next section.

QSFP-based UltraScale ICLs and traditional ISLs are not concurrently supported between a single pair of Brocade DCX 8510 chassis. All inter-chassis connectivity between any pair of Brocade DCX 8510 chassis must use either ISLs or UltraScale ICLs, but not both.

The final layout and design of UltraScale ICL interconnectivity is determined by the customer's unique requirements, which dictate the ideal number and placement of ICL connections between Brocade DCX 8510 chassis. Brocade Professional Services can assist in designing complex ICL-based topologies.

ULTRASCALE ICL TRUNKING AND TRUNK GROUPS

Trunking takes multiple physical connections between a pair of chassis or switches and forms a single virtual connection, aggregating the bandwidth for traffic to traverse. Brocade offers a number of hardware-based trunking solutions, including Brocade ISL Trunking for traditional ISLs, trunking for Integrated Routing (FCR connectivity), trunking for Access Gateway, and trunking for UltraScale ICLs. This section describes the trunking capability used with the QSFP-based UltraScale ICL ports on the Brocade DCX 8510 platforms. (Note that trunking is enabled automatically for UltraScale ICL ports and cannot be disabled by the user.)

As previously described, each QSFP-based UltraScale ICL port actually carries four independent 16-Gbps links, each of which terminates on one of the four Application-Specific Integrated Circuits (ASICs) on a Brocade DCX 8510-8 core blade, or one of the two ASICs on a DCX 8510-4 core blade. Trunk groups can be formed using any of the ports that make up contiguous groups of eight links on each ASIC. Figure 4 shows that each core blade has groups of eight UltraScale ICL ports (indicated by the blue boxes) that connect to common ASICs in such a way that their four links can participate in common trunk groups with links from the other ports in the group. Each Brocade DCX 8510-4 core blade has one group of eight UltraScale ICL ports, and each Brocade DCX 8510-8 core blade has two groups of eight UltraScale ICL ports.

Figure 4. Core blade trunk groups (groups of eight QSFP ports shown for the Brocade DCX 8510-4 and DCX 8510-8 core blades).

Since there are four separate links for each QSFP-based UltraScale ICL connection, each of these ICL port groups can create up to four trunks, with up to eight links in each trunk. A trunk can never be formed from links within the same QSFP ICL port, because each of the four links within the ICL port terminates on a different ASIC on the Brocade DCX 8510-8 core blade, or on either a different ASIC or a different trunk group within the same ASIC on the DCX 8510-4 core blade. Thus, each of the four links from an individual ICL always belongs to an independent trunk group.

When connecting UltraScale ICLs between a Brocade DCX 8510-8 and a DCX 8510-4, the maximum number of links in a single trunk group is four. This is due to the different number of ASICs on each product's core blades, as well as the mapping of the ICL links to the ASIC trunk groups. To form trunks with up to eight links, UltraScale ICL ports must be deployed within the trunk group boundaries indicated in Figure 4, and such trunks can be created only when deploying ICLs between a pair of Brocade DCX 8510-8 chassis or a pair of DCX 8510-4 chassis. It is not possible to create trunks with more than four links when connecting UltraScale ICLs between a Brocade DCX 8510-8 and a DCX 8510-4 chassis.
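Eight-link trunks are also what enable the exception, noted in the previous section, to the 16-connection FSPF limit between a chassis pair. A sketch of the arithmetic for a fully populated pair of Brocade DCX 8510-8 chassis:

    32 QSFP ICLs × 4 links each   = 128 individual 16-Gbps links
    128 links ÷ 8 links per trunk = 16 trunks, within the FSPF routing limit
    16 trunks × 128 Gbps          = 2 Tbps of aggregate ICL bandwidth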

As a best practice, it is recommended that you deploy trunk groups of up to four links by ensuring that the UltraScale ICL ports intended to form trunks all reside within the groups indicated by the red boxes in Figure 5.

Figure 5. Core blade recommended trunk groups (groups of four QSFP ports, labeled A through D, shown for the Brocade DCX 8510-4 and DCX 8510-8 core blades).

By following this recommendation, trunks can be easily formed using UltraScale ICL ports, whether you are connecting two Brocade DCX 8510-8 chassis, two Brocade DCX 8510-4 chassis, or a DCX 8510-8 and a DCX 8510-4. Any time additional UltraScale ICL connections are added to a chassis, they should be added in pairs, with at least one additional UltraScale ICL on each core blade. It is also highly recommended that trunks on a core blade always comprise equal numbers of links, and that you deploy connections in an identical fashion on both core blades within a chassis.

As an example, if you deploy two UltraScale ICLs within the group of four ICL ports in Trunk Group A in Figure 5, you can add a single additional ICL to Trunk Group A, or you can add a pair of ICLs to any of the other trunk groups on the core blade. This ensures that no trunks are formed with a different total bandwidth from other trunks on the same blade. Deploying a single additional UltraScale ICL to Trunk Group B instead could result in four trunks with 32 Gbps of capacity (those created from the ICLs in Trunk Group A) and four trunks with only 16 Gbps (those from the single ICL in Trunk Group B).

The port mapping information shown in Figure 6 and Figure 7 also indicates the recommended UltraScale ICL trunk groups, by showing ports in the same recommended trunk group in the same color.
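Once cabled, trunk formation can be checked from the FOS CLI. This is a minimal sketch using standard FOS commands; output formats vary by FOS release, so treat the annotations as guidance rather than exact output:

    switch:admin> switchshow   (confirm the internal ICL ports, numbered as in Figures 6 and 7, are online)
    switch:admin> trunkshow    (list the trunk groups that formed and their member ports)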

CORE BLADE (BROCADE CR16-8) PORT NUMBERING LAYOUT

Figure 6 shows the layout of ports 0-15 on the Brocade DCX 8510-8 CR16-8 core blade, along with the corresponding port numbers reported by the switchshow command in the Brocade FOS CLI. The colored groups of external UltraScale ICL ports indicate ports that belong to common recommended trunk groups. For example, ports 0-3 (shown in blue in Figure 6) form four trunk groups, with one link added to each trunk group from each of the four external ICL ports. For the Brocade DCX 8510-8, you can create up to 16 trunk groups on each of the two core blades. The first ICL POD license enables ICL ports 0-7; adding a second ICL POD license enables the remaining eight ICL ports, 8-15. This applies to the ports on both core blades.

    External ICL Port #    Switchshow Port #s
    0                      0-3
    1                      4-7
    2                      8-11
    3                      12-15
    4                      16-19
    5                      20-23
    6                      24-27
    7                      28-31
    8                      32-35
    9                      36-39
    10                     40-43
    11                     44-47
    12                     48-51
    13                     52-55
    14                     56-59
    15                     60-63

Figure 6. Brocade DCX 8510-8 CR16-8 core blade: external UltraScale ICL port numbering to switchshow (internal) port numbering.

Note: To disable external ICL port 0, you need to issue the portdisable command on all four internal ports associated with that ICL port.

CORE BLADE (BROCADE CR16-4) PORT NUMBERING LAYOUT

Figure 7 shows the layout of ports 0-7 on the Brocade DCX 8510-4 CR16-4 core blade, along with the corresponding port numbers reported by the switchshow command in the Brocade FOS CLI. The colored groups of external UltraScale ICL ports indicate ports that belong to a common recommended trunk group. For example, ports 0-3 (shown in blue in Figure 7) form four trunk groups, with one link added to each trunk group from each of the four external ICL ports. For the Brocade DCX 8510-4, you can create up to eight trunk groups on each of the two core blades. A single ICL POD license enables all eight ICL ports on the Brocade DCX 8510-4 core blades. This applies to the ports on both core blades.

    External ICL Port #    Switchshow Port #s
    0                      0-3
    1                      4-7
    2                      8-11
    3                      12-15
    4                      16-19
    5                      20-23
    6                      24-27
    7                      28-31

Figure 7. Brocade DCX 8510-4 CR16-4 core blade: external UltraScale ICL port numbering to switchshow (internal) port numbering.

Note: To disable external ICL port 0, you need to issue the portdisable command on all four internal ports associated with that ICL port.
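Per the notes above and the port mapping in Figures 6 and 7, taking external ICL port 0 offline means disabling all four of its internal switchshow ports (0 through 3). A minimal sketch; the slot number 5 is an assumption for illustration, since core blade slot numbering depends on the chassis:

    switch:admin> portdisable 5/0   (first internal port of external ICL port 0; slot 5 assumed)
    switch:admin> portdisable 5/1
    switch:admin> portdisable 5/2
    switch:admin> portdisable 5/3
    switch:admin> portenable 5/0    (repeat portenable on all four ports to bring the ICL back)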

ULTRASCALE ICL DIAGNOSTICS

Brocade FOS v7.1 provides ClearLink diagnostic port (D_Port) support for UltraScale ICLs, helping administrators quickly identify and isolate ICL optics and cable problems. ClearLink diagnostics on UltraScale ICLs measures link distance and performs link traffic tests; it skips the electrical loopback and optical loopback tests, because the QSFP does not support those functions. In addition, Brocade FOS v7.1 offers ClearLink D_Port test CLI enhancements for increased flexibility and control.

ULTRASCALE ICL ROUTING

For Virtual Fabrics-enabled environments, Brocade FOS v7.2 adds the ability to configure EX_Ports on the UltraScale ICLs of Brocade DCX 8510 platforms that are connected to other DCX 8510 platforms, utilizing the ICL bandwidth to route traffic across different fabrics. This new capability allows users to build very high-performance Inter-Fabric Links (IFLs) using UltraScale ICLs, while simplifying cabling.

SUMMARY

Brocade QSFP-based optical UltraScale ICLs enable simpler, flatter, low-latency chassis topologies, spanning up to 100 meters with off-the-shelf cables. These UltraScale ICLs dramatically reduce inter-switch cabling requirements and provide up to 33 percent more front-end ports for servers and storage, yielding more usable ports in a smaller footprint with no loss in connectivity. To find out more about the Brocade DCX 8510 family and UltraScale ICL features and benefits, talk to your sales representative or visit www.brocade.com/dcx8510.

© 2013 Brocade Communications Systems, Inc. All Rights Reserved. 09/13 GA-TB-427-03

ADX, AnyIO, Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, ICX, MLX, MyBrocade, OpenScript, VCS, VDX, and Vyatta are registered trademarks, and HyperEdge, The Effortless Network, and The On-Demand Data Center are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned may be trademarks of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.