Fibre Channel Storage Area Network Design BRKSAN-2701


Agenda. Brief SAN Technology Overview: Fibre Channel Protocol; Virtual SAN (VSAN), Zoning; Port Channels; NPV and FlexAttach; F-Port Port Channel and F-Port Trunking. SAN Design Principles and Considerations: Design Factors; Design Types; Design Optimization. SAN Security Design Considerations. Intelligent Fabric Applications. Interoperability Design Considerations.

SAN Technology Overview Presentation_ID 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public 3

SAN Technology Overview Agenda. Fibre Channel Protocol: FC communications; addressing and framing; port types and ISLs; BB_Credits; FSPF. Virtual SAN (VSAN), Zoning. Port Channels. Virtual Output Queuing (VOQ). NPV and FlexAttach. F-Port Port Channel and F-Port Trunking.

The SCSI I/O Transaction. The SCSI protocol forms the basis of an I/O transaction. The point-to-point communication provides reliable connectivity between communicating devices in a SCSI transaction. [Diagram: two sample SCSI exchanges over the SCSI I/O channel. SCSI READ: the host (initiator) sends READ; the disk (target) returns DATA frames and STATUS. SCSI WRITE: the host sends WRITE; the target returns READY; the host sends DATA frames; the target returns STATUS.]

Fibre Channel Communications. Point-to-point oriented, facilitated through device login: an N_Port-to-N_Port connection, where the N_Port is the logical node connection point. Flow controlled on a buffer-to-buffer credit and end-to-end basis. Acknowledged for certain classes of traffic, unacknowledged for others. Multiple connections are allowed per device. [Diagram: two nodes joined by a link, N_Port to N_Port, each port with a transmitter and a receiver.]

Fabric Name and Addressing: WWN. Every Fibre Channel port and node has a hard-coded address called a World Wide Name (WWN), allocated to the manufacturer by the IEEE and coded into each device when manufactured; WWNs are 64 or 128 bits. The switch name server maps WWNs to FC_IDs. A WWNN uniquely identifies a device; a WWPN uniquely identifies each port in a device.
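As a sketch of the WWN structure described above, the following splits a 64-bit WWN into its NAA (Network Address Authority) format nibble and the IEEE-assigned OUI; the layout shown applies to the common NAA type 1/2 formats, and the example WWN is hypothetical:

```python
def parse_wwn(wwn: str):
    """Split a 64-bit WWN (e.g. '20:00:00:25:b5:00:0a:01') into its
    NAA format nibble and, for NAA type 1/2 layouts, the IEEE OUI
    that identifies the manufacturer."""
    octets = wwn.split(":")
    if len(octets) != 8:
        raise ValueError("expected a 64-bit WWN as 8 colon-separated octets")
    naa = int(octets[0], 16) >> 4        # high nibble of the first octet
    oui = ":".join(octets[2:5])          # bytes 2-4 hold the OUI in NAA 1/2
    return {"naa": naa, "oui": oui}

print(parse_wwn("20:00:00:25:b5:00:0a:01"))
# {'naa': 2, 'oui': '00:25:b5'}
```

The same OUI appears in both the WWNN and each WWPN of a device, which is why the name server can group a device's ports under one node name.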

FC_ID Address Model. FC_ID address models help speed up routing. Switches assign FC_ID addresses to N_Ports; some addresses are reserved for fabric services. Private loop devices only understand an 8-bit address (0x0000xx); an FL_Port can provide a proxy service for public address translation. Maximum switch domains = 239 (based on the standard). [Diagram: the 24-bit FC_ID is three 8-bit fields. Switch topology model: Switch Domain | Area | Device. Private loop device address model: 00 | 00 | Arbitrated Loop Physical Address (AL_PA). Public loop device address model: Switch Domain | Area | AL_PA.]
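The 8/8/8-bit split of the switch topology model can be illustrated directly (the sample FC_ID is made up):

```python
def split_fcid(fcid: int):
    """Split a 24-bit Fibre Channel ID into the switch-topology fields:
    Domain (which switch), Area (port group), Port (device)."""
    return {
        "domain": (fcid >> 16) & 0xFF,
        "area":   (fcid >> 8) & 0xFF,
        "port":   fcid & 0xFF,
    }

print(split_fcid(0x0A01EF))   # {'domain': 10, 'area': 1, 'port': 239}
```

Because the domain byte identifies the switch, FSPF only needs to route on the top 8 bits, which is what "helps speed up routing" above refers to.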

Fibre Channel FC-2 Hierarchy. Multiple exchanges are initiated between initiators (hosts) and targets (disks). Each exchange consists of one or more bidirectional sequences, and each sequence consists of one or more frames. For the SCSI-3 ULP, each exchange maps to a SCSI command. [Diagram: an exchange (identified by OX_ID and RX_ID) contains sequences (SEQ_ID), each containing frames (counted by SEQ_CNT); a ULP information unit maps onto the frame fields.]

Fibre Channel Port Types. [Diagram: nodes attach via N_Ports (or NL_Ports on arbitrated loops) to switch F_Ports (or FL_Ports); switches interconnect E_Port to E_Port; a G_Port is a generic switch port that can come up as either an E_Port or an F_Port.]

Inter-Switch Link (ISL), EISL. The interconnection between switches is called the ISL: E_Port to E_Port (Expansion Port). FC-PH permits consecutive frames of a sequence to be routed over different ISL links for maximum throughput. Cisco's implementation dedicates an FC_ID pair and/or a given exchange to an ISL bundle member to guarantee in-order delivery of exchange/sequence frames. A Cisco Extended ISL (EISL) runs between TE ports.

Optical SAN Extension: BB_Credits and Distance. A full (2112-byte) FC frame occupies roughly 2 km of fibre at 1 Gbps, 1 km at 2 Gbps, ½ km at 4 Gbps, and ¼ km at 8 Gbps. BB_Credits are used to ensure enough FC frames are in flight. As distance increases (the diagram shows a 16 km link), the number of available BB_Credits needs to increase as well. Insufficient BB_Credits will throttle performance: no data is transmitted until an R_RDY is returned.
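The frame-length rule of thumb above translates into a minimal credit calculation. This is a simplification for illustration: it assumes full-size frames and ignores the return latency of the R_RDY primitive:

```python
import math

def min_bb_credits(distance_km: float, speed_gbps: float) -> int:
    """Rule-of-thumb minimum BB_Credits to keep a link of the given
    length full of 2112-byte frames: one full frame occupies ~2 km of
    fibre at 1 Gbps, ~1 km at 2 Gbps, ~0.5 km at 4 Gbps, so the frame
    'shrinks' on the wire as the line rate goes up."""
    frame_length_km = 2.0 / speed_gbps
    return max(1, math.ceil(distance_km / frame_length_km))

print(min_bb_credits(16, 4))   # 32 credits for a 16 km link at 4 Gbps
```

Doubling the link speed halves the frame's length on the wire, so the same distance needs twice the credits, which is why long-distance ISLs are the first place credit starvation shows up.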

FSPF Protocol. FSPF stands for Fabric Shortest Path First, the path-selection protocol used in Fibre Channel. It is a link-state protocol, defined in the FC-SW-2 Fibre Channel standard, and is conceptually based on the Open Shortest Path First (OSPF) Internet routing protocol.
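Like OSPF, FSPF computes least-cost paths between switch domains with a shortest-path algorithm. The sketch below uses Dijkstra's algorithm; the link costs (roughly 1000 for 1 Gbps, 500 for 2 Gbps, 250 for 4 Gbps, following the common inverse-bandwidth default) and the four-switch fabric are assumptions for illustration:

```python
import heapq

def fspf_path(adj, src, dst):
    """Minimal Dijkstra sketch of FSPF path selection between switch
    domains. adj maps domain -> {neighbor_domain: link_cost}."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, cost in adj[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]

# Hypothetical fabric: domains 1-4, 4G links cost 250, 2G links cost 500.
fabric = {1: {2: 500, 3: 250}, 2: {1: 500, 4: 500},
          3: {1: 250, 4: 250}, 4: {2: 500, 3: 250}}
print(fspf_path(fabric, 1, 4))   # ([1, 3, 4], 500)
```

The 4 Gbps path through domain 3 wins over the 2 Gbps path through domain 2, mirroring how FSPF prefers faster ISLs.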

VSANs, Zoning, IVR Zones. Fabric virtualization (VSAN): provides independent ("virtual") fabric services on a single physical switch; VSANs and zoning are the design foundation. Fabric routing (Inter-VSAN Routing, IVR): the ability to provide selected connectivity between virtual fabrics without merging them, using IVR zones. Virtual fabric trunking (VSAN trunking): the ability to transport multiple virtual fabrics over a single ISL or a common group of ISLs.


SAN Islands Before VSANs. [Diagram: separate physical fabrics for the Production, Tape, and Test SANs (SAN A through SAN F), consuming domain IDs 1 through 8.]

SAN Islands with Virtual SANs. [Diagram: the same Production, Tape, and Test SAN islands (SAN A through SAN F) consolidated as VSANs on a common physical infrastructure, using domain IDs 1 through 6.]

Zoning and VSANs: a hierarchical relationship. First assign physical ports to VSANs, then configure independent zones per VSAN. VSANs change only when ports are needed per virtual fabric; zones can change frequently (e.g., for backup). Zones provide added security and allow sharing of device ports. Zone membership is configured by: port World Wide Name (pWWN) of the device; fabric World Wide Name (fWWN) of the fabric; Fibre Channel identifier (FC_ID); Fibre Channel alias (FC_Alias); IP address; domain ID/port number; or interface. Zones and VSANs are complementary, and there is one active zoneset per VSAN. [Diagram: a physical topology split into VSAN 2 (active zoneset A, with zone A for Host1/Host3/Disk3 and zone B for Disk4) and VSAN 7 (active zoneset D, with zones C and D for Host2, Host4, Disk5, Disk6), each VSAN with its own default zone.]
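The hierarchy above (VSAN first, then zone membership within the VSAN's one active zoneset) can be sketched as a simple membership check. This is an illustrative model of the access rule, not the MDS enforcement logic, and the names are hypothetical:

```python
def can_communicate(vsan_of, active_zones, wwpn_a, wwpn_b):
    """Two pWWNs may talk only if they sit in the same VSAN and share
    at least one zone of that VSAN's active zoneset.
    vsan_of: pwwn -> vsan id; active_zones: vsan id -> list of member sets."""
    if vsan_of.get(wwpn_a) != vsan_of.get(wwpn_b):
        return False                       # different virtual fabrics
    vsan = vsan_of[wwpn_a]
    return any(wwpn_a in zone and wwpn_b in zone
               for zone in active_zones.get(vsan, []))

vsan_of = {"host1": 2, "disk3": 2, "host2": 7}
zones = {2: [{"host1", "disk3"}]}
print(can_communicate(vsan_of, zones, "host1", "disk3"))  # True
print(can_communicate(vsan_of, zones, "host1", "host2"))  # False: VSAN 2 vs 7
```

Note the VSAN test comes first: zoning never bridges VSANs, which is exactly why IVR exists as a separate mechanism.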


Inter-VSAN Routing (IVR). Enables devices in different VSANs to communicate. Allows selective routing between specific members of two or more VSANs: traffic flows only between the selected devices, enabling resource sharing, i.e., tape libraries and disks. [Diagram: an IVR zone spanning a media server in VSAN 10 and a tape library in VSAN 20.]

IVR Zones. An IVR zone is a container for access control, containing two or more devices in different VSANs; standard zones are still used to provide intra-VSAN access. An IVR zoneset is a collection of IVR zones that must be activated to be operational. [Diagram: a physical topology with VSAN 2 (zones A and B: Host1, Disk2, Disk3, Disk4) and VSAN 3 (zones C and D: Host2, Host3, Host4, Disk5, Disk6), plus an inter-VSAN zone joining members across the two VSANs.]

Port Channels. A port aggregation feature used to create a single logical ISL from 1 to 16 physical ISLs. Increases bandwidth and availability. Very granular load balancing, per exchange/source/destination or per source/destination (policy set on a per-VSAN basis). Interfaces can be added and removed in a nondisruptive manner in production environments. Preserves the FC guarantee of in-order delivery (IOD). [Diagram: a 4-link Port Channel forming one EISL.]
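The two load-balancing policies above can be illustrated with a toy hash that pins a flow (or an exchange) to one member link; the real hardware hash is not public, so the CRC-32 used here is purely an assumption for the sketch:

```python
import zlib

def portchannel_member(src_fcid, dst_fcid, ox_id, n_links, per_exchange=True):
    """Pick a Port Channel member link for a frame. With the
    per-exchange policy the OX_ID joins the hash key, spreading
    exchanges between the same pair across links; with the
    src/dst policy the whole flow sticks to one link. Either way,
    all frames of one exchange land on one link, preserving IOD."""
    key = (src_fcid, dst_fcid, ox_id if per_exchange else 0)
    return zlib.crc32(repr(key).encode()) % n_links

# Every frame of one exchange hashes to the same member link:
a = portchannel_member(0x0A0001, 0x0B00EF, 0x1234, n_links=4)
b = portchannel_member(0x0A0001, 0x0B00EF, 0x1234, n_links=4)
print(a == b)   # True
```

This is why the slide can claim both "very granular load balancing" and "in-order delivery": the granularity is per exchange, never per frame.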

VSANs, Trunking, Port Channels: a hierarchical relationship. Port Channels provide link aggregation to yield a virtual ISL (E_Port). A single-link ISL or a Port Channel ISL can be configured to become an EISL (TE_Port). VSANs can be selectively grafted onto or pruned from EISL trunks. All member links of a Port Channel must have the same configuration prior to creating the channel (e.g., TE_Port or E_Port, VSANs enabled, etc.). Port Channel technology provides high availability and fast recovery for a VSAN trunk (EISL). Multiple Port Channels yield multiple paths for custom traffic engineering, using per-VSAN FSPF metrics on each trunk. [Diagram: a 4-link (8 Gbps) Port Channel configured as an EISL between two TE_Ports, with per-VSAN metrics such as VSAN 10 = 100, VSAN 20 = 50 on one trunk and VSAN 10 = 50 on the other.]

Virtual Output Queuing (VOQ). [Diagram: in a switch with no VOQ, frames queued at an input port behind a frame destined for a congested output port are stuck, even when their own output ports are free: head-of-line (HOL) blocking. In a switch with VOQ support, each input port keeps a separate virtual queue per output port, serviced through a central arbiter, so a congested output no longer blocks traffic to other outputs. VOQ alleviates HOL blocking.]

N-Port Virtualizer (NPV): Simplify Large-Scale Blade Server Deployments. Two deployment models: the blade switch runs in FC switch mode, connecting E_Port to E_Port into the SAN, or is configured as NPV (i.e., HBA mode), connecting N-Port to F-Port on the core switch. NPV enables large-scale blade server deployments by reducing domain ID usage, addressing switch interoperability issues, and simplifying management. Blade switch attribute comparison, FC switch mode (E-Port) versus HBA mode (N-Port): number of domain IDs used: one per FC blade switch versus none (uses the domain ID of the core switch); interoperability issues with a multivendor core SAN switch: yes versus no; level of management coordination between server and SAN administrators: medium versus low.

FlexAttach: Flexibility for Adds, Moves, and Changes. Each blade switch F-Port is assigned a virtual WWN; the blade switch performs NAT operations on the real WWN of the attached server. The virtual pWWN can be locked to the FC_ID or can follow the blade. Benefits: no SAN re-configuration is required when a new blade server attaches to a blade switch port (no blade switch config change, no switch zoning change, no array configuration change); provides flexibility for the server administrator by eliminating the need to coordinate change management with the networking team; reduces downtime when replacing failed blade servers.

F-Port Port Channel and F-Port Trunking: Enhanced Blade Switch Resiliency. F-Port Port Channel with NPV: bundle multiple ports into one logical link, using any port on any module. High availability: blade servers are unaffected if a cable, port, or line card fails. Traffic management: higher aggregate bandwidth with hardware-based load balancing. F-Port Trunking with NPV: partition the F-Port to carry traffic for multiple VSANs, extending VSAN benefits to blade servers: separate management domains, separate fault-isolation domains, and differentiated services (QoS, security). [Diagram: blade system N-Ports connected over an F-Port Port Channel and an F-Port trunk carrying VSANs 1, 2, and 3 to core directors in the A and B SANs.]

FCoE Connectivity Extends FC SANs. FCoE preserves FC investments and increases the SAN-attach rate of servers. [Diagram: an MDS 9000 fabric connecting FC, FICON, FCoE, iSCSI, and FCIP, with services including NPV, NPIV, FlexAttach, SAN extension, block virtualization with Invista, SANTap with RecoverPoint, SAN security (SME with RKM), VSANs, QoS, and Port Channels.]

Network Stack Comparison. [Diagram: five ways to carry SCSI over the same physical wire. Parallel SCSI: SCSI directly on the wire. Native Fibre Channel: SCSI / FCP / FC. FCIP: SCSI / FCP / FC / FCIP / TCP / IP / Ethernet. iSCSI: SCSI / iSCSI / TCP / IP / Ethernet. FCoE: SCSI / FCP / FCoE / lossless Ethernet.]

Fibre Channel Basics Summary. Fibre Channel is a very robust, hierarchical standard. Fibre Channel utilizes a point-to-point communications model irrespective of the topology, and includes a full set of services for naming, addressing, building, and managing fabrics. Fibre Channel utilizes FSPF, an OSPF-like routing protocol, to route traffic. Fibre Channel zoning, VSANs, and IVR are the methods of logically grouping devices within a given fabric. NPV, FlexAttach, F-Port Port Channel, and F-Port Trunking create blade-server-aware SANs. FCoE extends the FC SAN over Ethernet.

SAN Design Principles and Considerations

Agenda. Brief SAN Technology Overview: Fibre Channel Protocol; Zoning, Virtual SAN (VSAN); Port Channels; NPV and FlexAttach; F-Port Port Channel and F-Port Trunking. SAN Design Principles and Considerations: Design Factors; Design Types; Design Optimization. SAN Security Design Considerations. Intelligent Fabric Applications. Interoperability Design Considerations.

SAN Design Principles and Considerations. Determine the components to be used and how they will fit into your overall strategy. Create the technical infrastructure and decide how the pieces will fit together. Determine how the existing infrastructure and the new one will be integrated. Create the processes and procedures that will guide personnel in how the infrastructure is to be used. [Diagram: data center servers connected through a storage network to disk and tape storage.]

Design Factors

Early SAN Designs. 1. The first SANs hardly qualified as networks: SAN islands of two to four switches, with fixed 8-16 port switches limiting SAN growth and low traffic across ISLs. 2. No fabric segmentation such as VSANs, so faults impacted all devices. 3. Limited enhancements to FSPF: no Port Channeling, no equal-cost load balancing, single routes. 4. Traffic management was not needed: no QoS, because bandwidth was over-provisioned. 5. Management tools focused on element management, not network management; switches were managed separately.

SAN Major Design Factors. 1. Port density: how many ports now, how many later? This drives the topology (large port-count directors). 2. Network performance: what oversubscription is acceptable, and what is unavoidable? (High-performance crossbar.) 3. Traffic management: preferential routing or resource allocation (QoS, congestion control, reduced FSPF routes). 4. Fault isolation: consolidation while maintaining isolation, so the failure of one device has no impact on others. 5. Management: secure, simplified management.

1. Scalability: Port Density and Topology Requirements. Number of ports for end devices: how many ports are needed now? What is the expected life of the SAN? How many ports will be needed in the future? Use a hierarchical SAN design. Best practice: design to cater for future requirements. This doesn't imply building it all now; it means catering for growth, which avoids costly retrofits tomorrow.

2. Network Performance: Oversubscription Design Considerations. All SAN designs have some degree of oversubscription; without it, SANs would be too costly. Oversubscription is introduced at multiple points, and switches are rarely the bottleneck in SAN implementations. Disk oversubscription: disks do not sustain wire-rate I/O with realistic I/O mixtures; most major vendors promote a 12:1 host:disk fan-out, with ~70 MBps max per port being common. Tape oversubscription: low sustained I/O rates; the LTO-2 native transfer rate is ~60 MBps (60 MBps max per port is common). Host oversubscription: most hosts suffer from PCI bus, OS, and application limitations that cap the maximum I/O and bandwidth rates (~40 MBps max per HBA is common). Device capabilities (peak and sustained) must be considered along with network oversubscription, and oversubscription during a network failure event must also be considered. Remember, all traffic flows toward the targets, the main bottlenecks. Typical oversubscription in a two-tier design can approach 8:1 (7:1 is common), sometimes even higher; Port Channels help reduce oversubscription while maintaining HA requirements.

3. Traffic Management. Do different applications/servers have different performance requirements? Should bandwidth be reserved for specific applications? Is preferential treatment (QoS) necessary? Given two alternate paths for traffic between data centers, should traffic use one path in preference to the other (preferential routes)?

4. Fault Isolation. Consolidation of storage into a single fabric means increased storage utilization plus reduced administration overhead; the major drawback is that faults are no longer isolated. Technologies such as VSANs enable consolidation and scalability while maintaining security and stability: physical SAN islands are virtualized onto a common SAN infrastructure, and VSANs constrain fault impacts. Faults in one virtual fabric (VSAN) are contained and do not impact other virtual fabrics.

5. Management. SAN security: RBAC on a per-VSAN basis; FC-SP for switch-to-switch or device-to-switch security. Traffic monitoring: through Fabric Manager Server, Device Manager, and Performance Manager. [Diagram: Fabric Manager clients connecting to the Fabric Manager Server, which manages the switches via SNMP.]

Fabric Topologies

Design Types: Core-Edge, Top-of-Rack, Collapsed Core-Edge.

Core-Edge. The traditional SAN design for growing SANs: high-density directors in the core, with fabric switches, directors, or blade switches on the edge. Predictable performance; scalable growth up to core and ISL capacity. [Diagram: redundant A and B fabrics from edge switches to core directors.]

Large Core-Edge End-of-Row Design (2104 usable ports per fabric). The traditional core-edge design is ideal for centralized services and consistent host-disk performance regardless of location, but it requires massive cabling for end-of-row; the A fabric is shown, repeated for the B fabric. Components: 2 x 18+4-port line cards for FCIP to the remote data center; 12-port line cards for dedicated 4G ISLs and storage ports; 48-port line cards for shared 4G host ports; 120 storage ports at 4G; 12 ISLs to the core at 4G, 12 ISLs to the storage edge, 96 ISLs to the host edge; 48 ISLs to the core at 4G and 336 host ports at 4G on the host edge. Totals: ports deployed 2104; used ports 2024; storage ports (4G dedicated) 240; host ports (4G shared) 1344; host ISL oversubscription 7:1 (336/48); end-to-end oversubscription 5.6:1 (1344/240).
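The ratios quoted in these design examples are simple bandwidth quotients; the helper below reproduces the large core-edge figures (336 host ports feeding 48 ISLs, and 1344 host ports against 240 storage ports, all at 4 Gbps):

```python
def oversubscription(edge_ports, edge_gbps, uplink_ports, uplink_gbps):
    """Generic oversubscription ratio: offered edge bandwidth divided
    by available uplink (or storage) bandwidth."""
    return (edge_ports * edge_gbps) / (uplink_ports * uplink_gbps)

print(oversubscription(336, 4, 48, 4))     # 7.0 -> the 7:1 host-to-ISL figure
print(oversubscription(1344, 4, 240, 4))   # 5.6 -> the 5.6:1 end-to-end figure
```

The same function covers the top-of-rack examples that follow, where host and storage ports run at different speeds, which is why the speed arguments are kept separate.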

Top-of-Rack Design: MDS 9134 and 10G ISLs (1200 usable ports per fabric). A top-of-rack design utilizing 10 Gb ISLs; high-bandwidth ISLs provide ample performance and reduce cabling. 14 racks, 32 dual-attached servers per rack. Components: 96 storage ports at 2G; 28 ISLs to the edge at 10G; per rack switch, 2 ISLs to the core at 10G and 32 host ports at 4G. Totals: ports deployed 1200; used ports 1200; storage ports (2G dedicated) 192; host ports (4G shared) 896; host ISL oversubscription 6.4:1 (32*4/20); end-to-end oversubscription 9.3:1 (896*4/(192*2)).

Top-of-Rack Design: Blade Centers (804 usable ports per fabric). A blade server design using 2 x 4G ISLs per blade switch; oversubscription can be reduced for individual blade centers by adding additional ISLs as needed, at the cost of managing more SAN edge switches/blade switches. Five racks, 96 dual-attached blade servers per rack. Components: 120 storage ports at 2G; 60 ISLs to the edge at 4G; per blade switch, 2 ISLs to the core at 4G and 16 host ports at 4G. Totals: ports deployed 1608; used ports 1440; storage ports (2G dedicated) 240; host ports (4G shared) 960; host ISL oversubscription 8:1 (16/2); end-to-end oversubscription 8:1 (960*4/(240*2)).

FCoE for Data Center Consolidation. [Diagram: with separate LAN and SAN A/B fabrics, 10 servers need 20 Ethernet plus 20 FC adapters (40 total), 4 switches, 80 cables, and 4 management points: nearly twice the cables. With FCoE consolidation, the same 10 servers need 20 adapters, 2 switches, 40 cables, and 2 management points.]

Top-of-Rack Consolidated I/O: I/O consolidation at the access layer. [Diagram: SAN A, SAN B, and the LAN core above an MDS 9500 distribution layer; Nexus 5000 access switches in each POD connect server cabinet pairs via 10 GE/FCoE CNAs.]

Collapsed Core/Edge Design. Compared with the traditional core-edge design, a Cisco MDS 9500 director in a collapsed-core configuration provides both full-performance ports (non-oversubscribed, non-blocking) and host-optimized ports (oversubscribed, non-blocking). A collapsed core typically has a lower oversubscription ratio and room to grow: empty slots mean future port-count growth. While director ports are more expensive than fabric switch ports, a collapsed-core design has no ports wasted on ISLs, so the cost per usable port is similar.

Medium-Scale Dual-Fabric Collapsed-Core Design: dual director switches (up to 528 ports per fabric). A medium-scale design leveraging 48-port modules with port bandwidth reservations to provide a high-density solution. VSAN support; port bandwidth reservations guarantee performance for those devices that require it; Port Channels with HA to other switches allow future growth, scaling from a collapsed core to a core-edge design. Within each port group, one or two ports go to storage (1x or 2x dedicated bandwidth) and 11 or 10 ports to hosts (11x or 10x shared). With 11 x 48-port modules (528 ports total): 48 ports for storage and 480 ports for hosts, 10:1 oversubscription. Totals: ports deployed 528; usable ports 528; available ports 0; design efficiency 100%; end-to-end oversubscription 10:1 (480:48). Across both fabrics: 96 storage ports and 960 host ports; the A fabric is shown, repeated for the B fabric.

Design Optimization

Blocking Impact on Design Performance. Performance can be adversely affected across an entire multiswitch FC fabric by a single blocking port. HOL blocking is a transitory event, lasting until some BB_Credits are returned on the blocked port. To help alleviate the blocking problem and enhance design performance: Virtual Output Queuing (VOQ) on all ports, and deep buffers (255 BB_Credits per port on Generation 1 line cards, 6000 on Generation 2 and 3 line cards).

Advanced Traffic Management. Port bandwidth reservation: dedicated-mode ports can act at any dedicated rate, including line rate; shared mode enhances utilization through oversubscription with round-robin or assured fairness. Port Channels to scale connectivity: bundle ISLs between switches for additional resiliency, with an in-order frame delivery (IOD) guarantee. [Diagram: a low-priority department/customer A and a high-priority department/customer B sharing storage over a VSAN-enabled fabric with VSAN trunks; potential bottlenecks marked.]

Advanced Traffic Management: QoS. QoS allows traffic to be intelligently managed: it minimizes the impact of oversubscription, allows more economical topologies, and prioritizes traffic by flow. VOQ for switch performance: slow flows do not disrupt fast flows, and frame forwarding is non-blocking.

Enhanced Quality of Service (QoS). Arbiter-aware QoS (requires Supervisor-2) can be enabled within a switch or across the network, allowing QoS even in a single-switch configuration. DiffServ DWRR weights are user-definable: the transmit queue has an absolute-priority queue (PQ) plus DWRR queues, for example queue 2 with weight 60, queue 3 with weight 10, and queue 4 with weight 30.
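The 60/10/30 weighting above can be sketched with a minimal deficit-weighted-round-robin scheduler. This is an illustrative model only (the absolute-priority queue, which would be drained first, is omitted, and frame sizes are arbitrary units):

```python
from collections import deque

def dwrr(queues, weights, quantum=100):
    """Minimal DWRR sketch: each round a queue earns credit proportional
    to its weight and sends frames while it has enough credit; leftover
    credit carries over, and empty queues forfeit theirs.
    queues: name -> list of frame sizes; weights: name -> percent."""
    qs = {name: deque(frames) for name, frames in queues.items()}
    deficit = {name: 0 for name in qs}
    order = []                                   # which queue sent each frame
    while any(qs.values()):
        for name in qs:
            if not qs[name]:
                continue
            deficit[name] += weights[name] * quantum // 100
            while qs[name] and deficit[name] >= qs[name][0]:
                deficit[name] -= qs[name].popleft()
                order.append(name)
        for name in qs:                          # empty queues lose credit
            if not qs[name]:
                deficit[name] = 0
    return order

sent = dwrr({"q2": [50] * 6, "q3": [50] * 6, "q4": [50] * 6},
            {"q2": 60, "q3": 10, "q4": 30})
print(sent[0])   # 'q2' is serviced first and drains soonest
```

With equal offered load, q2 empties first and q3 last, matching the 60/10/30 bandwidth shares the slide configures.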

SAN Extension Design Considerations. Transport: optical or IP WAN/MAN. High availability: application availability, IVR. Optimal performance in latency and throughput: application performance, tape and write acceleration, and resilience to WAN problems. WAN bandwidth at optimal use and lowest cost: WAN bandwidth utilization, hardware compression. QoS to maintain and assure service: traffic management, IVR, QoS, TCP tuning, IPv6. Data security in transit: FCIP IPsec and FC TrustSec encryption, plus FC-SP authentication. [Diagram: primary data center linked to the backup data center over a WAN/MAN.]

Enhancing SAN Extension Design. SAN I/O acceleration (IOA) extends the effective distance for remote applications: write acceleration and tape acceleration reduce WAN-induced latency and improve application performance over distance. The result is increased distance and improved performance for SAN extension.

SAN Security

SAN Design Security Challenges. SAN design security is often overlooked as an area of concern: application integrity and security are addressed, but not the back-end storage network carrying the actual data. SAN extension solutions now push SANs outside data center boundaries. Not all compromises are intentional; accidental breaches can still have the same consequences. SAN design security is only one part of a complete data center solution, alongside host access security (one-time passwords, auditing, VPNs), storage security (data-at-rest encryption, LUN security), and data center physical security. [Diagram: threats include external DoS or other intrusion, privilege escalation or unintended privilege, application tampering (Trojans, etc.), unauthorized internal connections, theft, and data tampering.]

SAN Security Design Considerations. Protecting data: data integrity and encryption, in transit or at rest. Securing against unauthorized user and device access: user/device authorization and authentication; server and target access controls. Guarding against malicious management misconfiguration: management access controls; securing the SAN management information. [Diagram: host fabric-access security, data integrity and secrecy across the SAN fabrics, FC protocol security, target access security, SAN management security, and IP storage security (iSCSI/FCIP).]

SAN Security Solution. Secure management access: role-based access control for CLI, SNMP, and web access; secure management protocols (SSH, SFTP, and SNMPv3). Secure switch control protocols: FC-SP (DH-CHAP). RADIUS and TACACS+ AAA for user, switch, and iSCSI host authentication. [Diagram: device/SAN management secured via SSH, SFTP, SNMPv3, and user roles; FC-SP protocol security; VSANs providing secure isolation; a RADIUS or TACACS+ server for authentication; iSCSI-attached servers; hardware-based zoning by port and WWN in front of shared physical storage.]

Intelligent Fabric Applications

Intelligent Storage Applications, Delivered as a Transparent Fabric Service. Extend storage services to any device in the SAN: SME (data encryption), DMM (data migration), and SANTap (data replication), hosted on MSM-18/4 modules. Transparent to applications, with non-disruptive deployment: no SAN reconfiguration and no rewiring to insert appliances. Highly scalable performance with automatic load balancing; a reliable, highly available service with wizard-based provisioning.

Cisco Storage Media Encryption (SME)
- Encrypts storage media (data at rest); the slide illustrates cleartext records leaving the application server and ciphertext being written to the array
- IEEE-compliant AES-256 encryption
- Integrated as a transparent fabric service
- Supports heterogeneous storage arrays, tape devices, and VTLs
- Compresses tape data
- Offers secure, comprehensive key management (Key Management Center)
- Allows offline media recovery
- Built upon a FIPS Level 3 system architecture

Cisco Data Mobility Manager (DMM)
- Migrates data between storage arrays for technology refreshes, workload balancing, and storage consolidation
- DMM offers:
  - Online migration between heterogeneous arrays
  - Simultaneous migration of multiple LUNs
  - Unequal-size LUN migration
  - Rate-adjusted migration
  - Verification of migrated data
  - Secure erase
  - Dual-fabric support
  - CLI and wizard-based management with Cisco Fabric Manager
- Requires no SAN reconfiguration or rewiring
- Utilizes the Multiservice Module (MSM)

Network-Assisted Storage Applications: SANTap
- Enables appliance-based storage applications without compromising SAN integrity
- About SANTap:
  - The MDS delivers a copy of primary I/O to an appliance
  - The appliance provides the storage application, e.g. Continuous Data Protection (CDP) or replication
- Key customer benefits:
  - Preserves the integrity, availability, and performance of primary I/O
  - No service disruption
  - Investment protection

Interoperability Design Considerations

Standard Fibre Channel Interoperability
- Switch interoperability is available between Cisco MDS platforms and non-Cisco standards-compliant switches
- Provides a way to redeploy smaller edge switches
- Both McDATA and Brocade switches must be in interoperability mode; on both products this costs some functionality:
  - No trunking
  - No port-based zoning
  - No full zone-set exchanges
  - Restricted number of domains and domain IDs
- Fabric timer values (R_A_TOV, E_D_TOV, D_S_TOV) must be chosen from the ranges supported by all vendors and must be the same fabric-wide
- Interoperability mode must also be enabled on the MDS 9000
(Table: supported timer ranges for Cisco MDS, Brocade v2.4.1 and v3.0.1a, and McDATA v04.01.00 switches.)
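On the MDS, the fabric-wide timers are set with the `fctimer` commands; a minimal sketch (the values shown are the common MDS defaults in milliseconds and are illustrative, not a recommendation — they must match whatever the rest of the fabric uses):

```
! Fabric timers, in milliseconds (config mode)
fctimer R_A_TOV 10000
fctimer E_D_TOV 2000
```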

Third-Party Switch Native Mode
- Enables the MDS 9000 family to interoperate with legacy Brocade and McDATA fabric switches running in their native mode
- Reuse existing legacy fabric switches as edge devices
- No impairment to the Cisco fabric: all advanced services remain available
- No change required on the legacy switches; simply connect them
- Three additional modes: interop mode 2, interop mode 3, interop mode 4
- Configurable on a VSAN-by-VSAN basis on the MDS 9000 (e.g. VSAN 50 to legacy Brocade switches, VSAN 40 to legacy McDATA switches)

MDS Operating Modes
- MDS native mode (the default mode of every VSAN)
- The MDS also has four interoperability modes:
  - Interop mode 1: all vendors' switches must be in their respective interop modes
  - Interop mode 2: Brocade switches operating with Core PID 0
  - Interop mode 3: Brocade switches operating with Core PID 1
  - Interop mode 4: McDATA switches
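Because the interop mode is a per-VSAN attribute, different VSANs on the same switch can run different modes. A hedged sketch of the MDS configuration (VSAN numbers are placeholders; syntax varies slightly by release):

```
! Select an interop mode per VSAN in the VSAN database
vsan database
  vsan 10 interop 1
  vsan 20 interop 4
```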

Native MDS Mode
- No interop settings required on the MDS
- MDS native mode is fully standards-compliant (FC-SW-2, FC-SW-3, and FC-SW-4)
- Advanced features (trunking, PortChannels, and VSANs) are not supported on third-party switches or on MDS ports connected to third-party switches
- Qlogic and Inrange switches do not need specific interop settings
- Only the active zone set is distributed to other switches
- Ensures smooth integration of blade servers with embedded Qlogic Fibre Channel interconnects

Interoperability Mode 1
- Interoperability mode is VSAN-specific
- Enables the MDS 9000 to interoperate with Brocade and McDATA switches configured for interoperability, and with Qlogic and Inrange switches in native mode
- Domain IDs are restricted to the 97 to 127 range
- The MDS 9000 can still operate with full functionality in other, non-interop-mode VSANs
- Only the active zone set is distributed to other switches
- Advanced features (trunking, PortChannels, and VSANs) are not supported on third-party switches or on MDS ports connected to third-party switches
- Switching from native mode to interop mode is disruptive for both Brocade and McDATA
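Since interop mode 1 restricts domain IDs to 97-127, a common practice is to pin each switch's domain ID statically inside that range. A hedged sketch (domain and VSAN numbers are placeholders; `fcdomain restart` takes effect per VSAN):

```
! Pin the local domain ID inside the interop-1 range for VSAN 10
fcdomain domain 100 static vsan 10
fcdomain restart vsan 10
```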

Interoperability Mode 1 Caveats
- Only 31 domain IDs are available (97 to 127)
- Brocade features lost in interop mode 1: port zoning, trunking (PortChannels), QuickLoop, Fabric Assist, Secure Fabric OS, VC (virtual channel) flow control
- McDATA features lost in standard interop mode: port zoning, Open Trunking (PortChannels)

Brocade Legacy Interoperability Modes
- Brocade Core PID (Port IDentifier): Brocade originally supported only switches with a maximum of 16 ports
- The first nibble of the area ID in the Fibre Channel identifier was hard-set to 1: FCIDs had the format XX1YZZ, where Y is a hexadecimal digit specifying a particular port on the switch
- To accommodate larger port counts, a new PID format was adopted: FCIDs have the format XXYYZZ, where the byte YY identifies a specific port
- The old format is named Core PID 0 (or "off"): only one nibble identifies the port, and the other nibble is always set to 1
- The new format is named Core PID 1 (or "on"): the full area-ID byte identifies the port
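The two PID formats differ only in how the middle (area) byte of the 24-bit FCID is built; a small illustrative sketch (the domain and port values are arbitrary):

```python
def fcid(domain: int, area: int, port: int) -> int:
    """Pack a 24-bit Fibre Channel ID as domain/area/port bytes (0xDDAAPP)."""
    return (domain << 16) | (area << 8) | port

def core_pid0_area(port: int) -> int:
    """Legacy Core PID 0: the high nibble of the area byte is fixed at 1 and
    the low nibble is the port number, so only 16 ports fit per switch."""
    assert 0 <= port < 16
    return 0x10 | port

# Core PID 0: port 5 on domain 0x61 -> FCID 0x611500 (the "XX1YZZ" shape)
print(hex(fcid(0x61, core_pid0_area(5), 0x00)))
# Core PID 1: a full area byte (here 0x45) names the port (the "XXYYZZ" shape)
print(hex(fcid(0x61, 0x45, 0x00)))
```

The 16-port ceiling of Core PID 0 is visible directly in the assertion: one nibble simply cannot address more ports.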

Interoperability Mode 2
- Enables the MDS 9000 to interoperate with Brocade switches in native mode with Core PID 0
- Some configuration changes may be needed on the Brocade switches (disabling VC_RDY flow control)
- No need to disable the Brocade switches: no traffic disruption or outage is required when adding MDS 9000 switches to an existing Brocade fabric
- Enables reuse of existing legacy fabric switches (Brocade 2100, 2400, 2800, and 3800) as edge devices

Interoperability Mode 3
- A Brocade-native, not standards-based, interop mode
- Enables the MDS 9000 to interoperate with Brocade switches in native mode with Core PID 1 (i.e. Brocade switches with more than 16 ports, such as the 3900)
- Some configuration changes may be needed on the Brocade switches (disabling VC_RDY flow control); no need to disable the Brocade switches

Interoperability Modes 2 and 3 Caveats
- The MDS 9000 still operates with full functionality when connected to another MDS switch, regardless of the interop mode
- Interop mode affects only the configured VSAN; all other VSANs are unaffected
- Advanced features (trunking, PortChannels, and VSANs) are not supported on third-party switches or on MDS ports connected to third-party switches
- MDS 9000 TE ports can carry VSANs running any or all interop modes, along with MDS native mode, simultaneously
- Zone sets can be activated from any Brocade or any MDS 9000 switch within the fabric

Interoperability Modes 2 and 3 Caveats (Continued)
- Zoning is supported across the fabric:
  - Members attached to the MDS must be zoned by pWWN or alias
  - Members attached to Brocade can be zoned by alias, pWWN, or domain-and-port
  - Alias information is not distributed
- Brocade Web Tools cannot see MDS information
- Zoning can be configured from either the MDS or the Brocade
- VC (virtual channel) flow control is not supported between MDS and Brocade
- QuickLoop can be used in the fabric, but the MDS cannot be part of a QuickLoop
- Fabric Assist zones are not supported on nodes connected to the MDS
- Secure Fabric OS is not supported in an interop fabric
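In practice, zoning in a mixed fabric is safest with pWWN members everywhere. A sketch of MDS zoning by pWWN (the WWNs and names below are made up):

```
! Zone by pWWN only (placeholder WWNs)
zone name Z_host1_array1 vsan 10
  member pwwn 21:00:00:e0:8b:01:02:03
  member pwwn 50:06:01:60:aa:bb:cc:dd
zoneset name ZS_prod vsan 10
  member Z_host1_array1
zoneset activate name ZS_prod vsan 10
```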

Interoperability Mode 4
- Supports all McDATA platforms operating in McDATA Fabric Mode (introduced with SAN-OS 3.0.1)
- MDS switches can be added to a McDATA fabric with no disruption
- Allowed domain range is 1-31; the domain identifier in allocated FCIDs equals the base domain ID + 96
- This mode requires setting the VSAN WWN
- Only pWWN and domain-and-port zones are supported
- The default zone policy is distributed in the fabric
- IVR-allocated virtual domains are always in the 1-31 range, but the FCID domains start from 97-127
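The domain-ID offset described above can be expressed directly; a small sketch of the 1-31 to 97-127 mapping:

```python
def interop4_fcid_domain(base_domain: int) -> int:
    """In interop mode 4 the domain byte seen in FCIDs is the configured
    McDATA domain ID (1-31) plus an offset of 96, yielding 97-127."""
    if not 1 <= base_domain <= 31:
        raise ValueError("McDATA Fabric Mode allows domain IDs 1-31")
    return base_domain + 96

print(interop4_fcid_domain(1), interop4_fcid_domain(31))  # 97 127
```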

Summary
- The various interoperability modes on the MDS 9000 allow building mixed fabrics with Brocade, McDATA, Qlogic, and Inrange switches, in both native and interop modes:
  - Interop mode 1: all vendors' switches in their respective interop modes
  - Interop mode 2: Brocade switches operating with Core PID 0
  - Interop mode 3: Brocade switches operating with Core PID 1
  - Interop mode 4: McDATA switches
- NPV mode reduces interoperability concerns between MDS switches and 3rd-party SAN switches

Closing Remarks

Closing Remarks
- SAN design:
  - Simple SAN design: small port count, dual fabrics, minimal SAN-OS feature use
  - Scalable SAN design: core/edge, top-of-rack, collapsed core/edge
- Design optimization: blocking, traffic management, security
- A core-director SAN with blade servers and edge switches can leverage a ToR design and the NPV feature for simplified management
- MDS interoperability with 3rd-party SAN switches

BRKSAN-1032: Design and Implementation of FICON Networks
BRKSAN-2047: FCoE Design, Operations and Best Practices
BRKSAN-2704: SAN Extension Design and Operation
BRKSAN-2892: Implementing Security for SANs
BRKSAN-3707: Advanced SAN Services
BRKDCT-1044: Intro to FCoE

Additional Information
- Cisco Storage Networking: http://www.cisco.com/go/storagenetworking
- Cisco Data Center Networking: http://www.cisco.com/go/datacenter
- Storage Networking Industry Association (SNIA): http://www.snia.org
- Internet Engineering Task Force IP Storage: http://www.ietf.org/html.charters/ips-charter.html
- ANSI T11 Fibre Channel: http://www.t11.org/index.htm

Recommended Reading
- Continue your Cisco Live learning experience with further reading from Cisco Press
- Check the Recommended Reading flyer for suggested books
- Available onsite at the Cisco Company Store

Complete Your Online Session Evaluation
- Give us your feedback and you could win fabulous prizes; winners are announced daily
- Receive 20 Cisco Preferred Access points for each session evaluation you complete
- Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center
- Don't forget to activate your Cisco Live and Networkers Virtual account for access to all session materials, communities, and on-demand and live activities throughout the year; activate your account at any Internet station or visit www.ciscolivevirtual.com