Fibre Channel Networking for the IP Network Engineer and SAN Core Edge Design Best Practices Chad Hintz Technical Solutions Architect BRKVIR-1121


Session BRKVIR-1121 Abstract Fibre Channel Networking for the IP Network Engineer and SAN Core-Edge Design Best Practices This session gives non-storage-networking professionals the fundamentals to understand and implement storage area networks (SANs). The curriculum is intended to prepare attendees for involvement in SAN projects and the I/O consolidation of Ethernet and Fibre Channel networking. You will be introduced to storage networking terminology and designs. Specific topics covered include Fibre Channel (FC), FCoE, FC services, FC addressing, fabric routing, zoning, and virtual SANs (VSANs). The session includes discussions on designing core-edge Fibre Channel networks and the best-practice recommendations around them. This is an introductory session, and attendees are encouraged to follow up with other SAN breakout sessions and labs to learn more about specific advanced topics.

Goals of This Session

Session Agenda History of Storage Area Networks Fibre Channel Technology Designing Fibre Channel Networks Storage Topologies Introduction to NPIV/NPV Core-Edge Design Review Recommended Core-Edge Designs for Scale and Availability Fibre Channel over Ethernet Next Generation Core-Edge Designs

History of Storage Area Networks

The #1 Rule It starts with the relationship...

Not all storage is local Must maintain that 1:1 relationship, especially if there's a network in-between All storage networking is based on this fundamental principle

SCSI When things go wrong... Must maintain that 1:1 relationship, especially if there's a network in-between Otherwise, you get data loss Very Bad Things happen

This Is Why Storage People Are Paranoid Losing data means: Losing job Running afoul of government regulations Possibly SEC filings Lost revenue

Direct Attached Storage (DAS) Hosts directly access storage as block-level devices As hosts need more disk, externally connected storage gets added Storage is captive behind the server, with limited mobility Limited scalability due to the limited number of devices No efficient storage sharing possible Costly to scale; complex to manage

Network Attached Storage (NAS) Clients connect over an Ethernet switch to a network-capable storage server that serves files to other devices on the network Called "file-based" storage SMB: successor of the CIFS protocol, with enhanced functionality (Microsoft) NFS: Network File System, TCP or UDP based Different from DAS, which uses block-based storage (a block may only contain part of a single file) Almost all NFS clients perform local caching of both file and directory data

Storage Area Networks Extending the SCSI relationship across a network Same SCSI protocol carried over a network transport via a serial implementation Transport must not jeopardize the SCSI payload (security, integrity, latency) Two primary transports to choose from today: IP and Fibre Channel Fibre Channel provides high-speed transport for the SCSI payload via a Host Bus Adapter (HBA) connected to an FC switch Fibre Channel overcomes many shortcomings of parallel I/O and combines the best attributes of a channel and a network

Storage Area Networks iSCSI A SCSI transport protocol that operates on top of TCP Encapsulates SCSI commands and data into TCP/IP byte streams (Figure: the iSCSI initiator stack: applications, file system, SCSI generic/block device, iSCSI, TCP/IP stack, NIC driver and NIC adapter, reaching storage through an Ethernet switch)

SAN Protocols for Block I/O Fibre Channel A gigabit-speed network technology primarily used for storage networking Fibre Channel over Ethernet (FCoE) An encapsulation of Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use 10 Gigabit Ethernet networks while preserving the Fibre Channel protocol iSCSI A TCP/IP-based protocol for establishing and managing connections between IP-based storage devices, hosts and clients FCIP Provides a standard way of encapsulating FC frames within TCP/IP, allowing islands of FC SANs to be interconnected over an IP-based network

Network Stack Comparison (Figure: the protocol stacks from SCSI down to the physical wire) Native FC: SCSI / FCP / FC; iSCSI: SCSI / iSCSI / TCP / IP / Ethernet; FCIP: SCSI / FCP / FC / FCIP / TCP / IP / Ethernet; FCoE: SCSI / FCP / FC / FCoE / Ethernet

Session Agenda History of Storage Area Networks Fibre Channel Technology Designing Fibre Channel Networks Storage Topologies Introduction to NPIV/NPV Core-Edge Design Review Recommended Core-Edge Designs for Scale and Availability Fibre Channel over Ethernet Next Generation Core-Edge Designs

Fibre Channel Technology

Fibre Channel Basic Building Blocks Components Naming Conventions Connection Fundamentals

Fibre Channel SAN Components Servers with host bus adapters Storage systems: RAID, JBOD, tape Switches SAN management software

Types of MDS/Nexus DC Switches SAN LAN LAN/SAN Cisco MDS 9100 Series Cisco MDS 9200 Series Cisco MDS 9500,9700 Series Cisco Nexus 3000 Cisco Nexus 1010 Cisco Nexus 2000 Cisco Nexus 1000V Cisco Nexus 4000 Cisco Nexus 5500/ 5600 Cisco Nexus 6000 Cisco Nexus 7000 CISCO NX-OS: From Hypervisor to Core CISCO DCNM: Single Pane of Management

FC Switch Port Types "N" for Node Port Host or target port facing a switch N_Port Node N_Port Node N_Port Node

FC Switch Port Types "N" for Node Port Host or target port facing a switch "F" for Fabric Switch port facing the end Nodes F_Port N_Port Node F_Port N_Port Node F_Port N_Port Node

FC Switch Port Types "N" for Node Port Host or target port facing a switch "F" for Fabric Switch port facing the end Nodes "E" for Expansion Switch port facing another switch F_Port N_Port Node F_Port N_Port Node FC Switch E_Port E_Port F_Port N_Port Node

FC Switch Port Types "N" for Node Port Host or target port facing a switch "F" for Fabric Switch port facing the end Nodes "E" for Expansion Switch port facing another switch These names are very important! F_Port N_Port Node (we will be talking about them a lot) F_Port N_Port Node FC Switch E_Port E_Port F_Port N_Port Node

Fabric Name and Addressing, Part 1: WWN Like a MAC address Every Fibre Channel port and node has a hard-coded address called a World Wide Name (WWN) During FLOGI the switch identifies the WWN in the service parameters of the accept frame and assigns a Fibre Channel ID (FCID) The switch Name Server maps WWNs to FCIDs The WWNN uniquely identifies a device; the WWPN uniquely identifies each port in a device

Fibre Channel Addressing, Part 2: Domain ID The domain ID is native to a single FC switch There is a limitation on the number of domain IDs in a single fabric Forwarding decisions are made on the domain ID found in the first 8 bits of the FCID Switch Topology Model: Switch Domain | Area | Device (Figure: two switches in the FC fabric, Domain ID 10 and Domain ID 11, with attached devices FCID 10.00.01 and FCID 11.00.01)

Start from the Beginning Start with the host and a target that need to communicate Host has 2 HBAs (one per fabric) each with a WWN Target has multiple ports to connect to fabric Connect to a FC Switch Port Type Negotiation Speed Negotiation FC Switch is part of the SAN fabric Most commonly, dual fabrics are deployed for redundancy FABRIC A Core Edge HBA Initiator Target FC

FLOGI My Port Is Up Can I Talk Now? FLOGIs/PLOGIs Step 1: Fabric Login (FLOGI) Determines the presence or absence of a Fabric Exchanges Service Parameters with the Fabric The switch identifies the WWN in the service parameters of the accept frame and assigns a Fibre Channel ID (FCID) Initializes the buffer-to-buffer credits Step 2: Port Login (PLOGI) Required between nodes that want to communicate Similar to FLOGI, but transports a PLOGI frame to the destination node port In a point-to-point topology (no fabric present), initializes buffer-to-buffer credits (Figure: initiator HBA N_Port to switch F_Port, E_Ports between switches, target on the FC fabric)

Buffer-to-Buffer Credits Fibre Channel Flow Control B2B credits are used to ensure that the FC transport is lossless The number of credits is negotiated between ports when the link is brought up The credit count is decremented with each packet placed on the wire, independent of packet size If the credit count reaches 0, no more packets are transmitted The credit count is incremented with each R_RDY received from the other end of the link B2B credits need to be taken into consideration as distance and/or bandwidth increases (Figure: a host and a Fibre Channel switch exchanging packets and R_RDYs against a pool of 16 credits)
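A rough sizing sketch (the exact figures depend on frame size and link speed, so treat the numbers as illustrative): a full-size FC frame of roughly 2 KB takes about 2.5 microseconds to serialize at 8 Gbps, and light in fiber adds roughly 5 microseconds of latency per kilometre in each direction. A 10 km ISL therefore has a round-trip time of about 100 microseconds, so keeping the link full requires on the order of 100 / 2.5 = 40 buffer-to-buffer credits, in line with the common rule of thumb of roughly one credit per kilometre for every 2 Gbps of link speed.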

Fibre Channel ID (FCID) Like an IP address Fibre Channel addressing scheme An FCID is assigned to every WWPN corresponding to an N_Port The FCID is made up of switch domain, area and device The domain ID is native to a single FC switch, and there is a limitation on the number of domain IDs in a single fabric Forwarding decisions are made on the domain ID found in the first 8 bits of the FCID Switch Topology Model: Switch Domain | Area | Device
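As a worked example using the addresses shown earlier: the 24-bit FCID 0x10.00.01 breaks down into one byte per field, giving domain 0x10 (the switch), area 0x00 (typically a port or group of ports on that switch) and device/port 0x01.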

Session Agenda History of Storage Area Networks Fibre Channel Technology Designing Fibre Channel Networks Storage Topologies Introduction to NPIV/NPV Core-Edge Design Review Recommended Core-Edge Designs for Scale and Availability Fibre Channel over Ethernet Next Generation Core-Edge Designs

Designing Fibre Channel Networks

Storage Criteria Themes Fibre Channel Fabric Name Server Frame Forwarding Security/Segmentation Load Balancing Bandwidth Considerations HA and Reliability

Fabric Shortest Path First (like OSPF) Fibre Channel Forwarding FSPF routes traffic based on the destination domain ID found in the destination FCID For FSPF a domain ID identifies a single switch The number of domain IDs is limited to 239 (theoretical) / 75 (tested and qualified) within the same fabric (VSAN) FSPF performs hop-by-hop routing FSPF uses total cost as the metric to determine the most efficient path FSPF supports equal-cost load balancing across links

FC vs. IP Routing (Figure: forwarding within FC Fabric A)

Directory Server/Name Server (Like DNS) Repository of information regarding the components that make up the Fibre Channel network Located at address FF FF FC (some readings call this the name server) Components can register their characteristics with the directory server An N_Port can query the directory server for specific information Query can be the address identifier, WWN and volume names for all SCSI targets show fcns database -------------------------------------------------------------------------- FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE -------------------------------------------------------------------------- 0xc70000 N 20:47:00:0d:ec:f0:85:40 (Cisco) npv 0xc70001 N 20:fa:00:25:b5:aa:00:01 scsi-fcp:init fc-gs 0xc70002 N 20:fa:00:25:b5:aa:00:00 scsi-fcp:init fc-gs 0xc70008 N 20:fe:00:25:b5:aa:00:01 scsi-fcp:init fc-gs 0xc7000f N 20:fe:00:25:b5:aa:00:00 scsi-fcp:init fc-gs 0xc70010 N 20:fe:00:25:b5:aa:00:02 scsi-fcp:init fc-gs 0xc70029 N 20:00:00:25:b5:20:00:0e scsi-fcp:init fc-gs 0xc70100 N 50:06:0b:00:00:65:e2:0e (HP) scsi-fcp:target 0xc70200 N 20:20:54:7f:ee:2a:14:40 (Cisco) npv 0xc70201 N 20:00:54:78:1a:86:c6:65 scsi-fcp:init fc-gs

Login Complete Almost There Fabric Zoning Zones are the basic form of data path security Zone members can only see and talk to other members of the zone Devices can be members of more than one zone Default zoning is deny Zones belong to a zoneset Zoneset must be active to enforce zoning Only one active zoneset per fabric or per VSAN FC Fabric FC Target pwwn 50:06:01:61:3c:e0:1a:f6 Initiator pwwn 10:00:00:00:c9:76:fd:31 zone name EXAMPLE_Zone fcid 0x10.00.01 [pwwn 10:00:00:00:c9:76:fd:31] [Initiator] fcid 0x11.00.01 [pwwn 50:06:01:61:3c:e0:1a:f6] [target]
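A minimal zoning configuration sketch in MDS/Nexus NX-OS, using the pWWNs from the example above (the VSAN number and zone/zoneset names are illustrative):
zone name EXAMPLE_Zone vsan 10
  member pwwn 10:00:00:00:c9:76:fd:31
  member pwwn 50:06:01:61:3c:e0:1a:f6
zoneset name EXAMPLE_ZS vsan 10
  member EXAMPLE_Zone
zoneset activate name EXAMPLE_ZS vsan 10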

SAN Islands Production SAN Tape SAN Test SAN SAN A DomainID=1 SAN B DomainID=2 SAN C DomainID=3 SAN D DomainID=4 SAN E DomainID=5 SAN F Domain ID=6 DomainID=7 DomainID=8

Virtual SANs (VSANs)-Where have I heard this before? Analogous to VLANs in Ethernet Virtual fabrics created from larger cost-effective redundant physical fabric Reduces wasted ports of a SAN island approach Fabric events are isolated per VSAN which gives further isolation for High Availability Statistics can be gathered per VSAN Each VSAN provides Separate Fabric Services FSPF, Zones/Zoneset, DNS, RSCN VSANs supported on MDS and Nexus Product lines Physical SAN Islands Are Virtualized onto Common SAN Infrastructure

SAN Islands with Virtual SANs Production SAN Tape SAN Test SAN SAN A DomainID=1 SAN B DomainID=2 SAN C DomainID=3 SAN D DomainID=4 SAN E DomainID=5 SAN F Domain ID=6

VSANs and Zones Are Complementary Virtual SANs and Fabric Zoning Are Very Complementary Hierarchical relationship: first assign physical ports to VSANs, then configure independent zones per VSAN VSANs divide the physical infrastructure Zones provide added security and allow sharing of device ports VSANs provide traffic statistics VSANs are only changed when ports are needed per virtual fabric Zones can change frequently (e.g. backup) Relationship of VSANs to Zones (Figure: a physical topology divided into VSAN 2 with Zones A, B and C and VSAN 3 with Zones A and D, spanning hosts 1-4 and disks 1-6) Ports are added/removed non-disruptively to VSANs
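A minimal NX-OS sketch of creating VSANs and assigning ports to them (VSAN numbers, names and interfaces are illustrative):
vsan database
  vsan 2 name Production
  vsan 3 name Test
  vsan 2 interface fc1/1
  vsan 3 interface fc1/2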

Over-Subscription FAN-OUT Ratio Over-subscription (or fan-out) ratio for sizing ports and links Factors used Speed of Host HBA interfaces Speed of Array interfaces Type of server and application Storage vendors provide guidance in the process Ratios range between 4:1-20:1 3 x 8G ISL ports 6 x 4G Array ports Example: 10:1 O/S ratio 60 Servers with 4 Gb HBAs FC 240 G 24 G 24 G
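Working through the example in the figure: 60 servers with 4 Gb HBAs represent 60 x 4 = 240 Gb of potential host bandwidth, while the 6 x 4 G array ports provide 24 Gb, giving the 10:1 oversubscription (fan-out) ratio; the 3 x 8 G ISLs are sized to the same 24 Gb so that the inter-switch links do not become the bottleneck.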

Inter-Switch Link PortChanneling A PortChannel Is a Logical Bundling of Identical Links Criteria for forming a PortChannel Same speed links Same modes (auto, E, etc.) and states Between same two switches Same VSAN membership Treated as one logical ISL by upper layer protocols (FSPF) Can use up to 16 links in a PortChannel Can be formed from any ports on any modules HA enabled Exchange-based in-order load balancing Mode one: based on src/dst FC_IDs Mode two: based on src/dst FC_ID/OX_ID E.g., 4-Gbps PortChannel (Two x 2 Gbps) E.g., 8-Gbps PortChannel (Four x 2 Gbps) Much faster recovery than FSPF-based balancing Given logical interface name with aggregated bandwidth and derived routing metric

PortChannel vs. Trunking ISL = inter-switch link PortChannel = E_Ports and ISLs Trunk = ISLs that support VSANs Trunking = TE_Ports and EISLs
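A hedged NX-OS sketch of an ISL PortChannel that trunks multiple VSANs between two switches (interface and VSAN numbers are illustrative):
interface port-channel 10
  switchport mode E
  switchport trunk allowed vsan 10-20
interface fc1/1 - 2
  switchport mode E
  channel-group 10 force
  no shutdown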

Session Agenda History of Storage Area Networks Fibre Channel Technology Designing Fibre Channel Networks Storage Topologies Introduction to NPIV/NPV Core-Edge Design Review Recommended Core-Edge Designs for Scale and Availability Fibre Channel over Ethernet Next Generation Core-Edge Designs

Storage Topologies

SAN Design Key Requirements High Availability - Providing a Dual Fabric (current best practice) Meeting oversubscription ratios established by disk vendors Effective zoning Providing Business Function/Operating System Fabric Segmentation and Security Fabric scalability (FLOGI and domain-id scaling) Providing connectivity for virtualized servers Providing connectivity for diverse server placement and form factors

The Design Requirements Classical Fibre Channel Fibre Channel SAN Transport and services are on the same layer in the same devices Well-defined end device relationships (initiators and targets) Does not tolerate packet drop: requires lossless transport Only north-south traffic; east-west traffic mostly irrelevant Network designs optimized for scale and availability High availability of network services provided through the dual fabric architecture Edge/Core vs Edge/Core/Edge service deployment Fabric topology, services and traffic flows are structured Client/server relationships are pre-defined (Figure: initiators and targets attached to switches that each run the fabric services FSPF, Zone, DNS and RSCN)

SAN Design Single Tier Topology Collapsed Core Design Servers connect to the core switches Storage devices connect to one or more core switches Core switches provide storage services A large number of line cards (blades) is needed to support initiator (host) and target (storage) ports Single point of management per fabric Normal for small SAN environments HA achieved with two physically separate, but identical, redundant SAN fabrics

How Do We Avoid This?

SAN Design Two Tier Topology Core-Edge Topology- Most Common Servers connect to the edge switches Storage devices connect to one or more core switches Core switches provide storage services to one or more edge switches, thus servicing more servers in the fabric ISLs have to be designed so that overall fan-in ratio of servers to storage and overall end-to-end oversubscription are maintained HA achieved in two physically separate, but identical, redundant SAN fabrics Core Edge FC Core Edge

SAN Design Three Tier Topology Edge-Core-Edge Topology Servers connect to the edge switches Storage devices connect to one or more edge switches Core switches provide storage services to one or more edge switches, thus servicing more servers and storage in the fabric ISLs have to be designed so that overall fan-in ratio of servers to storage and overall end-to-end oversubscription are maintained HA achieved in two physically separate, but identical, redundant SAN fabrics Edge Core Edge FC Edge Core Edge

Session Agenda History of Storage Area Networks Fibre Channel Technology Designing Fibre Channel Networks Storage Topologies Introduction to NPIV/NPV Core-Edge Design Review Recommended Core-Edge Designs for Scale and Availability Fibre Channel over Ethernet Next Generation Core-Edge Designs

Introduction to NPIV/NPV

What Is NPIV? and Why? N-Port ID Virtualization (NPIV) provides a means to assign multiple FCIDs to a single N_Port A limitation exists in FC where only a single FCID can be handed out per F_Port; therefore an F_Port can only accept a single FLOGI Allows multiple applications to share the same Fibre Channel adapter port Usage applies to applications such as virtualization Note: you may hear the switch referred to as the NPIV upstream (core) switch (Figure: an application server N_Port carries Email, Web and File Services I/O with N_Port_IDs 1, 2 and 3 through the F_Port of the NPIV core switch)
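On an MDS or Nexus core switch, NPIV is simply a feature to enable (a minimal sketch):
feature npiv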

What Is NPV? and Why? N-Port Virtualizer (NPV) utilizes NPIV functionality to allow a switch to act like a server, performing multiple logins through a single physical link Physical servers connected to the NPV switch log in to the upstream NPIV core switch No local switching is done on an FC switch in NPV mode An FC edge switch in NPV mode does not take up a domain ID Helps to alleviate domain ID exhaustion in large fabrics (Figure: Server1, Server2 and Server3 on F-Ports fc1/1-fc1/3 of the NPV switch receive N_Port_IDs 1-3 through the NP-Port uplink into the F_Port of the NPIV core switch)
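A minimal sketch of the edge side (interface numbers are illustrative; note that switching a production switch into NPV mode is disruptive, as the configuration is erased and the switch reloads):
feature npv
interface fc1/1
  switchport mode NP
  no shutdown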

NPV Auto Load Balancing Uniform balancing of server loads on NP links Server loads are not tied to any uplink Benefit Optimal uplink bandwidth utilization Blade Server Chassis Blade 4 Blade 3 Blade 2 Blade 1 Balanced load on NP links 1 3 2 4 SAN

NPV Auto Load Balancing Automatically moves the failed servers to other available NP links Servers are forced to re-login and experience a short traffic disruption Benefit: downtime greatly reduced Automatic failover of loads on NP links Disrupted servers re-login on another uplink (Figure: blades 1-4 in a blade server chassis; when one NP link fails, the affected logins move to the remaining uplinks to the SAN)

BladeSystem F-Port Port Channel F-Port PortChannels bundle multiple ports into one logical link Similar to ISL PortChannels in FC and EtherChannels in Ethernet Benefits High availability: no disruption if a cable, port, or line card fails Optimal bandwidth utilization and higher aggregate bandwidth with load balancing Enhances NPV uplink resiliency (Figure: blades connect through the NPV switch's NP-Port to an F-Port PortChannel on the core director and on to SAN storage)
interface port-channel 1
  no shut
interface fc1/1
  channel-group 1
interface fc1/2
  channel-group 1

NPV F-Port Port Channel Link failures do not affect the server connectivity No application disruption No traffic disruption Blade 1 Blade Server Chassis Blade 2 Blade 3 Blade 4 X SAN

Blade System F-Port Trunking F-Port Trunking Uplinks carry multiple VSANs Benefits Extend VSAN benefits to blade servers Separate management domains Traffic isolation and the ability to host differentiated services on blades (Figure: the NPV blade switch NP-Port trunks VSANs 1-3 over an F-Port channel to the core director and SAN storage) Extend VSAN Benefits to Blades
interface fc1/1
  switchport trunk mode on
  switchport trunk allowed vsan 1-3
interface port-channel 1
  switchport trunk mode on
  switchport trunk allowed vsan 1-3
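On the NPIV core (MDS) side, trunking and channeling of F-Ports toward an NPV device also require the corresponding features to be enabled first; a minimal, hedged sketch:
feature npiv
feature fport-channel-trunk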

Session Agenda History of Storage Area Networks Fibre Channel Technology Designing Fibre Channel Networks Storage Topologies Introduction to NPIV/NPV Core-Edge Design Review Recommended Core-Edge Designs for Scale and Availability Fibre Channel over Ethernet Next Generation Core-Edge Designs

Core-Edge Design Review

SAN Design Two Tier Topology Edge-Core Topology- Most Common Servers connect to the edge switches Storage devices connect to one or more core switches Core switches provide storage services to one or more edge switches, thus servicing more servers in the fabric ISLs have to be designed so that overall fan-in ratio of servers to storage and overall end-to-end oversubscription are maintained HA achieved in two physically separate, but identical, redundant SAN fabrics Core Edge FC Core Edge

Blade Switch Explosion Issues Scalability Each blade switch uses a single domain ID Theoretical maximum number of domain IDs is 239 per VSAN The supported number of domains is considerably smaller (depends on the OSM): EMC: 80 domains; Cisco tested: 90 Manageability More switches to manage Shared management of blade switches between storage and server administrators

Server Consolidation with Top-of-Rack Fabric Switches Top-of-Rack design: 14 racks with 32 dual-attached servers per rack (32 host ports at 4 G per rack), MDS 91XX edge switches with 2x ISL to the core at 10 G, 96 storage ports at 2 G and 28 ISLs to the edge at 10 G, in dual fabrics A and B Ports deployed: 1200 Storage ports (4 G dedicated): 192 Host ports (4 G shared): 896 Disk oversubscription (ports): 9.3 : 1 Number of FC switches in the fabric: 30

Server Consolidation with Blade Servers Blade server design using 2 x 4 G ISLs per blade switch (fewer cables, less power): five racks with 96 dual-attached blade servers per rack (16 host ports at 4 G per blade switch), 2x ISL to the core at 4 G, 120 storage ports at 2 G and 60 ISLs to the edge at 4 G, in dual fabrics A and B Ports deployed: 1608 Storage ports (4 G dedicated): 240 Host ports (4 G shared): 480 Disk oversubscription (ports): 8 : 1 Number of FC switches in the fabric: 62

Session Agenda History of Storage Area Networks Fibre Channel Technology Designing Fibre Channel Networks Storage Topologies Introduction to NPIV/NPV Core-Edge Design Review Recommended Core-Edge Designs for Scale and Availability Fibre Channel over Ethernet Next Generation Core-Edge Designs

Recommended Core- Edge Designs for Scale and Availability

SAN Design Key Requirements High Availability - Providing a Dual Fabric (current best practice) Fabric scalability (FLOGI and domain-id scaling) Providing connectivity for diverse server placement and form factors Meeting oversubscription ratios established by disk vendors Effective zoning Providing Business Function/Operating System Fabric Segmentation and Security Providing connectivity for virtualized servers

N-Port Virtualizer (NPV) Reduces the Number of FC Domain IDs Top-of-Rack design with the fabric switches (MDS 91xx) in NPV mode behind an NPIV core: 14 racks with 32 dual-attached servers per rack (32 host ports at 4 G per rack), 2 ISLs to the core at 10 G, 96 storage ports at 2 G and 28 ISLs to the edge at 10 G, in dual fabrics A and B Ports deployed: 1200 Storage ports (4 G dedicated): 192 Host ports (4 G shared): 896 Disk oversubscription (ports): 9.3 : 1 Number of FC switches in the fabric: 2

NPV Blade Switch Blade server design using 2 x 4 G ISLs per blade switch (fewer cables, less power) with the blade switches in NPV mode behind an NPIV core (MDS 91xx): five racks with 96 dual-attached blade servers per rack (16 host ports at 4 G per blade switch), 2 ISLs to the core at 4 G, 120 storage ports at 2 G and 60 ISLs to the edge at 4 G, in dual fabrics A and B Ports deployed: 1608 Storage ports (4 G dedicated): 240 Host ports (4 G shared): 480 Disk oversubscription (ports): 8 : 1 Number of fabric switches to manage: 2

BladeSystem F-Port Port Channel Enhance NPV Uplink Resiliency F-Port PortChannels bundle multiple ports into one logical link Similar to ISL PortChannels in FC and EtherChannels in Ethernet Benefits High availability: no disruption if a cable, port, or line card fails Optimal bandwidth utilization and higher aggregate bandwidth with load balancing (Figure: blades connect through an N-Port/F-Port PortChannel to the core director and SAN storage)
interface port-channel 1
  no shut
interface fc1/1
  channel-group 1
interface fc1/2
  channel-group 1

Control and Monitor VMs in the SAN Using NPIV NPIV gives virtual servers a SAN identity Designed for virtual server environments Allows SAN control of VMs: zoning and LUN masking at the VM level Multiple applications on the same port can use different IDs Better utilization of the server connectivity (Figure: a virtual server hosting Email, Web and Print VMs, each with its own virtual pWWN and FCID behind a single N_Port HBA, zoned individually (Zone_Email, Zone_Web, Zone_Print) to LUNs on the storage controller)

Summary of Recommendations High Availability - Provide a Dual Fabric Use of Port-Channels and F-Port Channels with NPIV to provide the bandwidth to meet oversubscription ratios Use NPIV/NPV to provide Domain ID scaling and ease of management Use of host level NPIV and Nested NPV to provide visibility to Virtualized servers

Session Agenda History of Storage Area Networks Fibre Channel Technology Designing Fibre Channel Networks Storage Topologies Introduction to NPIV/NPV Core-Edge Design Review Recommended Core-Edge Designs for Scale and Availability Fibre Channel over Ethernet Next Generation Core-Edge Designs

FCoE

Traditional Data Center Design Ethernet LAN and Fibre Channel SAN Physical and Logical separation of LAN and SAN traffic Ethernet FC Additional Physical and Logical separation of SAN fabrics L3 L2 Fabric A FC Fabric B FC Native Switch NIC HBA

Network Behavior Ethernet is non-deterministic Flow control is destination-based Relies on TCP drop-retransmission / sliding window Fibre-Channel is deterministic Flow control is source-based (B2B credits) Services are fabric integrated (no loop concept)

Fabric vs. Network or Fabric & Network SAN Dual Fabric Design SAN and LAN have different design assumptions SAN leverages a dual fabric with multi-pathing failover initiated by the client LAN leverages a single fully meshed fabric with higher levels of component redundancy (Figure: dual SAN fabrics A and B, each with its own core and edge)

Fibre Channel over Ethernet What Enables It? 10 Gbps Ethernet Lossless Ethernet Matches the lossless behavior guaranteed in FC by B2B credits Ethernet jumbo frames Max FC frame payload = 2112 bytes Frame layout: Ethernet header, FCoE header, FC header, FC payload, CRC, EOF, FCS A normal Ethernet frame with ethertype = FCoE; the encapsulated frame is the same as a physical FC frame The FCoE header carries control information: version, ordered sets (SOF, EOF)

Unified Fabric Why? Fewer CNAs (Converged Network Adapters) instead of NICs, HBAs and HCAs Limited number of interfaces for blade servers (Figure: separate FC HBAs, NICs and an HCA carrying FC, LAN, management, backup and IPC traffic are consolidated onto two CNAs; all traffic goes over 10GE)

Standards for I/O Consolidation FCoE (www.t11.org): Fibre Channel on network media, FC-BB-5, completed June 2009, published by ANSI in May 2010 IEEE 802.1 DCB: PFC, IEEE 802.1Qbb, Priority-based Flow Control; ETS, IEEE 802.1Qaz, Priority Grouping / Enhanced Transmission Selection; DCBX, IEEE 802.1Qaz, Configuration Verification; completed March 2011, forwarded to RevCom for publication

FCoE, Same Model as FC Same host-to-target communication Host has 2 CNAs (one per fabric) Target has multiple ports to connect to the fabric Connect to a DCB-capable switch Port Type Negotiation Speed Negotiation DCBX Negotiation The access switch is a Fibre Channel over Ethernet Forwarder (FCF) in this directly connected model Dual fabrics are still deployed for redundancy (Figure: an ENode's CNA connects to a DCB-capable switch acting as an FCF, which joins the Ethernet fabric and the FC fabric toward the target)

IEEE 802.1Qbb Priority Flow Control PFC enables Flow Control on a Per- Priority basis Therefore, we have the ability to have lossless and lossy priorities at the same time on the same wire Allows FCoE to operate over a lossless priority independent of other priorities Other traffic assigned to other CoS will continue to transmit and rely on upper layer protocols for retransmission Not only for FCoE traffic Ethernet Wire FCoE

Fibre Channel over Ethernet Forwarder (FCF) Fibre Channel over Ethernet Forwarders (FCFs) allow switching of FCoE frames across multiple hops Creates a standards-based FCoE ISL Necessary for multi-hop FCoE Nothing further required (no TRILL or Spanning Tree) E-Ports with FC, VE-Ports with FCoE (Figure: two FCFs connected by an FC ISL over E-Ports and by an FCoE ISL over VE-Ports)

Introducing FCoE Initialization Protocol "FIP" discovers other FCoE capable devices within the lossless Ethernet Network Enables FCoE adapters (CNAs) to discover FCoE switches (FCFs) on the FCoE VLAN Establishes a virtual link between the adapter and FCF or between two FCFs "FIP" uses a different Ethertype from FCoE: allows for "FIP"-Snooping by DCB capable Ethernet bridges Building foundation for future Multitier FCoE topologies

"FIP" - Login Flow Ladder ENode VLAN Discovery FCF Discovery FLOGI/FDISC FCoE Switch VLAN Discovery FCF Discovery FLOGI/FDISC Accept "FIP": FCoE Initialization Protocol FC Command FC Command responses SAN Protocols

FCoE Behind the Scenes Data Center Bridging Exchange (DCBX): DCBX is a protocol that extends the Link Layer Discovery Protocol (LLDP) and is defined in IEEE 802.1Qaz. DCBX allows the FCF to provide Link Layer configuration information to the CNA and allows both the CNA and FCF to exchange status. FCoE Protocols: FCoE The data plane protocol. The FCoE data protocol requires lossless Ethernet, is typically implemented in hardware, and is used to carry the FC frames with SCSI. "FIP" (FCoE Initialization Protocol) Used to discover FCoE-capable devices connected to an Ethernet network and to negotiate capabilities.

Session Agenda History of Storage Area Networks Fibre Channel Technology Designing Fibre Channel Networks Storage Topologies Introduction to NPIV/NPV Core-Edge Design Review Recommended Core-Edge Designs for Scale and Availability Fibre Channel over Ethernet Next Generation Core-Edge Designs

Next Generation Core- Edge Designs

Single Hop FCoE Design Single Hop (Directly Connected) FCoE from the CNA to the FCF, then broken out to native FC A VLAN can be dedicated for every virtual fabric in the SAN "FIP" discovers the FCoE VLAN and signals it to the hosts Trunking is not required on the host driver: all FCoE frames are tagged by the CNA FCoE VLANs can be pruned from Ethernet links that are not designated for FCoE Maintains isolated edge switches for SAN A and B and separate LAN switches for NIC 1 and NIC 2 (standard NIC teaming or link aggregation) (Figure: FCF-A carries VLANs 10,20 for Virtual Fabric 2 and FCF-B carries VLANs 10,30 for Virtual Fabric 3, using NPV from the edge toward SAN Fabric A, SAN Fabric B and the LAN fabric)
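A minimal, hedged NX-OS sketch of dedicating an FCoE VLAN to a VSAN and binding a virtual Fibre Channel interface to a host-facing port (VLAN, VSAN and interface numbers are illustrative):
feature fcoe
vsan database
  vsan 20
vlan 20
  fcoe vsan 20
interface vfc 11
  bind interface ethernet 1/1
  no shutdown
vsan database
  vsan 20 interface vfc 11
interface ethernet 1/1
  switchport mode trunk
  switchport trunk allowed vlan 1,20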

FCoE and vPC Together vPC with FCoE is ONLY supported between hosts and FCF pairs AND it must follow specific rules A vfc interface can only be associated with a single-link port channel bundle While the link aggregation configurations are the same on FCF-A and FCF-B, the FCoE VLANs can be different FCoE VLANs are not carried on the vPC peer-link FCoE and "FIP" ethertypes are not forwarded over the vPC peer link either vPC carrying FCoE between two FCFs is NOT recommended, to keep SAN A/B separation (Figure: the host's vPC contains only 2 x 10GE links, one to each switch; FCF-A carries VLANs 10,20 and FCF-B carries VLANs 10,30, where VLAN 20 = Fabric A, VLAN 30 = Fabric B and VLAN 10 is LAN only; the host ports are STP edge trunks)

UCS Core-Edge Design N-Port Virtualization Forwarding with MDS F_Port channeling and trunking from the MDS to UCS The FC port channel can carry all VSANs (trunk) UCS Fabric Interconnects remain in NPV end-host mode A server vHBA is pinned to an FC port channel A server vHBA has access to the bandwidth of any link member of the FC port channel Load balancing based on FC Exchange_ID (per flow) Loss of a port channel member link has no effect on the server vHBA (hides the failure) Affected flows move to the remaining member links No FLOGI required (Figure: Server 1 in VSAN 1 and Server 2 in VSAN 2, each with vHBA 0/1, attach to the 6100-A and 6100-B fabric interconnects, whose N_Proxy ports form F_Port channels and trunks to NPIV-enabled core switches carrying VSANs 1,2 and 1,3 toward SAN A and SAN B)

Extending FCoE with "FIP" Snooping What does a FIP Snooping device do FIP solicitations (VLAN Discovery, FCF Discovery and FLOGI) sent out from the CNA and FIP responses from the FCF are snooped How does a FIP Snooping device work? The FIP Snooping device will be able to know which FCFs hosts are logged into Will dynamically create an ACL to make sure that the host to FCF path is kept secure V E VF V E V E V E FCF FCF MAC 0E.FC.00.DD.EE.FF A FIP Snooping device has NO intelligence or impact on FCoE traffic/path selection/load balancing/login selection/etc Mentioned in the Annex of the FC-BB-5 (FCoE) standard as a way to provide security in FCoE environments Spoofed MAC 0E.FC.00.DD.EE.FF VN FIP Snooping ENode MAC 0E.FC.00.07.08.09 ENode

Fibre Channel Aware Device FCoE NPV An "FCoE NPV bridge" improves over a "FIP snooping bridge" by intelligently proxying FIP functions between a CNA and an FCF An active Fibre Channel forwarding and security element FCoE NPV load balances logins from the CNAs evenly across the available FCF uplink ports FCoE NPV will take the VSAN into account when mapping or pinning logins from a CNA to an FCF uplink Emulates the existing Fibre Channel topology (same management, security, HA) Fibre Channel configuration and control applied at the edge port Proxies FCoE VLAN discovery Proxies FCoE FCF discovery (Figure: the FCoE NPV switch faces hosts with VF ports and connects to the FCF through a VNP port)
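On Nexus platforms that support it, this mode is enabled as its own feature, after which the uplinks toward the FCF operate as the VNP ports described above (a minimal sketch):
feature fcoe-npv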

Multi-Tier with FCoE NPV NPV Considerations Considered multi-hop because of the intelligence of the edge Designed the same way as FC NPV No local switching on an NPV edge device Usual traffic is north-south from initiator to target, but east-west traffic could be an issue in this design Still want to use dedicated FCoE uplinks to meet or allocate enough bandwidth for oversubscription rates at 10G (Figure: FCFs interconnected at the core over VE ports; the FCoE NPV edge connects its VNP port to a VF port on the NPIV core, with hosts on VF/VN ports below)

Recommended Reading NX-OS and Cisco Nexus Switching, Second Edition (ISBN: 1-58714-304-6), by David Jansen, Ron Fuller, Matthew McPherson. Cisco Press, 2013. Storage Networking Fundamentals (ISBN-10: 1-58705-162-1; ISBN-13: 978-1-58705-162-3), by Marc Farley. Cisco Press, 2007. Storage Networking Protocol Fundamentals (ISBN: 1-58705-160-5), by James Long. Cisco Press, 2006. CCNA Data Center Official Cert Guide 640-911 (ISBN-10: 1-58720-454-1; ISBN-13: 978-1-58720-454-8), by Chad Hintz, Wendell Odom. Cisco Press, 2014. Available onsite at the Cisco Company Store

Participate in the My Favorite Speaker Contest Promote Your Favorite Speaker and You Could be a Winner Promote your favorite speaker through Twitter and you could win $200 of Cisco Press products (@CiscoPress) Send a tweet and include Your favorite speaker s Twitter handle @chadh0517 Two hashtags: #CLUS #MyFavoriteSpeaker You can submit an entry for more than one of your favorite speakers Don t forget to follow @CiscoLive and @CiscoPress View the official rules at http://bit.ly/cluswin

Complete Your Online Session Evaluation Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card. Complete your session surveys though the Cisco Live mobile app or your computer on Cisco Live Connect. Don t forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online

Continue Your Education Demos in the Cisco Campus Walk-in Self-Paced Labs Table Topics Meet the Engineer 1:1 meetings

Thank you