UCS Supported Storage Architectures and Best Practices with Storage

First Things First: Debunking a Myth
- Today (June 2012, UCS 2.0(1x)) there is no FCoE northbound of UCS unless you really, really, really want it.
- That's right: there is no FCoE on external Ethernet uplinks by default. In other words, FCoE is used only at the access layer (server to Fabric Interconnect), in a totally transparent manner.
- You don't need to know anything about FCoE to understand how storage works with UCS. Everything works as usual: servers have HBAs that FLOGI into the SAN fabric, and you use zoning and LUN masking just as you do with rack-mounts today.
- UCS release 1.4 introduced support for direct-connect FCoE targets. Today, this is the only case where you'll see FCoE traffic, and you'll have to go through explicit configuration steps to make it happen.

Exec Summary: UCS 1.4 Storage Features
- New direct-connect topologies: a lower cost point for small UCS "pod"-like deployments and remote-location scenarios, and use of FCoE storage targets with UCS.
- FC port channeling and VSAN trunking: more flexibility in engineering FC traffic than one VSAN per uplink; aggregated uplinks are transparent to host multipath drivers.
- Both features require an upstream MDS or N5K to work.

Exec Summary: UCS 2.0 Storage Features
- iSCSI boot support: integrated boot policies and stateless support; iSCSI HBA modeling (identifiers, equipment view, etc.); M81KR (Palo) and Broadcom 57711 support.
- Hard disk drive (HDD) monitoring without an OS agent, using LSI interfaces and exposed metrics.

Topics
- Introduction to Converged Network Adapters (CNAs); CNA and port WWN considerations
- UCS storage modes of operation and recommendations: End Host Mode / NPV (N_Port Virtualization), and FC switching mode
- Upstream connectivity: F_Port trunking / F_Port channeling
- Direct connectivity of storage port types: what works and what doesn't
- IP-based storage: appliance ports and iSCSI
- Troubleshooting

Introduction to Converged Network Adapters: CNAs and Port WWN Considerations

Unified Fabric with FCoE
- CNA: Converged Network Adapter. The Nexus edge participates in both distinct FC and IP core topologies.
- The CNA presents multiple PCI addresses to the operating system (OS); the OS loads two unique sets of drivers and manages two unique application topologies.
- The server participates in both topologies since it has two stacks, and thus two views of the same unified wire.
- The host FC multipathing driver provides failover between the two fabrics (SAN A and SAN B).
- UCS hardware-based fabric failover (preferred) or OS NIC teaming provides Ethernet traffic failover.
(Diagram: a unified 10GbE wire shared by the FC and IP topologies; the Ethernet driver binds to the Ethernet NIC PCI address, the FC driver binds to the FC HBA PCI address, and the FCF in the Nexus unified edge supports both FC and IP topologies.)

CNA: Converged Network Adapter, Operating System View (Emulex / QLogic)
- Standard drivers, same management.
- The operating system sees a one- or dual-port (depending on hardware) 10 Gigabit Ethernet adapter, and one- or dual-port (depending on hardware) Fibre Channel HBAs.

CNA: Converged Network Adapter, Operating System View (Cisco M81KR "Palo")
- Standard drivers, same management.
- The operating system sees a one- or dual-port (depending on hardware) 10 Gigabit Ethernet adapter, and one- or dual-port (depending on hardware) Fibre Channel HBAs.

vHBAs: WWN Assignment
- WWN assignment works just like Ethernet MAC addresses: either inherited from the burnt-in WWN (not with M81KR!), set manually, or drawn from a pool (recommended).
- Up to two HBAs (called vHBAs) per server with non-M81KR CNAs.
- Backplane path failover does not exist for HBAs! A vHBA goes through either fabric A or fabric B at any given time; OS-level multipathing provides path resiliency.
- Same dynamic pinning concept as with 10GE NICs; manual override is allowed by using SAN pin groups.

A Word on World-Wide Name Formats
Avoid interop issues by choosing IEEE Extended (Type 2) WWNs inside pools:
  Section 1: 2        - identifies the WWN as an extended-format WWN
  Section 2: 0:00     - vendor-specific code; can identify specific ports on a node or extend the serial number (Section 4)
  Section 3: 00:25:B5 - identifies the vendor (00:25:B5 = Cisco)
  Section 4: XX:XX:XX - the serial number for the device
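The four sections above can be composed programmatically. A minimal Python sketch, assuming a zero vendor extension and treating the last three bytes as the pool serial number (the `make_pwwn` helper name and the numbering convention are illustrative, not a Cisco tool):

```python
def make_pwwn(vendor_extension: int, serial: int) -> str:
    """Compose an IEEE Extended (Type 2) WWN from the four sections above:
    Section 1 = 2 (extended format), Section 2 = 12-bit vendor extension,
    Section 3 = Cisco OUI 00:25:B5, Section 4 = 24-bit serial from the pool."""
    if not 0 <= vendor_extension <= 0xFFF:
        raise ValueError("vendor extension must fit in 12 bits")
    if not 0 <= serial <= 0xFFFFFF:
        raise ValueError("serial must fit in 24 bits")
    raw = (0x2 << 60) | (vendor_extension << 48) | (0x0025B5 << 24) | serial
    return ":".join(f"{b:02x}" for b in raw.to_bytes(8, "big"))

print(make_pwwn(0x000, 0x0100BF))  # 20:00:00:25:b5:01:00:bf
```

Note that the result matches the WWPN layout seen later in the `show npv flogi-table` output.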

Best Practice for WWN Pools
- Cisco MDS switches will not let just any random port WWN FLOGI, and the diagnostic is not trivial.
- If your vfc (server-side) interface does not come up, check for malformed WWNs on the upstream MDS using `show flogi internal event-history errors`:
  Event:E_DEBUG, length:146, at 154805 usecs after Fri Sep 4 17:55:13 2009 [102] Err(NAA=5 and IEEE Company ID is zero) invalid node name 50:00:00:00:00:00:00:07 from interface fc1/9; nport name is 20:00:00:00:00:00:04:02.
- As stated on the previous slide, prefer IEEE Type 2 WWNs.
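The rejection above (NAA=5 with an all-zero IEEE company ID) can be caught before deployment with a pre-flight check on pool entries. A hedged Python sketch — `wwn_is_plausible` is a hypothetical helper, not a Cisco utility — that flags the malformed node name from the log while accepting a well-formed Type 2 WWN:

```python
def wwn_is_plausible(wwn: str) -> bool:
    """Reject WWNs an MDS would refuse at FLOGI, e.g. NAA=5 with a zero OUI."""
    try:
        octets = bytes(int(x, 16) for x in wwn.split(":"))
    except ValueError:
        return False
    if len(octets) != 8:
        return False
    naa = octets[0] >> 4
    if naa in (1, 2):        # Type 1 / Type 2 (Extended): OUI in bytes 2..4
        oui = int.from_bytes(octets[2:5], "big")
    elif naa == 5:           # Type 5 (Registered): OUI in the 24 bits after NAA
        oui = (int.from_bytes(octets, "big") >> 36) & 0xFFFFFF
    else:
        return False         # other NAA types are not used in UCS pools
    return oui != 0          # a zero IEEE company ID is malformed

print(wwn_is_plausible("50:00:00:00:00:00:00:07"))  # False: the WWN from the log
print(wwn_is_plausible("20:00:00:25:b5:01:00:bf"))  # True: IEEE Type 2, Cisco OUI
```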

Quick Tip: How Do I Determine My Port WWNs?
- Determine the server's pWWN: it is assigned through the service profile.
- Verify on the host that it matches; on Windows, use the free fcinfo.exe utility.
- Check the local FLOGI for that pWWN on UCS and on the MDS.

Fabric Interconnects: Modes of Operation

What is NPV?
- N_Port Virtualization (NPV) uses NPIV functionality to let a switch act like a server, performing multiple logins through a single physical link.
- Physical servers connected to the NPV switch log in to the upstream NPIV core switch.
- The physical uplink from the NPV switch to the FC NPIV core switch performs the actual FLOGI; subsequent logins are converted (proxied) to FDISCs to log in to the upstream FC switch.
- No local switching is done on an FC switch in NPV mode.
- An FC edge switch in NPV mode does not take up a domain ID.
(Diagram: servers on F_Ports Eth1/1-1/3 of the UCS Fabric Interconnect, each with its own N_Port_ID; the FI's NP port connects to an F_Port on the FC NPIV core switch.)

N_Port Virtualization (NPV) Mode
- The UCS FI works in NPV mode by default: server-facing ports are regular F_Ports, and uplinks toward the SAN core fabric are NP ports.
- UCS distributes (relays) FCIDs to attached devices; there is no domain ID to maintain locally.
- Zoning, FSPF, DPVM, etc. are not configured on the UCS fabrics; the Domain Manager, FSPF, Zone Server, Fabric Login Server, and Name Server do not run on the UCS fabrics.
- No local switching: all FC traffic is routed via the core SAN switches.

N_Port Virtualization (NPV): An Overview
1) The FI FLOGIs to the NPV core. 2) Servers FLOGI to the FI. 3) NPV converts server FLOGIs to FDISCs.
- The NPV core switch is an MDS or a third-party switch with NPIV support.
- You can have multiple uplinks, one VSAN per uplink; two uplinks can be in the same VSAN. Port channeling and trunking are optional.
- UCS FIs A and B relay FCIDs to servers; there is no domain ID to configure on UCS! Servers log in (FLOGI) locally.
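For the flow above to work, the NPV core switch must have NPIV enabled before the FI's NP uplinks come up. A minimal NX-OS sketch for the upstream MDS/N5K (the interface number is a placeholder; adapt to your fabric):

```
switch(config)# feature npiv
switch(config)# interface fc1/9
switch(config-if)# switchport mode F
switch(config-if)# no shutdown
```

Without `feature npiv` on the core, only the FI's own FLOGI would succeed and the proxied server FDISCs would be rejected.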

NPV: Internal Logins (FLOGIs)
When an NP port comes up, UCS itself first FLOGIs into the core. (Diagram: NP port fc2/1 of UCS FI rtp-6100-a logs in to F port P1 of MDS Pod5-9124-123 on VSAN 100, using the fWWN of port P1.)

NPV: Logins from Servers (FDISCs)
Server HBAs then log in to the NPV core. (Diagram: server 1/3's N_Port P4 FLOGIs to vfc688 on FI rtp-6100-a; NPV converts the FLOGI to an FDISC toward F_Port P1 on MDS Pod5-9124-123, VSAN 100, and the server is assigned FCID 0xe70011.)

UCS Storage Defaults and Recommendations for FC
- Default (recommended): N_Port Virtualization (NPV) mode, i.e. End Host Mode for FC; UCS functions as a node port (initiator). Suits small- to large-scale deployments of homogeneous or heterogeneous operating systems, with extensive interoperability with the SAN and array ecosystem.
- Option: FC switching mode. The UCS FI has limited FC switching features and no zoning configuration; you must still have an upstream MDS or Nexus FC switch via an FC uplink. Direct connect from the Fabric Interconnect to a storage array FC target is designed for pod or small-scale deployments, with limited interoperability with the storage ecosystem.

UCS Storage Defaults and Recommendations for FCoE
- Default: FCoE traffic is internal to the UCS system (blade to Fabric Interconnect). FCoE packets are terminated at the Fabric Interconnect; no northbound FCoE packets are transmitted from UCS.
- Option: FC switching mode. The UCS FI has limited FC switching features and no zoning configuration; you must still have an upstream MDS or Nexus FC switch via an FC uplink. Direct connect from the Fabric Interconnect to storage array FCoE targets is possible; use of FCoE storage targets with UCS has limited interoperability with the storage ecosystem.

UCS Storage Defaults and Recommendations for NAS
- Default (recommended): End Host Mode. Superior traffic engineering (native L2 multipathing, no spanning tree) and easier integration into the network. Release 1.4 introduced appliance ports, which allow directly connected NAS filers.
- Option: Ethernet switching mode. As of 1.4 there are no storage-based reasons to use this mode; previous releases required switching mode for direct-connect NAS.

UCS Connectivity Summary
(Diagram: the UCS Fabric Interconnect forms the access layer for both LAN and SAN over a unified fabric (FCoE). UCS Ethernet runs in End Host Mode toward the LAN switch and LAN cloud, with NAS storage off the LAN; UCS FC runs in NPV mode toward an upstream SAN switch in NPIV mode, which fronts the FC storage.)

Upstream Connectivity: SAN Uplinks, Port Channels, and Trunks

FC Uplink VSAN Membership
Default SAN integration is very straightforward:
- Connect UCS to an external SAN switch (MDS / Brocade).
- Make each FC uplink a member of one and only one VSAN.
- There is no F_Port or NP_Port trunking by default.

FC Port Trunking (Multiple VSANs per Link)
- Provides isolation of SAN traffic over the same physical FC link and helps consolidate FC infrastructure; vHBAs can be on different VSANs.
- Uplink FC ports are in NPV mode (N_Port) by default.
- All VSANs are trunked on every uplink FC port; selecting a subset of VSANs for individual uplink ports is not supported.
- Scalability: a maximum of 32 VSANs per UCS system.
- VSAN trunking is supported in FC switch mode as well.
- VSAN trunking is not available for direct-connect FC (or FCoE) storage port types.

FC Port Channels
- Aggregate and maximize available bandwidth while maintaining isolation; increase resiliency and guard against port failures.
- Up to 16 FC ports can be aggregated into a single port channel; FC ports from different expansion modules on the FI can be placed in the same port channel.
- In case of a port speed mismatch, the port channel forces the port speed to the highest commonly supported speed.
- VSANs can be trunked over the port channel.
- VSAN trunking and port channels are supported for both NPV and switch-mode FI operation.
- FC port channeling is not available for direct-connect FC storage port types.
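On the upstream N5K side, an F-port channel toward the FI can be sketched as follows; interface and channel numbers are placeholders, and the matching port channel must also be created and enabled on the UCS side (see the next slides):

```
switch(config)# feature npiv
switch(config)# feature fport-channel-trunk
switch(config)# interface san-port-channel 1
switch(config-if)# channel mode active
switch(config)# interface fc2/1-2
switch(config-if)# channel-group 1 force
switch(config-if)# no shutdown
```

`feature fport-channel-trunk` is what allows the F-port facing the FI to be channeled and trunked rather than carrying a single VSAN.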

How FC Trunking Is Enabled (Global)
This is a global setting for both FIs; the default is not enabled.

Creation and Management of FC Port Channels

FC Port Channeling Requirements and Defaults
- You must activate the channel (as with Ethernet).
- Works in NPV or FC switch mode; MDS and N5K upstream only.
- The load-balancing algorithm is the same as on the N5K: based on the source ID, destination ID, and exchange ID (OX_ID).
- Storage FC ports cannot be channeled or trunked.
SJ2-151-B26-A(nxos)# show interface fc 2/1
fc2/1 is up
    Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
    Port WWN is 20:41:00:05:73:a2:b3:00
    Admin port mode is F, trunk mode is off

FC Port Channel and Trunking Interoperability Caveats
If the UCS system has VSANs configured in the range 3840-4079 and an attempt is made to enable port channeling or trunking, a dialog will warn that the configured VSANs in that range will be disabled and will stop switching traffic. Those VSANs are reserved starting with NX-OS 4.1 in order to support the port-channel/trunk features. NOTE: the MDS and UCS FI restricted VSAN ranges are not identical.

Fabric Interconnects: FC and Direct Storage Connectivity Options

FC SAN Boot
1. Create a service profile.
2. Configure storage (simple or expert).
3. Create vHBAs (1, 2, or more with M81KR).
4. Assign initiator WWPNs (manual or pool).
5. Assign a VSAN to each vHBA.
6. Assign vHBA placement.
7. Select or create a boot policy.
8. Assign the target WWPN and LUN ID.
9. Associate the server.
10. Zone and LUN-mask.
11. Boot to vMedia; install drivers and the OS.

Enable Direct Connection of FC/FCoE Storage
Customer benefits:
- Support for directly connecting FC/FCoE storage to the 6100.
- End-to-end FCoE topologies become possible.
- Lower cost point for small deployments (no access-layer FC switches).
Feature details:
- Support for NetApp and EMC direct-attached storage.
- Ability to turn Fibre Channel switching mode on and off.
- Zoning configuration is not exposed in UCS Manager; zoning must be inherited from an upstream switch.
- Ethernet and FC switching modes are independent.
(Diagram: FC and FCoE storage attached directly to a pair of UCS 6100s above UCS B-Series blades.)

Key Considerations for FC Direct Connect
- There is no FC zoning without an upstream connection to an MDS or N5K. The upstream MDS/N5K allows zoneset merges, but no zoning configuration is exposed in UCS software. As of April 2011, Cisco requires an upstream MDS/N5K to allow zoneset merges.
- External SAN management software: the N5K MIBs are all exposed but read-only; the 6xxx can be added to Fabric Manager (FM), but manageability through FM is limited.

UCS Direct-Connect Storage
Three UCS port types relate to direct storage connectivity:
- Storage FC port: connect this port directly to an FC port on the array.
- Storage FCoE port: connect this port directly to an FCoE port on the array.
- Appliance port: connect this port directly to a 10G Ethernet port on the array.

UCS Manager View of the New Ports

Port Types vs. FI Operating Mode (Green Means Valid in That Mode of Operation)

Hybrid Topology with Direct-Attach and SAN
(Diagram: UCS 6100s attach FC and FCoE storage directly while also uplinking over Fibre Channel to SAN edges A and B and a core SAN fabric with its own storage arrays; Ethernet carries unified I/O. Security is provided via the zoneset merge AND via LUN masking; LUN masking is always used to some extent.)

Unsupported Storage Topologies in UCS (as of June 2012)
- Direct connect without zoning (no upstream MDS/N5K).
- Multi-hop FCoE, or FCoE northbound on Ethernet uplinks (6100 to N5K for FC-to-FCoE bridging is still valid with 1.4 in NPV mode).
- FC connections between the two Fabric Interconnects running in FC switch mode (ISL).
- Connecting any other L2-based device off of the appliance ports: untested and unsupported until further notice.

How to Place the FI into FC Switching Mode
This operation results in both FIs going through reboot cycles, which takes 15 minutes or so: system-wide downtime, by design. When FC switch mode is turned on, all the FC ports come up in TE mode.

Configuring an FC Storage Port
To connect an FC storage device directly to one of the 6100's FC ports, the user must configure the port as an FC storage port. The VSAN is configured under the Storage Cloud object. Internally, the port is configured as an F_Port and as an access port, with the speed kept at auto. The user is also allowed to select a named VSAN for the port.

Configuring an FC Storage Port (GUI)

Configuring a VSAN on an FC Port (GUI)

Configuring an FCoE Storage Port (GUI)

Configuring an FCoE Storage Port (Cont.)
- Since the Ethernet port is configured as a trunk, a native VLAN must be configured on that port, and the FCoE VLAN ID cannot be the same as the native VLAN ID.
- The user is not asked to provide a native VLAN value: an arbitrary VLAN (4048) was chosen and is always used as the native VLAN for FCoE storage ports unless changed.

An FCoE Port Must Be Assigned to a VSAN

IP-Based Storage: Appliance Ports and iSCSI

UCS Manager Appliance Ports
- What is an Ethernet appliance? A specialized device for use on an Ethernet network: for example network-attached storage, iSCSI storage, security appliances, or the Nexus 1010.
- What qualifies as an appliance? An appliance is a specialized external host that does not run STP (today, only NetApp and EMC IP storage devices are qualified).
- Purpose of the appliance port: connect Ethernet appliances only! Do not use this port type for switch connectivity, to avoid traffic loops.

Appliance Ports: How to Configure

Appliance Port Exposed Settings
- Per-port QoS settings, using the normal UCS QoS constructs.
- Manual (static) pinning using pin groups for border-port selection.
- Selection of which VLANs can traverse the port.
- Optionally, the destination MAC address of the filer; some filers do not broadcast their MAC address.

VLANs and Appliance Ports
Similar to the VSAN concept, there are two scopes: the traditional LAN cloud, and the appliance cloud, whose scope is restricted to appliance ports and their associated VLANs. Use the same VLAN ID in both scopes.

Appliance Ports Have Uplink Ports
Appliance ports, like server ports, have an uplink (border) port assigned via either static or dynamic pinning. Loss of the last uplink port results in the appliance port being taken down in UCS; this is the same behavior as the current network control policy.

Storage Controller Ethernet Port High Availability
- Both NetApp and EMC have similar technologies for aggregating ports on their controllers and providing failure handling.
- An in-depth comparison of these technologies is beyond the scope of this document; consult the storage vendor's documentation for the details of their feature set.
- It is important to understand the core behavior of these technologies as they relate to UCS appliance ports in failure cases.
For additional information: http://www.cisco.com/en/us/prod/collateral/ps10265/ps10276/whitepaper_c11-702584.html

Storage Vendor Ethernet Port Aggregation and Failover Terminology
EMC:
- Fail-Safe Network (FSN): layered availability of interfaces; UCS needs active/passive. Member links can be individual links or port channels.
- EtherChannel: a port channel, but mode ON only.
- Link Aggregation: LACP-based.
NetApp:
- Multi-level virtual interface (VIF) interface groups; UCS needs active/passive. Member links can be individual links or port channels.
- Multi-mode static: a port channel, but mode ON only.
- Multi-mode dynamic: LACP-based.

iSCSI Boot
- 1.3: not supported.
- 1.4: supported via manual configuration of the adapter, Broadcom 57711 mezzanine card only. Not productized to any extent by UCSM, and not stateless: the user configures it manually, and UCS Manager is totally unaware of changes to the mezzanine card configuration.
- 2.0 (June/July 2011): fully supported in UCS Manager and fully productized via UCS Manager boot policies, with support for Broadcom and the Cisco M81KR VIC.

iSCSI Feature Overview
- The primary purpose is to support booting via iSCSI. iSCSI hardware offload is not a requirement for booting; only the iSCSI Boot Firmware Table (iBFT) in the option ROM is required.
- This is the first release in which UCS Manager represents an iSCSI device in the model and GUI: a new object called an iSCSI vNIC is created as a child of the standard parent vNIC.
- New pools and policies support iSCSI vNIC attributes in the LAN tab.

iSCSI Boot Flow
1. Create iSCSI vNICs.
2. Create an iSCSI boot policy.
3. Provide UCSM with the iSCSI boot information: target IP and IQN; initiator IP/mask/gateway and IQN.
4. vMedia-map the OS and drivers.
5. The adapter successfully initializes.
6. Install the OS and drivers (if required).

iSCSI and Appliance Port Redundancy
- Host multipathing drivers are used in lieu of link-aggregation network technology; Microsoft does not support using software iSCSI with port channels for iSCSI failover.
- Best practice is to use MPIO drivers; the failure semantics look like FC in this regard.

Storage Vendor Support
- UCS 1.4 direct connect supports only EMC and NetApp for all topologies (FC, FCoE, NAS); more vendors may be added over time given business-case justification.
- iSCSI boot supports only EMC and NetApp arrays; more vendors may be added over time given business-case justification.
Please consult these resources:
http://www.cisco.com/en/us/products/ps10477/prod_technical_reference_list.html
http://www.cisco.com/en/us/docs/unified_computing/ucs/interoperability/matrix/r_hcl_B_rel2.02.pdf

Useful Troubleshooting and Diagnostic Commands

Troubleshooting (Inconsistent Interface)
What if a port is configured as an FC/FCoE storage port but the FC mode is end-host?
- An error message pops up in the GUI (see the next slide).
- The interface is created, but its configState property is set to Inconsistent, and the admin state of the associated physical port is disabled.
- The interface gets out of this state automatically (its configState changes to Applied) if the mode is changed to FC switch.

Troubleshooting (Inconsistent Interface)
The error message shown when a storage port is configured in end-host mode:

FC NPV Mode: Are vHBAs Logging In, and Through Which FC Uplink?
ucstestfi-a(nxos)# show npv flogi-table
--------------------------------------------------------------------------------
SERVER                                                                  EXTERNAL
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME                INTERFACE
--------------------------------------------------------------------------------
vfc922     80    0x01000e  20:00:00:25:b5:01:00:bf  20:00:00:25:b5:0a:00:8f  fc2/5
vfc924     80    0x01000f  20:00:00:25:b5:01:00:df  20:00:00:25:b5:0a:00:8f  fc3/5
vfc946     80    0x01000c  20:00:00:25:b5:01:00:9f  20:00:00:25:b5:0a:00:9f  fc2/5
vfc948     80    0x01000d  20:00:00:25:b5:01:00:af  20:00:00:25:b5:0a:00:9f  fc3/5
vfc1018    80    0x010014  20:00:00:25:b5:01:00:1f  20:00:00:25:b5:0a:00:7f  fc2/5
vfc1020    80    0x010015  20:00:00:25:b5:01:00:3f  20:00:00:25:b5:0a:00:7f  fc3/5
vfc1030    80    0x010010  20:00:00:25:b5:01:00:be  20:00:00:25:b5:0a:00:4f  fc2/5
The PORT NAME column is the WWPN from the service profile; the EXTERNAL INTERFACE column shows which FC uplink each vHBA logged in through.

Useful Troubleshooting CLI in FC Switch Mode

Check on Zoneset Merge (Hybrid Topology)
To check that the zoning configuration has been merged: there is no GUI equivalent to `show zoneset active`; you need to run the NX-OS CLI:
Panther-A(nxos)# show zoneset active vsan 300
zoneset name fabric-a-panther vsan 300
  zone name ls1-netapp1-4b vsan 300
    pwwn 50:0a:09:83:97:b9:4c:e4
    pwwn 20:01:00:25:b5:71:a0:01
  zone name ls1-netapp2-4b vsan 300
  * fcid 0x170001 [pwwn 50:0a:09:83:87:b9:4c:e4]
    pwwn 20:01:00:25:b5:71:a0:01
  zone name ls2-netapp1-4b vsan 300
    pwwn 50:0a:09:83:97:b9:4c:e4
  * fcid 0x170004 [pwwn 20:01:00:25:b5:71:a0:02]
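If no active zoneset exists yet on the upstream switch, the configuration that produces a merge output like the one above can be sketched on the MDS as follows (zone, zoneset, and pWWN values are taken from the example; adapt them to your fabric):

```
Panther-A(config)# zone name ls1-netapp1-4b vsan 300
Panther-A(config-zone)# member pwwn 50:0a:09:83:97:b9:4c:e4
Panther-A(config-zone)# member pwwn 20:01:00:25:b5:71:a0:01
Panther-A(config)# zoneset name fabric-a-panther vsan 300
Panther-A(config-zoneset)# member ls1-netapp1-4b
Panther-A(config)# zoneset activate name fabric-a-panther vsan 300
```

Since zoning configuration is not exposed in UCS Manager, all zones are defined upstream like this and inherited by UCS through the zoneset merge.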

Troubleshooting CLI: Zoneset Merge
How to verify that switches were merged into the same fabric (principal vs. local in the domain context):
Panther-A(nxos)# show fcdomain domain-list vsan 300
Number of domains: 3
Domain ID  WWN
---------  -----------------------
0x45(69)   21:2c:00:0d:ec:a3:9c:01 [Principal]
0x17(23)   21:2c:00:0d:ec:d2:ce:01 [Local]
0x7f(127)  21:2c:00:0d:ec:d0:9c:81

iSCSI Boot Troubleshooting: M81KR (Palo)
- Disable quiet boot for your blade models; this makes troubleshooting much easier.
- If your service profile and iSCSI configuration are correct, you will see the Palo initialization during POST:
cae-sj-ca1-a# conn adapter 1/8/1
adapter 1/8/1 # connect
adapter 1/8/1 (top):1# attach-mcp
adapter 1/8/1 (mcp):1# iscsi_get_config
vnic iSCSI Configuration:
----------------------------
vnic_id: 5
link_state: Up
Initiator Cfg:
  initiator_state: ISCSI_INITIATOR_READY
  initiator_error_code: ISCSI_BOOT_NIC_NO_ERROR
  vlan: 0
  dhcp status: false
  IQN: eui.87654321abcdabcd
  IP Addr: 172.25.183.142
  Subnet Mask: 255.255.255.0
  Gateway: 172.25.183.1
Target Cfg:
  Target Idx: 0
  State: ISCSI_TARGET_READY
  Prev State: ISCSI_TARGET_DISABLED
  Target Error: ISCSI_TARGET_NO_ERROR
  IQN: iqn.1992-08.com.netapp:sn.101202278
  IP Addr: 172.25.183.49
  Port: 3260
  Boot Lun: 0
  Ping Stats: Success (9.698ms)
Session Info:
  session_id: 0
  host_number: 0
  bus_number: 0
  target_id: 0

FC Boot Troubleshooting: M81KR (Palo)
Using lunlist to troubleshoot. Here things look good: the associated service profile can see the target LUN for boot.
FIELD-TME# connect adapter 3/1/1
adapter 3/1/1 # connect
adapter 3/1/1 (top):1# attach-fls
adapter 3/1/1 (fls):1# vnic
---- ---- ---- ------- -------
vnic ecpu type state   lif
---- ---- ---- ------- -------
7    1    fc   active  4
8    2    fc   active  5
adapter 3/1/1 (fls):2# lunlist 7
vnic : 7 lifid: 4
  - FLOGI State : flogi est (fc_id 0x050a02)
  - PLOGI Sessions
    - WWNN 26:02:00:01:55:35:0f:0e fc_id 0x050500
      - LUN's configured (SCSI Type, Version, Vendor, Serial No.)
        LUN ID : 0x0001000000000000 (0x0, 0x5, Promise, 49534520000000000000000043B2D58130F35E1)
      - REPORT LUNs Query Response
  - Nameserver Query Response
    - WWPN : 26:02:00:01:55:35:0f:0e

M72KR (Emulex/QLogic) CNA Considerations
With the QLogic and Emulex gen-2 cards, the FCoE VLAN ID cannot be the same as VLAN ID 1. Any vFC whose VSAN has FCoE VLAN ID 1 conflicts with VLAN ID 1 and will remain in the DOWN state. Solution: change the default VSAN 1's FCoE VLAN ID from 1 to 10 (or something else that does not conflict).

Complete Your Online Session Evaluation
- Give us your feedback and you could win fabulous prizes; winners are announced daily.
- Receive 20 Passport points for each session evaluation you complete.
- Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center.
- Don't forget to activate your Cisco Live Virtual account for access to all session material, communities, and on-demand and live activities throughout the year. Activate your account at the Cisco booth in the World of Solutions or visit www.ciscolive.com.

Final Thoughts
- Get hands-on experience with the walk-in labs located in the World of Solutions, booth 1042.
- Come see demos of many key solutions and products in the main Cisco booth, 2924.
- Visit www.ciscolive365.com after the event for updated PDFs, on-demand session videos, networking, and more!
- Follow Cisco Live! using social media:
  Facebook: https://www.facebook.com/ciscoliveus
  Twitter: https://twitter.com/#!/ciscolive
  LinkedIn Group: http://linkd.in/ciscoli