Designing and Deploying a Cisco Unified Computing System SAN Using Cisco MDS 9000 Family Switches


What You Will Learn

The Cisco Unified Computing System helps address today's business challenges by streamlining data center resources, scaling service delivery, and radically reducing the number of devices requiring setup, management, power and cooling, and cabling. However, these features pose interesting challenges in designing and deploying a SAN. This document discusses these challenges and how they can be addressed using Cisco MDS 9000 Family SAN switches, with their superior architecture and rich feature set.

This document discusses how the architecture and features of Cisco MDS 9000 Family switches can be used to build a highly scalable and available SAN for Cisco Unified Computing System deployments. The first section discusses the unique characteristics of the Cisco Unified Computing System. The second section discusses the effect of those characteristics on SAN design and presents considerations for a good SAN design. The next section presents deployment guidelines to address the challenges posed. The final section provides detailed procedures for installing and deploying a Cisco Unified Computing System SAN.

Characteristics of the Cisco Unified Computing System

The Cisco Unified Computing System brings unique capabilities to scale and simplify server deployment, using features that are unique in the industry. Some of the main features that are relevant to a SAN design are discussed here.

High-Density Server Chassis

Generally, blade systems provide higher server density than traditional rack-mount servers. The Cisco Unified Computing System provides exceptional server density by increasing scalability through a design offering up to 320 discrete servers and thousands of virtual machines in a single highly available management domain.
All Servers Are SAN Ready

Cisco's unified fabric technology reduces costs by eliminating the need for multiple sets of adapters, cables, LAN and SAN switches, and high-performance computing networks. Hence, every server is capable of connecting to both the LAN and the SAN using converged network adapters (CNAs) and unified fabric interconnects. The default SAN enablement of every server (both physical and virtual) provides SAN connectivity without the need to justify incremental investments in host bus adapters (HBAs).

Optimized for Server Virtualization

With Cisco Extended Memory Technology, the Cisco Unified Computing System provides a larger memory footprint. In addition, the servers offer the high performance of the newest generation of Intel processors. These two features dramatically increase the number of virtual servers hosted per physical server compared to traditional blade servers. In addition, server profiles and the stateless computing architecture of the Cisco Unified Computing System make the system well suited for highly scalable virtualization deployments.

© 2010 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

SAN Design Considerations

This section discusses important considerations to address the effects of the Cisco Unified Computing System characteristics on SAN design. A list of decision variables that affect the SAN design is provided at the end of this section.

High Density and Unified Fabric Increase SAN Scalability Requirements

High density in the server rack means more servers attached to the network. With SAN-ready servers, the SAN-attach rate will increase with Cisco Unified Computing System deployments. This increase leads to more server logins and demands greater scalability and performance from the SAN infrastructure.

Virtualized Servers Need a SAN That Can Provide Predictable High Performance

Server virtualization brings agility to servers along with the capability to host multiple virtual servers on physical servers. However, this agility demands predictable behavior and performance from the underlying networks, including the SAN. When servers move from one physical resource to another, SAN connectivity to the storage may change depending on the location and connectivity of the physical servers. Hence, the SAN should perform in a highly predictable way that does not change depending on the location of the virtual machine.

Server SAN Connectivity Is Through Cisco UCS 6100 Series Fabric Interconnects

Network connectivity to Cisco Unified Computing System servers is provided through Cisco UCS 6100 Series Fabric Interconnects, which separate the LAN and SAN traffic and hand it off to the corresponding upstream LAN and SAN core and aggregation switches (Figure 1). For SAN connectivity, the fabric interconnect operates in N-Port Virtualizer (NPV) mode. In this mode, the servers are assigned to uplinks connecting to the upstream switch either automatically or using pinning.

Figure 1. SAN Connectivity for the Cisco Unified Computing System

Oversubscription and Scalability Limits Affect Performance

When designing any SAN, it is important to understand the performance requirements of the applications and design the SAN to meet those requirements. Several factors influence the SAN design, including the number of servers, the number of targets, the type of application (transactional or large block), the bandwidth requirements for servers and targets, and the type of switching modules and switches. These factors lead to oversubscription, scalability, and performance requirements. In addition to these parameters, for a Cisco Unified Computing System SAN deployment, other factors need to be considered because of the unique features of the environment. The high-density server complex combined with the virtualization-optimized architecture leads to more servers[1] attached to each port on the director, increasing the number of servers serviced by switch ports and modules, as well as potentially leading to a higher fan-in ratio of servers to storage ports. Hence, the following factors need to be considered as well:

- Number of virtual servers per blade server
- Number of blade servers per Cisco Unified Computing System chassis
- Number of chassis served by each pair of Cisco UCS fabric interconnects

Higher Number of Servers Accessing the Storage Ports Requires Management

Because of the higher server density, each storage port may be servicing a higher number of servers, including both physical and virtual servers. The physical servers are identified by a login on the storage port, but the virtual servers may not be. Typically, storage arrays have upper limits on the number of logins they can support; however, the number of virtual servers may not be counted against these limits. It is important to keep these numbers at a reasonable level by considering the number of ports on the storage arrays and the bandwidth of each of the connected ports.
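As a rough illustration of how these variables combine, the oversubscription arithmetic can be sketched in a few lines. This is a hypothetical sizing helper, not a Cisco tool; the figures are assumptions taken from the design discussed in this document (an 8-chassis domain, 8 half-width blades per chassis, one 4-Gbps vHBA per blade per fabric, and eight 4-Gbps Fibre Channel uplinks per fabric):

```python
# Hypothetical sizing sketch: oversubscription for a UCS domain uplinked to MDS.
# All figures are assumptions for illustration, not product limits.

def oversubscription(chassis, blades_per_chassis, vhba_gbps, uplinks, uplink_gbps):
    """Return (required Gbps, available Gbps, oversubscription ratio) per fabric."""
    required = chassis * blades_per_chassis * vhba_gbps   # Gbps demanded by servers
    available = uplinks * uplink_gbps                     # Gbps offered by uplinks
    return required, available, required / available

required, available, ratio = oversubscription(8, 8, 4, 8, 4)
print(required, available, ratio)  # 256 Gbps required, 32 Gbps available, 8:1
```

An 8:1 ratio falls inside the typical 5-to-12 range; adding uplinks or reducing SAN-attached blades per fabric interconnect lowers it.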
SAN Design Parameters

The preceding design considerations typically lead to the following decision variables, which are considered typical SAN design parameters.

Number of Uplinks Between Cisco UCS Fabric Interconnects and the Cisco MDS 9000 Family SAN Switch

Using hot-pluggable expansion modules, up to eight[2] Fibre Channel uplinks can be deployed. This translates to a maximum uplink bandwidth of 32 Gbps (each link provides a maximum of 4 Gbps[3]). Each blade server can have two 4-Gbps virtual HBAs (vHBAs): one for each of the physical SANs. A Cisco UCS 5100 Series Blade Server Chassis can host up to eight half-width (or four full-width) blades, so the total maximum bandwidth required by the servers in one chassis is 32 Gbps, assuming that all the servers are SAN attached and need the maximum permissible bandwidth. For example, for a fully populated eight-chassis Cisco Unified Computing System domain with all the servers needing SAN connectivity, the maximum SAN bandwidth required for the servers is 256 Gbps (8 chassis * 8 blades per chassis * 4 Gbps). The ratio of required server bandwidth to available uplink bandwidth is the oversubscription ratio. This ratio determines the number of uplinks needed, as well as the number of servers per chassis possible for a given Cisco UCS 6100 Series Fabric Interconnect. The typical oversubscription ratio ranges from 5 to 12 depending on the application.

[1] Virtual servers log into the fabric only if N-port ID virtualization (NPIV) is enabled on the virtual infrastructure. NPIV is supported in VMware ESX (NPIV using raw device mapping (RDM) mode) and Hyper-V. Note, however, that virtual servers in any form, with or without explicit login, affect SAN scalability and oversubscription.
[2] This number applies to the Cisco UCS 6120XP 20-Port Fabric Interconnect with Cisco Unified Computing System Release 1.1(1j). The value is 16 uplinks on the Cisco UCS 6140XP 40-Port Fabric Interconnect.
[3] As of Cisco Unified Computing System Release 1.1(1j).

Type and Number of Cisco MDS 9000 Family Switches Required

The number of uplinks and the number of Cisco UCS 6100 Series Fabric Interconnects, along with the number of storage arrays, dictate the type and number of Cisco MDS 9000 Family switches required. The number also depends on the high-availability requirements of the overall deployment. Cisco MDS 9500 Series Multilayer Directors provide superior high availability compared to fixed-configuration fabric switches.

Type and Number of Switching Modules Required on Each Cisco MDS 9000 Family Switch

The type and number of switching modules depend on the bandwidth requirements of the applications, servers, and storage. The Cisco MDS 9000 Family provides very flexible options in terms of port cost and density to optimize deployment cost. The storage array ports may need higher connectivity speeds because of the higher server density in a virtualized system.

Number of Dedicated-Bandwidth Ports Needed on Switching Modules

The bandwidth requirements on the uplinks, and hence on the corresponding ports on the Cisco MDS 9000 Family switch, must be higher in a Cisco Unified Computing System deployment for all the reasons explained previously. Typically, switching modules are optimized for a certain maximum bandwidth depending on the type of module and the switch containing them. Depending on applications and connectivity, the capability to provide a certain bandwidth to certain SAN ports is critical. For example, storage ports traditionally need higher, dedicated bandwidth because more servers access storage ports. Likewise, because there are more servers behind a switch port connected to a Cisco UCS 6100 Series Fabric Interconnect, dedicated bandwidth may be required. The use of dedicated bandwidth will also help optimize available bandwidth across different classes of devices.

Deployment Guidelines

Like the Cisco Unified Computing System, the Cisco MDS 9000 Family is designed to provide scalability and performance while simplifying management. This section discusses how Cisco MDS 9000 Family design methodologies and features can be used to address the challenges posed in Cisco Unified Computing System SAN deployments.
Cisco MDS 9000 Family Virtual SANs for Managing the Large Number of Servers

Cisco virtual SANs (VSANs) provide the SAN designer with new tools to highly optimize the scalability, availability, security, and management of SAN deployments. VSANs provide the capability to create completely isolated fabric topologies, each with its own set of fabric services, on top of a scalable common physical infrastructure (Figure 2). VSANs can be used to achieve optimized SAN deployment for the Cisco Unified Computing System.

Figure 2. Optimizing Cisco Unified Computing System SAN Deployments Using VSANs

Increase Scalability of SAN While Maintaining Application Isolation

Because of its higher server density, the Cisco Unified Computing System aids consolidation of different business functions on a smaller physical infrastructure. However, the solution must provide the same level of security and isolation as provided by a solution based on discrete servers or by unconnected networks. While the foundation of fabric security is embedded in the VSAN-capable hardware, the holistic approach to security provided by the Cisco MDS 9000 Family helps ensure security even in a highly consolidated environment by dividing the physical SAN into logical VSANs (Figure 3).

Figure 3. Maintaining Application Isolation While Increasing the Scalability of the SAN

Enhance Management Security for Servers Using Role-Based Access Control

VSANs provide the capability to segregate the servers into different groups along with the SAN services and hence provide security for management. Using role-based access control (RBAC), which is available on both the Cisco MDS 9000 Family and Cisco UCS Manager, end-to-end logical separation can be achieved.

Deliver Optimized SAN Performance to Servers Using Cisco MDS 9000 Family Quality of Service

The Cisco Unified Computing System helps consolidate many physical servers into a virtualized environment. The challenge is to deliver the right SAN performance to the right group of servers to optimize the overall SAN performance. The Cisco MDS 9000 Family's quality-of-service (QoS) feature can be used to provide the right SAN service level to all groups of servers. The QoS feature also has the flexibility to prioritize a specific flow, as shown in Figure 4, and to rate-limit a specific flow to a user-defined threshold.

Figure 4. SAN Traffic Management for Cisco Unified Computing System Servers

Cisco Unified Computing System Uplink Management

The uplinks for the servers can be configured in two ways depending on the requirements. Although automatic configuration makes deployment easy, manual configuration using pin groups provides more control in assigning certain servers to specific uplinks.

Figure 5. Cisco Unified Computing System Servers Can Be Automatically Assigned to Uplinks

With autoselection, the vHBAs (and hence the Cisco Unified Computing System servers) are uniformly assigned to the available uplinks depending on the number of logins on each uplink (Figure 5). This approach should work for most deployments and is simple to deploy and manage. However, if you need more control, you should use manual assignment through pin groups. With this approach, you can assign a group of uplinks to a group of servers, thereby dedicating a certain uplink bandwidth to that group of servers. You create a pin group using one Fibre Channel port on the expansion module. A Cisco Unified Computing System server, through its vHBA, can be configured to use a certain pin group (only one pin group is allowed) to access the Cisco MDS 9000 Family SAN, as shown in Figure 6. If multiple servers are assigned to the same pin group, they are load-balanced among the ports in the pin group just as in the case of automatic assignment.

Figure 6. Uplinks Can Be Assigned Using Pin Groups

If a link fails, the servers on the failed uplink experience traffic disruption. However, the servers will be automatically moved to other available links. In the case of a pin group, the automatic move may be limited by the number of available uplinks in that pin group. If NPV is used on Cisco MDS 9000 Family fabric switches, the possibility of traffic disruption can be eliminated through the use of F-port PortChannels. Cisco UCS 6100 Series Fabric Interconnects and Cisco MDS 9000 Family fabric switches support the same NPV technologies, and the Cisco Unified Computing System will support F-port PortChannels in the future.

High Availability

For high availability, servers are dual-homed to two parallel and physically separate SANs: SAN A and SAN B. In such a design, it is important to maintain SAN fabric isolation from the server to the storage to achieve the best high-availability behavior.
The same principles should also be followed in Cisco Unified Computing System deployments. Figure 7 shows a typical scenario. In this deployment, a fabric interconnect is connected to both Cisco MDS 9000 Family switches in the core. For a given server, both vHBAs could be logged into one Cisco MDS 9000 Family switch in the core, depending on how NPV on the fabric interconnect load-balances the device logins. However, this creates a single point of failure if one of the Cisco MDS 9000 Family switches fails.
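The wiring rule that avoids this single point of failure can be expressed as a simple check. This is an illustrative sketch only; the switch names and topology data are invented, and this is not a Cisco UCS Manager API:

```python
# Illustrative check that SAN A and SAN B stay physically isolated:
# each fabric interconnect's uplinks must land on its own fabric's MDS switch.
# Names and topology data are invented for this sketch.

EXPECTED = {"FI-A": "MDS-A", "FI-B": "MDS-B"}  # recommended wiring

def fabric_isolated(uplinks):
    """True if no uplink crosses fabrics (no MDS switch shared between SANs)."""
    return all(EXPECTED[fi] == mds for fi, mds in uplinks)

good = [("FI-A", "MDS-A"), ("FI-B", "MDS-B")]
bad = [("FI-A", "MDS-A"), ("FI-B", "MDS-A")]  # FI-B cross-connected to SAN A
print(fabric_isolated(good))  # True
print(fabric_isolated(bad))   # False
```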

Figure 7. Connecting Fabric Interconnects to Cisco MDS 9000 Family Switches in Both Fabrics Creates a Single Point of Failure

It is important to assign the vHBAs to the correct Cisco UCS 6100 Series Fabric Interconnects (for example, the SAN A vHBA is connected only to the SAN A Cisco UCS fabric interconnect) and to connect the Cisco Unified Computing System uplinks to the corresponding Cisco MDS 9000 Family switch in the core, as shown in Figure 8.

Figure 8. Recommended SAN Connectivity for Fabric Interconnects

SAN Zoning

Zoning is a way to restrict communication between the devices in a SAN. Although it may seem simplest to create fewer zones, perhaps one zone per application or application cluster, this approach may result in suboptimal scalability of SAN resources. You should follow best practices to get the most out of your SAN. For example,

consider two Cisco Unified Computing System servers, S1 and S2 (in Figure 9), that need to communicate with the storage in the following way:

S1, S2 > D1, D2, D3

Figure 9. SAN Zoning for Cisco Unified Computing System Servers

A simple option is to create a zone with the following contents:

Z1: S1, S2, D1, D2, D3

Although this zone may work fine, such a model does not optimize the use of SAN resources. For example, although S1 and S2 do not need to communicate with each other, SAN resources will be provisioned to allow them to communicate. The same thing happens with D1, D2, and D3. Table 1 shows two of the most widely used approaches to zoning.

Table 1. Common SAN Zone Assignments

Two-Member Zones (One Initiator and One Target):
Z1-1: S1, D1
Z1-2: S1, D2
Z1-3: S1, D3
Z1-4: S2, D1
Z1-5: S2, D2
Z1-6: S2, D3

One-Initiator Zones (One Initiator and Multiple Targets):
Z1-1: S1, D1, D2, D3
Z1-2: S2, D1, D2, D3

Although two-member zones make efficient use of SAN switch resources, they increase the number of zones needed, and there are upper limits to the number of zones supported. One-initiator zones reduce the number of zones needed, but they require more SAN switch resources. Hence, the decision is a trade-off between the size of each zone and the number of zones.
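The trade-off in Table 1 can be quantified with a small sketch (a hypothetical helper, using the server and storage names from Figure 9):

```python
from itertools import product

# Sketch of the Table 1 trade-off: zone count versus zone size.

def two_member_zones(initiators, targets):
    """One zone per (initiator, target) pair."""
    return [(i, t) for i, t in product(initiators, targets)]

def one_initiator_zones(initiators, targets):
    """One zone per initiator, containing all of its targets."""
    return [(i, *targets) for i in initiators]

servers, disks = ["S1", "S2"], ["D1", "D2", "D3"]
print(len(two_member_zones(servers, disks)))    # 6 zones of 2 members each
print(len(one_initiator_zones(servers, disks))) # 2 zones of 4 members each
```

With more initiators and targets the two-member count grows as initiators times targets, while the one-initiator count grows only with the number of initiators, which is why the zone-count limit matters in dense Cisco Unified Computing System deployments.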

SAN Booting to Allow Server Mobility

Booting over a network (LAN or SAN) is a mature technology and an important step in moving toward stateless computing, which eliminates the static binding between a physical server and the OS and applications it is supposed to run. The OS and applications are decoupled from the physical hardware and reside on the network. The mapping between the physical server and the OS on the network is performed on demand when the server is deployed. Some of the benefits of booting from a network are:

- Reduced server footprint, because fewer components (no disk) and resources are needed
- Simplified disaster and server-failure recovery
- Higher availability because of the absence of failure-prone local hard drives
- Rapid redeployment
- Centralized image management

With SAN booting, the image resides on the SAN, and the server communicates with the SAN through an HBA (Figure 10). The HBA's BIOS contains the instructions that enable the server to find the boot disk. A common practice is to expose the boot disk to the server as LUN ID 0.

Figure 10. SAN Booting for Cisco Unified Computing System Servers

The Cisco UCS M71KR-E Emulex CNA, Cisco UCS M71KR-Q QLogic CNA, and Cisco UCS M81KR Virtual Interface Card (VIC) are all capable of booting from a SAN.

Management of Virtual Servers in the SAN

Typically, virtual servers do not have an identity in the SAN: they do not log in to the SAN as physical servers do. However, if controlling and monitoring of the virtual servers is required, N-port ID virtualization (NPIV) can be used. This approach requires you to:

- Have a Fibre Channel adapter and SAN switch that support NPIV
- Enable NPIV on the virtual infrastructure, such as by using VMware ESX raw device mapping (RDM)
- Assign virtual port worldwide names (pWWNs) to the virtual servers

- Provision the SAN switches and storage to allow access

By zoning the virtual pWWNs in the SAN to permit access, you can control virtual server SAN access just as with physical servers. In addition, you can monitor virtual servers and provide service levels just as with any physical server.

Deployment Procedure

This section presents all the steps needed to boot a Cisco Unified Computing System server from a Cisco MDS 9000 Family SAN. The steps include the configuration needed on the Cisco MDS 9000 Family switches using Cisco Fabric Manager and on the Cisco UCS 6100 Series Fabric Interconnects using Cisco UCS Manager. This discussion assumes that the basic provisioning of IP addresses, server profiles, and so on has been performed on the Cisco Unified Computing System. For the provisioning of bare-metal servers on the Cisco Unified Computing System, please refer to the Cisco Unified Computing System configuration guide (see For More Information at the end of this document). This discussion also assumes that the physical connections from the Cisco UCS fabric interconnect Fibre Channel ports to the corresponding Cisco MDS 9000 Family switch ports have been set up.

1. Enable NPIV on the Cisco MDS 9000 Family switch.

NPIV must be enabled on Cisco MDS 9000 Family switches on a switch-by-switch basis. It must be enabled on all Cisco MDS 9000 Family switches that will connect to the Cisco Unified Computing System. You can enable NPIV in the physical attributes settings in Cisco Fabric Manager.

Install the Fibre Channel license on Cisco UCS Manager. Before proceeding with Cisco Unified Computing System provisioning, make sure that the Fibre Channel expansion module has the required licensing to enable the Fibre Channel ports. Use the show license usage command on the Cisco UCS 6100 Series Fabric Interconnect to view the licensing information. (Enter connect nxos from the initial log-in prompt to access the show license usage command.)

Cisco UCS16-SWITCH-A(nxos)# show license host-id
License hostid: VDH=SSI13180GNW

Cisco UCS16-SWITCH-A(nxos)# show license usage
Feature                      Ins  Lic Count  Status  Expiry Date  Comments
--------------------------------------------------------------------------
FM_SERVER_PKG                No   -          Unused  -
ENTERPRISE_PKG               No   -          Unused  -
FC_FEATURES_PKG              Yes  -          Unused  -
ETH_PORT_ACTIVATION_PKG      No   8          Unused  Never        -
ETH_MODULE_ACTIVATION_PKG    No   0          Unused  -

Note that if the FC_FEATURES_PKG license is not available (No appears in the Ins column), the Fibre Channel ports on the expansion modules will not be available.

2. Bring up the Cisco UCS fabric interconnect uplinks for SAN connectivity.

The Cisco UCS fabric interconnect uplinks must be brought up before Cisco Fabric Manager can discover the Cisco Unified Computing System.

a. On the Cisco MDS 9000 Family side, make sure the port mode and the speed are both set to auto. Set the rate mode to dedicated to allow maximum bandwidth for the Cisco Unified Computing System uplink. Set the Admin status to up to bring up the port.

b. In Cisco UCS Manager, make sure that the corresponding port is enabled and the VSAN matches the one on the Cisco MDS 9000 Family side.
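If you script preflight checks, the license state from step 1 can be read out of the show license usage output. This is an illustrative parse only; it assumes the column layout shown above and is not a Cisco-provided tool:

```python
# Illustrative parse of `show license usage` output to confirm the
# FC_FEATURES_PKG license is installed before enabling FC ports.
# Assumes the column layout shown in the example output above.

def fc_license_installed(show_license_output):
    """Return True if the FC_FEATURES_PKG line shows 'Yes' in the Ins column."""
    for line in show_license_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "FC_FEATURES_PKG":
            return len(fields) > 1 and fields[1] == "Yes"
    return False

sample = """FM_SERVER_PKG No - Unused -
FC_FEATURES_PKG Yes - Unused -
ETH_PORT_ACTIVATION_PKG No 8 Unused Never -"""
print(fc_license_installed(sample))  # True
```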

3. Discover the Cisco Unified Computing System chassis from Cisco Fabric Manager.

Open Cisco Fabric Manager and point it to the seed switch for the SAN. At this point, Cisco Fabric Manager should be able to discover the Cisco Unified Computing System chassis. However, if the Cisco Unified Computing System does not have the same Simple Network Management Protocol Version 3 (SNMPv3) user as the Cisco MDS 9000 Family switches, it will be shown with a red cross mark.[4] The mark indicates that Cisco Fabric Manager could not fully discover the Cisco Unified Computing System, and it will not be able to show the servers logged into the Cisco Unified Computing System.

[4] This mapping of the Cisco Unified Computing System in Cisco Fabric Manager is supported from the latest release of Cisco NX-OS Software. Prior versions showed the Cisco Unified Computing System as an NPV switch with devices hanging off the switch.

To fully discover and manage the Cisco Unified Computing System chassis from Cisco Fabric Manager, create the same SNMPv3 users[5] on Cisco UCS Manager and the Cisco MDS 9000 Family switches. Cisco Fabric Manager uses the login information to access the Cisco MDS 9000 Family switches and Cisco UCS fabric interconnects, so both must have the same set of users.

a. Use Cisco UCS Manager[6] to create the SNMPv3[7] user as follows: On the Admin tab, set Filter to All, select Communication Management, and select Communication Services. In the SNMP Users area, fill in the information for the new SNMP user and then click OK.

[5] Note that the default Cisco UCS Manager admin user does not have SNMP privileges, and hence a user name other than admin must be used.
[6] Depending on the version of the Cisco Unified Computing System, you may need to explicitly enable SNMP. Refer to the Cisco Unified Computing System configuration guide and release notes.
[7] SNMPv3 is supported on the Cisco Unified Computing System starting from Release 1.0(2d).

Use Cisco Fabric Manager to create the SNMPv3 users for the Cisco MDS 9000 Family switches in the network by navigating to the Users tab.
4. Rediscover the Cisco Unified Computing System from Cisco Fabric Manager. In Cisco Fabric Manager, open the File menu and return to the control panel. Go to the Fabrics tab and remove the current fabric. Go to the Open tab, select the same Cisco MDS 9000 Family switch (not the Cisco Unified Computing System, because an NPV switch cannot be the seed switch), and click Discover to rediscover the switch with the newly created username and password. This process should discover both the Cisco MDS 9000 Family switch and the Cisco UCS 6100 Series Fabric Interconnect. After discovery, the Cisco UCS 6100 Series Fabric Interconnect appears on the fabric as shown here. Note that you can invoke Cisco UCS Manager through Cisco Fabric Manager by right-clicking the Cisco Unified Computing System icon.
Note: This feature is not available prior to Cisco NX-OS Software Release 5.0(1a).

5. Configure VSANs on the Cisco Unified Computing System and Cisco MDS 9000 Family switches.
a. Create a VSAN on the Cisco Unified Computing System using Cisco UCS Manager.
b. Create VSANs on the Cisco MDS 9000 Family switches using Cisco Fabric Manager.
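For reference, the MDS-side VSAN can also be created from the NX-OS CLI rather than Cisco Fabric Manager. The following is a sketch; VSAN 1003, the VSAN name, and interface fc1/1 are example values, and the VSAN ID must match the VSAN configured in Cisco UCS Manager:

```
! Create VSAN 1003 on an MDS 9000 Family switch and assign an interface to it.
mds-switch# configure terminal
mds-switch(config)# vsan database
mds-switch(config-vsan-db)# vsan 1003 name UCS-Fabric-A
mds-switch(config-vsan-db)# vsan 1003 interface fc1/1
mds-switch(config-vsan-db)# end
```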

6. Configure Cisco UCS uplinks for SAN connectivity. At this point, the uplinks may already be active. However, if you need more control, you can create pin groups to dedicate uplinks to specific servers. Create a pin group for each SAN (invoked from the SAN tree in Cisco UCS Manager). SAN connectivity depends on where each of the Fibre Channel ports on the expansion modules is connected.
7. Associate the uplinks with servers. Servers access the uplinks through vHBAs. A vHBA can use all the available uplinks in a given VSAN, or only specific uplinks through pin groups. The uplinks are assigned to the servers through vHBAs, and this association must be created during vHBA creation. Be sure to create the vHBAs and assign them the correct fabric ID and VSAN. If the pin group setting is left blank, the vHBA is associated with all available uplinks in the vHBA's VSAN. Note that the VSAN on the vHBA and the pin group setting must match for the server to come up.
8. Set up SAN booting for the Cisco Unified Computing System servers. The SAN booting configuration process has three parts: storage array configuration, SAN zoning configuration, and Cisco Unified Computing System service profile configuration.

Storage Array Configuration
Provision a LUN of the correct size to hold the OS image. This LUN must be LUN 0, and the server will boot the OS image from it. In addition, configure LUN masking so that the server has access to the LUN. This configuration is typically performed using the server's pWWN: that is, the pWWN of the corresponding vHBA. The LUN masking procedure is specific to the storage array and is usually performed using the array's device manager or command-line interface (CLI).
SAN Zoning Configuration
The SAN controls the login of devices and the communication between them. On Cisco MDS 9000 Family switches, port security allows users to control device logins, and zoning controls interdevice communication. By default, port security is disabled; if the default setting is used, you do not need to do anything. If port security is enabled, either add the vHBA pWWNs to the port security database or configure port security to allow the Cisco UCS server logins. Zoning, however, is enabled by default, so the Cisco Unified Computing System servers and their corresponding targets must be added to the zoning database before a Cisco UCS 5100 Series blade's vHBA can access the target boot LUN on the SAN fabric. Refer to step 9 for information about configuring zoning on the Cisco MDS 9000 Family.
Service Profile Configuration on Cisco UCS Manager
Follow these steps to enable SAN booting from Cisco UCS Manager:
a. During service profile configuration, for the first and second interfaces, configure the vHBA and assign it a WWN from a preset WWN pool. Also assign the VSAN for the port. In the example here, vhba1 and vhba2 are created in VSAN 1003.
Note on the WWN Pool
An incorrect choice of WWN format will result in login failures on the Cisco MDS 9000 Family switches. The recommended format for a vHBA WWN is 20:00:00:25:B5:xx:yy:zz. Users can use the last three octets for sequential numbering and other information.
For instance:
xx = 00 indicates that this is a node WWN (not a pWWN)
xx = 01 indicates that the vHBA is associated with fabric interconnect SAN A
xx = 02 indicates that the vHBA is associated with fabric interconnect SAN B
yy:zz = sequential numbers for the vHBA
This format also aids in tracing and troubleshooting server login problems.
b. Create the boot policy by right-clicking Boot Policies under Policies > root.
Note: Follow Cisco's recommended virtual WWN format. Refer to the Cisco Unified Computing System configuration guide for more information.
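As a concrete illustration of this convention, a WWN pool for two blades across two fabrics might look like the following. All of these values are hypothetical examples, not addresses from the original configuration:

```
! Example WWNs following the 20:00:00:25:B5:xx:yy:zz convention
20:00:00:25:B5:00:00:01   node WWN, server 1               (xx = 00)
20:00:00:25:B5:01:00:01   vhba1 pWWN, server 1, fabric A   (xx = 01)
20:00:00:25:B5:02:00:01   vhba2 pWWN, server 1, fabric B   (xx = 02)
20:00:00:25:B5:01:00:02   vhba1 pWWN, server 2, fabric A
20:00:00:25:B5:02:00:02   vhba2 pWWN, server 2, fabric B
```

With this scheme, a SAN administrator looking at a fabric login can tell at a glance which fabric and which server a pWWN belongs to.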

c. In the Create Boot Policy wizard, enter a name and a description for the boot policy. Then, under vHBA, select Add SAN Boot and add the vHBA vhba1, defined earlier.
d. In the Add SAN Boot window, add the boot target WWN. Be sure to type the name correctly, and double-check the target boot WWN that the SAN administrator provided.

e. Make sure the boot policy configuration looks similar to the following Create Boot Policy wizard screen in Cisco UCS Manager. Note that if high availability is needed, you can add a secondary boot device type for SAN booting.
f. Select the required service profile and assign the boot policy as shown in the following screen.
g. Because you are changing a service profile that is associated with a blade, the blade will need to be rebooted. Cisco UCS Manager will do that in the background; check the server status for the progress.
9. Configure the zoning for the Cisco Unified Computing System servers. Configure zoning using the Cisco Fabric Manager Zoning wizard, which can be invoked as shown here. The zone sets database screen shows the active zone sets, the zones, and all the end devices. Each end device is shown as a pWWN prefixed with a switch name; the Cisco Unified Computing System servers are each prefixed with a Cisco UCS 6100 Series switch name.
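The zoning described in step 9 can also be configured from the MDS NX-OS CLI rather than the Zoning wizard. The following is a sketch; the switch name, zone and zone set names, VSAN 1003, and both pWWNs are example values:

```
! Zone a UCS server vHBA with its boot target (all names and pWWNs are examples).
mds-switch# configure terminal
mds-switch(config)# zone name ucs-blade1-boot vsan 1003
mds-switch(config-zone)# member pwwn 20:00:00:25:b5:01:00:01
mds-switch(config-zone)# member pwwn 50:06:01:60:3c:e0:11:22
mds-switch(config-zone)# exit
mds-switch(config)# zoneset name fabric-a vsan 1003
mds-switch(config-zoneset)# member ucs-blade1-boot
mds-switch(config-zoneset)# exit
mds-switch(config)# zoneset activate name fabric-a vsan 1003
```

The first pWWN member follows the vHBA WWN convention recommended earlier; the second represents the storage array's target port. Activating the zone set is what actually enforces the new zone.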

The device list also shows the storage ports that need to be zoned with the Cisco Unified Computing System servers. Create a zone by choosing Insert Zone from the Create Zone menu, and add the zone to the active zone set. The active zone set must then be reactivated before the changes are enforced.
For More Information
Cisco MDS 9000 Family Fibre Channel switches: http://www.cisco.com/en/us/products/hw/ps4159/ps4358/index.html
Cisco MDS 9000 Family switching modules: http://www.cisco.com/en/us/products/ps5991/prod_module_series_home.html
Bandwidth management on Cisco MDS 9000 Family switches: http://cisco.com/en/us/docs/switches/datacenter/mds9000/sw/4_1/configuration/guides/cli_4_1/gen2.html#wp1699226
Quality of service on Cisco MDS 9000 Family switches: http://www.cisco.com/en/us/prod/collateral/modules/ps5991/prod_white_paper0900aecd8044c7f3.html

Cisco Unified Computing System configuration guide: http://www.cisco.com/en/us/products/ps10281/products_installation_and_configuration_guides_list.html
Cisco Unified Computing System configuration examples and technical notes: http://www.cisco.com/en/us/products/ps10281/prod_configuration_examples_list.html
Printed in USA C11-586100-00 02/10