FC Cookbook for HP Virtual Connect


Technical white paper

FC Cookbook for HP Virtual Connect
Version 4.45 Firmware Enhancements
January 2016

Click here to verify the latest version of this document

Table of contents

Change History
Abstract
Considerations and concepts
VC SAN module descriptions
Virtual Connect Fibre Channel support
Supported VC SAN Fabric configuration
Multi-enclosure stacking configuration
Scenario 1: Simplest scenario with multipathing
    Overview, Benefits, Considerations, Requirements, Installation and configuration, Blade Server configuration, Verification, Summary
Scenario 2: VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric
    Overview, Benefits, Considerations, Requirements, Installation and configuration, Blade Server configuration and verification, Summary
Scenario 3: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers
    Overview, Benefits, Considerations, Requirements, Installation and configuration, Verification of the SAN Fabrics configuration, Blade Server configuration, Verification, Summary
Scenario 4: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to several redundant SAN fabrics with different priority tiers
    Overview, Benefits, Considerations, Requirements, Installation and configuration, Verification of the SAN Fabrics configuration, Blade Server configuration, Verification, Summary
Scenario 5: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port module
    Overview, Benefits, Considerations, Virtual Connect FlexFabric Uplink Port Mappings, Requirements, Installation and configuration, Verification of the SAN Fabrics configuration, Blade Server configuration, Summary
Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems
    Overview, Benefits, Considerations, Physical view of a Direct-Attach configuration, Requirements, Installation and configuration, Configuration of the 3PAR controller ports, Verification of the 3PAR connection, Blade Server configuration, OS Configuration, Summary
Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems
    Overview, Considerations, Physical view of a mixed Flat SAN and Fabric-Attach configuration, Requirements, Installation and configuration, Configuration of the 3PAR controller ports, Verification of the 3PAR connection, Server Profile configuration, OS Configuration, Summary
Scenario 8: Adding VC fabric uplink ports with Dynamic Login Balancing to an existing VC Fabric-Attach SAN fabric
    Overview, Benefits, Initial configuration, Adding an additional uplink port, Login Redistribution, Verification, Summary
Scenario 9: Cisco MDS Dynamic Port VSAN Membership
    Overview, Benefits, Requirements, Installation and configuration, Summary
Scenario 10: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module
    Overview, Benefits, Considerations, Virtual Connect FlexFabric-20/40 F8 Uplink Port Mappings, Requirements, Installation and configuration, Verification of the SAN Fabrics configuration, Blade Server configuration, Summary
Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16Gb 24-Port Fibre Channel Module
    Overview, Benefits, Considerations, Compatibility support, Requirements, Installation and configuration, Verification of the trunking configuration, Blade Server configuration, Trunking information under VCM, Summary
Appendix A: Blade Server configuration with Virtual Connect Fibre Channel Modules
    Defining a Server Profile with FC Connections, using the GUI; Defining a Server Profile with FC Connections, via CLI; Defining a Boot from SAN Server Profile using the GUI; Defining a Boot from SAN Server Profile via CLI
Appendix B: Blade Server configuration with Virtual Connect FlexFabric Modules
    Defining a Server Profile with FCoE Connections, using the GUI; Defining a Server Profile with FCoE Connections, via CLI; Defining a Boot from SAN Server Profile using the GUI; Defining a Boot from SAN Server Profile using the CLI
Appendix C: Brocade SAN switch NPIV configuration
    Enabling NPIV using the GUI, Enabling NPIV using the CLI, Recommendations
Appendix D: Cisco MDS SAN switch NPIV configuration
    Enabling NPIV using the GUI, Enabling NPIV using the CLI
Appendix E: Connecting VC FlexFabric to Cisco Nexus 50xx and 55xx series
    Support information, Fibre Channel functions on Nexus, Configuration of the VC SAN Fabric, Configuration of the Nexus switches
Appendix F: Connectivity verification and testing
    Uplink Port connectivity verification, Uplink Port connection issues, Server Port connectivity verification, Server Port connection issues, Connectivity verification from the upstream SAN switch, Testing the loss of uplink ports
Appendix G: Boot from SAN troubleshooting
    Verification during POST, Troubleshooting
Appendix H: Fibre Channel Port Statistics
    FC Uplink Port statistics, FC Server Port statistics
Acronyms and abbreviations
Support and Other Resources
    Contacting HP, Documentation feedback, Related documentation

Change History

The following Change History log contains a record of changes made to this document:

January 2016, Edition 3 Rev 7
- Added trunking support information with the VC 16Gb 24-port Fibre Channel Module in Scenario 11

August 2015, Edition 3 Rev 6
- Updated some URLs with hpe.com
- Changed the FillWord recommendation for 8Gb connections
- Other minor changes

February 2015, Edition 3 Rev 5
- Added information on the HP VC 16Gb 24-Port FC Module
- New scenario (11): Enhanced N-port trunking with HP Virtual Connect 16Gb 24-Port Fibre Channel Module
- Added the HP VC 16Gb 24-Port FC Module to the Fabric-Attach support list
- Changed the FC statistics support information (VC 8Gb/16Gb 24-port FC module statistics are available with VC 4.40)
- Added information on FC statistics available through SNMP
- Other minor changes

September 2014, Edition 3 Rev 4
- Added HP 3PAR Persistent Ports support
- Added the VC FlexFabric-20/40 F8 Module to the Flat SAN and Fabric-Attach support lists
- Other minor changes

May 2014, Edition 3 Rev 3
- Added information on the VC FlexFabric-20/40 F8 Module
- New scenario with HP Virtual Connect FlexFabric-20/40 F8 Modules
- Other minor changes

November 2013, Edition 3 Rev 2
- Added 3PAR StoreServ 7000 series support
- New section on the maximum number of c-Class enclosures connected to an HP 3PAR storage system
- Added connection best practices and physical views for configurations where multiple FlexFabric uplinks connect to 3PAR controller nodes

Abstract

This guide provides concepts and implementation steps for integrating HP Virtual Connect Fibre Channel modules and HP Virtual Connect FlexFabric modules into an existing SAN fabric. The scenarios in this guide cover a range of typical building blocks to use when designing a solution. For more information on BladeSystem and Virtual Connect, see the HP Virtual Connect Information Library.

Considerations and concepts

The following concepts apply when using the HP Virtual Connect Fibre Channel or FlexFabric modules:

- To manage an HP Virtual Connect Fibre Channel Module, you must also install an HP Virtual Connect Ethernet Module. The VC Ethernet module contains the processor on which the Virtual Connect Manager firmware runs.
- Virtual Connect now supports direct storage attachment to reduce storage networking costs and to remove the complexity of FC switch management.
- NPIV support is required in the FC switches that connect to the Virtual Connect Fibre Channel and FlexFabric modules.
- Since VC 4.01, Fibre Channel over Ethernet (FCoE) can be used as an alternative to native Fibre Channel (FC). For more information on configuring FCoE switches and scenarios, see the FCoE Cookbook for HP Virtual Connect in the Virtual Connect Information Library.

VC SAN module descriptions

HP Virtual Connect Fibre Channel Modules:

HP Virtual Connect 8Gb 20-Port Fibre Channel Module:
- 4 uplink ports, 8Gb FC [2/4/8 Gb]
- 16 downlink ports, 8Gb FC [1/2/4/8 Gb]
- 128 NPIV connections per server
- 255 NPIV connections per uplink port
(In other words, up to 128 virtual machines running on the same physical server can access separate storage resources.)

HP Virtual Connect 8Gb 24-Port Fibre Channel Module:
- 8 uplink ports, 8Gb FC [2/4/8 Gb]
- 16 downlink ports, 8Gb FC [1/2/4/8 Gb]
- 255 NPIV connections per server
- 255 NPIV connections per uplink port
(In other words, up to 255 virtual machines running on the same physical server can access separate storage resources.)

HP Virtual Connect 16Gb 24-Port Fibre Channel Module:
- 8 uplink ports, 16Gb FC [4/8/16 Gb]
- 16 downlink ports, 16Gb FC [8/16 Gb]
- 255 NPIV connections per server
- 255 NPIV connections per uplink port
(In other words, up to 255 virtual machines running on the same physical server can access separate storage resources.)

16Gb operation requires a c7000 Platinum Enclosure (SKUs 6XXXXX-B21 and 7XXXXX-B21 except 6866X-B21). If the module is inserted in a non-Platinum enclosure, the maximum downlink speed supported is 8Gb, regardless of the HBA. The enclosure SKU can be found in the OA Rack Overview screen; for more information, see the Rack View section in the OA user guide. Note: The SKU is listed as a part number.

HP Virtual Connect FlexFabric Modules:

HP Virtual Connect FlexFabric 10Gb/24-Port Module:
- 4 uplink ports: FC [2/4/8 Gb] or FCoE [10 Gb]
- 16 downlink ports [FlexHBA: any speed]
- 255 NPIV connections per server
- 255 NPIV connections per uplink port
- X1-X4 uplink ports available for FC/FCoE/Enet connection [FC: 2/4/8 Gb] [FCoE: 10 Gb] [Enet: 1/10 Gb]
(In other words, up to 255 virtual machines running on the same physical server can access separate storage resources.)

HP Virtual Connect FlexFabric-20/40 F8 Module:
- 8 uplink ports: FC [2/4/8 Gb] or FCoE [10 Gb]
- 16 downlink ports [FlexHBA: any speed]
- 255 NPIV connections per server
- 255 NPIV connections per uplink port
- X1-X8 uplink ports available for FC/FCoE/Enet connection [FC: 2/4/8 Gb] [FCoE: 10 Gb] [Enet: 1/10 Gb]
- X5-X6 paired FlexPorts can only be configured to carry the same traffic type (either FC or Ethernet)
- X7-X8 paired FlexPorts can only be configured to carry the same traffic type (either FC or Ethernet)
(In other words, up to 255 virtual machines running on the same physical server can access separate storage resources.)

Virtual Connect Fibre Channel support

Virtual Connect connectivity stream documents that describe the different supported configurations are available from the SPOCK webpage (an HP Passport account is required; if you do not have one, follow the instructions on the webpage). For any specific supported Fabric OS, SAN-OS, or NX-OS versions for SANs that involve third-party equipment, consult the equipment vendor.

Virtual Connect
Virtual Connect support matrix documents are available from the Virtual Connect SPOCK webpage. Fibre Channel connectivity stream documents are available in the Virtual Connect FC Modules section.

Since VC 4.01, Virtual Connect provides the ability to pass FCoE to an external FCoE-capable network switch. However, this guide focuses only on Virtual Connect with Fibre Channel. For FCoE connectivity guidance, see the FCoE Cookbook for HP Virtual Connect in the Virtual Connect Information Library.

FC and FCoE switches

The SPOCK Switch page contains the supported configurations of the upstream switch connected to Virtual Connect. Details about firmware and OS versions are usually provided.

Supported VC SAN Fabric configuration

Beginning with Virtual Connect 3.70, there are two supported VC SAN fabric types: Fabric-Attach fabrics and Direct-Attach fabrics. A Fabric-Attach fabric uses the traditional method of connecting VC-FC and VC FlexFabric modules, which requires an upstream NPIV-enabled SAN switch. A Direct-Attach fabric reduces storage networking costs and removes the complexity of FC switch management by enabling you to directly connect a VC FlexFabric module to a supported HP 3PAR Storage System.

A VC SAN fabric can only contain uplink ports of one type, either attached to an external SAN switch or directly connected to a supported storage device. VC isolates ports that do not match the specified fabric type. An isolated port degrades the VC SAN fabric status, as well as all associated server profiles and the overall VC domain status.

Fabric-Attach support

Fabric-Attach Fibre Channel (FC) prerequisites
- The Fabric-Attach fabric is supported with the HP VC FlexFabric 10Gb/24-Port Module, HP VC FlexFabric-20/40 F8 Module, HP VC 16Gb 24-Port Fibre Channel Module, HP VC 8Gb 24-Port Fibre Channel Module, HP VC 8Gb 20-Port Fibre Channel Module, and HP VC 4Gb 20-Port Fibre Channel Module.
- The Fabric-Attach fabric is only supported if directly connected to Fibre Channel SAN switches.
- NPIV support is required in the FC switches that connect to the Virtual Connect modules.

Note: Visit the SPOCK website to get the latest Fabric-Attach support information; see the Virtual Connect Fibre Channel support section.

Fabric-Attach Fibre Channel (FC) details
- The Fabric-Attach option is the default fabric type.
- Select Fabric-Attach if the FlexFabric module (or VC-FC module) is connected to a Fibre Channel SAN switch.
- N_Port with NPIV is used to connect to the SAN fabric.
- Once a fabric is defined, its type cannot be changed until the fabric is deleted and recreated.

Virtual Connect Fabric-Attach SAN Fabric support

When using the Virtual Connect Fabric-Attach mode, participating uplinks must be connected to the same SAN fabric in order to correctly form a Virtual Connect SAN fabric.

Figure 1: Participating uplinks must connect to the same SAN fabric

Figure 2: A single SAN fabric can consist of several SAN switches. In the following configuration, each SAN fabric has two SAN switches, and the previous "participating uplinks must connect to the same SAN fabric" prerequisite is met.

Figure 3: Connecting uplinks of a single VC SAN fabric to two different SAN fabrics is not supported.

To provide granular control over which server blades use each uplink port, different VC SAN fabrics can be connected to the same SAN fabric. This configuration enables the distribution of servers according to I/O workloads, as shown in Figure 4.

Figure 4: Servers distributed according to I/O workloads

Direct-attached storage systems are not supported with the Fabric-Attach mode.

Figure 5: Direct-attached storage systems are not supported

Figure 6 and Figure 7 show two typical, supported configurations for Virtual Connect Fibre Channel modules and Virtual Connect FlexFabric modules.

Figure 6: Typical Fabric-Attach SAN configuration with VC-FC 8Gb 20-port modules. Redundant paths, server-to-fabric uplink ratio 4:1.

Figure 7: Typical Fabric-Attach SAN configuration with VC FlexFabric modules. Redundant paths, server-to-fabric uplink ratio 4:1.

Multiple Fabric-Attach fabric support

Support for multiple Fabric-Attach SAN fabrics per VC-FC and FlexFabric module allows the storage administrator to assign any of the available VC SAN uplinks to a different SAN fabric and dynamically assign server HBAs to the desired SAN fabric (a CLI sketch of this follows Figure 10).

Figure 8: Multiple SAN Fabric-Attach fabrics support

Figure 9: The Virtual Connect 8Gb 20-Port Fibre Channel module supports up to 4 SAN fabrics

Figure 10: The Virtual Connect 8Gb 24-Port and 16Gb 24-Port Fibre Channel modules support up to 8 SAN fabrics
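The following minimal sketch uses the VC CLI add fabric syntax (shown in Scenario 1) to create several Fabric-Attach fabrics on the same module, each owning a different uplink port. The fabric names, bay number, and port numbers are assumptions for illustration; adjust them to your cabling.

    # Assumption: a VC-FC module in interconnect bay 3; names and ports are examples.
    # Each uplink port is placed in its own Fabric-Attach SAN fabric.
    add fabric Fabric-1 Bay=3 Ports=1
    add fabric Fabric-2 Bay=3 Ports=2
    add fabric Fabric-3 Bay=3 Ports=3
    add fabric Fabric-4 Bay=3 Ports=4
    # List the fabrics and their uplink assignments
    show fabric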

Figure 11: The Virtual Connect FlexFabric module supports up to 4 SAN fabrics (on the uplink ports available for FC connection)

Figure 12: The Virtual Connect FlexFabric-20/40 F8 module supports up to 8 SAN fabrics

NPIV requirements for VC Fabric-Attach fabrics

Uplink ports within a VC Fabric-Attach fabric can only be connected to a Fibre Channel switch that supports N_Port ID Virtualization (NPIV). The VC-FC and VC FlexFabric modules are FC standards-based and are compatible with all other NPIV standards-compliant switch products. Due to the use of NPIV, special features that are available in standard FC switches, such as Brocade ISL Trunking, Cisco SAN Port Channels, QoS, and extended distances, are not supported with VC-FC and VC FlexFabric modules. For more information about NPIV support for your Fibre Channel switch, refer to the switch vendor documentation.

The SAN switch ports connecting to the VC fabric uplink ports must be configured to accept NPIV logins. For additional information about NPIV configuration, see "Appendix C: Brocade SAN switch NPIV configuration", "Appendix D: Cisco MDS SAN switch NPIV configuration", or "Appendix E: Cisco NEXUS SAN switch NPIV configuration", depending on your switch model.
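For quick reference, the sketch below shows how NPIV acceptance is typically enabled on the upstream switch. The port number is an example only, the exact Brocade command syntax varies with the Fabric OS version, and the full procedures are in Appendix C and Appendix D.

    # Brocade FOS (example port 10; syntax varies by FOS version, see Appendix C):
    portcfgnpivport 10 1      # enable NPIV on the port connected to the VC uplink
    portcfgshow 10            # verify that the NPIV capability is ON
    # Cisco MDS (NPIV is enabled switch-wide, see Appendix D):
    configure terminal
    feature npiv
    end
    show npiv status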

Port Group in a Fabric-Attach fabric

Virtual Connect Manager version 1.3 and later allows users to group multiple VC fabric uplinks logically into a Virtual Connect fabric when attached to the same Fibre Channel SAN fabric.

Figure 13: Fabric-Attach fabrics using 4 uplink ports

There are several benefits to fabric port grouping:
- Bandwidth is increased.
- The server-to-uplink ratio is improved.
- Better redundancy is provided with automatic port failover.

Increased bandwidth

Depending on the VC module and the number of uplinks used, the server-to-uplink ratio (oversubscription ratio) is adjustable to 2:1, 4:1, 8:1, or 16:1. As few as two or as many as 16 servers share one physical link on a fully populated enclosure with 16 servers. Using multiple uplinks reduces the risk of congestion. A CLI sketch of grouping several uplinks into one fabric follows the figures below.

Figure 14: 2:1 oversubscription with Virtual Connect 8Gb 24-Port and 16Gb 24-Port Fibre Channel modules (16 servers, 8 uplinks) and with Virtual Connect FlexFabric-20/40 F8 modules

Figure 15: 4:1 oversubscription with Virtual Connect 8Gb 20-Port Fibre Channel modules (16 servers, 4 uplinks) and with Virtual Connect FlexFabric modules
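Conversely, the same add fabric syntax can group several uplink ports of one module into a single fabric to obtain one of the ratios above. This minimal sketch assumes a VC-FC 8Gb 20-port module in bay 5; the fabric name and port list are illustrative only.

    # Group all four uplinks of the bay 5 module into one VC SAN fabric
    # (4:1 ratio on a fully populated enclosure with 16 servers).
    add fabric Fabric-A Bay=5 Ports=1,2,3,4
    # Verify the fabric and the state of its uplink ports
    show fabric Fabric-A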

Dynamic Login Balancing Distribution and increased redundancy

When VC fabric uplinks are grouped into a single fabric, the module uses Dynamic Login Balancing Distribution to load balance the server connections across all available uplink ports. The module uses the port with the least number of logins across the VC SAN fabric or, when the number of logins is equal, VC makes a round-robin decision. VC version 3.00 and later does not offer Static Uplink Login Distribution.

Figure 16: Dynamic Login Balancing Distribution with a VC-FC module

Uplink port path failover

The module uses Dynamic Login Balancing Distribution to provide uplink port path failover, which enables server connections to fail over within the Virtual Connect fabric. If a fabric uplink port in the group becomes unavailable, hosts logged in through that uplink are automatically reconnected to the fabric through the remaining uplinks in the group, resulting in auto-failover.

Figure 17: Uplink port path failover

This automatic failover saves time and effort whenever there is a link failure between an uplink port on VC and an external fabric, and it allows a smooth transition without much disruption to the traffic. However, the hosts must perform a re-login before resuming their I/O operations.

Login Redistribution

It might be necessary to redistribute server logins if an uplink that was previously down is now available, if you added an uplink to a fabric, or if the number of logins through each available uplink has become unbalanced for any reason. Virtual Connect Login Redistribution supports two modes, Manual or Automatic, and is enabled on a per-Fabric-Attach-fabric basis.

Table 1: Manual and Automatic Login Redistribution support

Login Re-Distribution Mode | Auto-failover* | Auto-failback**                     | VC-FC support | VC FlexFabric and VC FlexFabric-20/40 F8 support
MANUAL                     | YES            | NO                                  | YES           | YES (default)
AUTOMATIC                  | YES            | YES, after the link stability delay | NO            | YES

* When a port in the SAN fabric group becomes unavailable.
** When a failed port returns to a good working condition.

Manual Login Re-Distribution
- Manual is the default for all FC modules.
- You must initiate a Login Re-Distribution request through the VC GUI or CLI interfaces.
- For all standard VC-FC modules, this is the only supported mode.

To manually redistribute the logins, go to the SAN Fabrics screen, select the Edit menu corresponding to the SAN fabric, and click Redistribute. To manually redistribute logins on a VC SAN fabric through the VC CLI, enter:

set fabric MyFabric loadbalance

The Redistribute option is only available for a VC SAN fabric with Manual Login Re-Distribution.
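As a minimal sketch of the CLI workflow (the fabric name is an example), the fabric can be reviewed before and after the redistribution request:

    # Review the fabric and how logins are currently spread across its uplinks
    show fabric Fabric-1
    # Trigger a manual login redistribution on that fabric
    set fabric Fabric-1 loadbalance
    # Check the fabric again; affected hosts re-login as described above
    show fabric Fabric-1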

Automatic Login Re-Distribution

Automatic Login Redistribution is only available with VC FlexFabric and VC FlexFabric-20/40 F8 modules in a Fabric-Attach fabric. With VC-FC modules, Login Redistribution is manual only. When Automatic Login Redistribution is selected, the VC FlexFabric module initiates Login Re-Distribution automatically when the specified Link Stability time interval expires. The Link Stability Interval parameter is defined on a per-VC-domain basis in the Fibre Channel WWN Settings, Miscellaneous tab. This interval defines the number of seconds that the VC fabric uplinks have to stabilize before the VC module attempts to load-balance the logins.

Flat SAN Support

With HP Virtual Connect for 3PAR with Flat SAN technology, you can connect HP 3PAR Storage Systems directly to HP Virtual Connect FlexFabric modules with no need for an intermediate SAN fabric. This significantly reduces complexity and cost while reducing latency between servers and storage by eliminating the need for multitier storage area networks (SANs). This direct attachment lets you worry less about storage solution complexity. The need for an expensive intermediate SAN fabric to create the connection between Virtual Connect and HP 3PAR Storage Systems no longer exists. In addition to being much more cost-efficient, management of your storage solution is made easier, valuable IT resources are freed up, and costs are reduced.

Flat SAN support
- Direct-Attach fabric is only supported with the HP Virtual Connect FlexFabric 10Gb/24-Port and HP Virtual Connect FlexFabric-20/40 F8 modules.
- Direct-Attach fabric is only supported if directly connected to HP 3PAR storage systems. Supported storage systems are HP 3PAR StoreServ 10400/10800, 7000/7450, T400/T800, or F200/F400.
- Minimum required/supported: HP Virtual Connect 3.70.
- Minimum required/supported: HP 3PAR InForm OS v3.1.1 MU1.

Unsupported storage systems:
- HP MSA/EVA/XP
- HP StoreOnce B6200
- HP LeftHand Storage
- HP Tape Storage
- HP Virtual Tape Libraries
- 3rd-party storage solutions such as EMC, IBM, Hitachi, NetApp, and so on

Flat SAN details
- The Flat SAN option is available only after adding a FlexFabric module port to a VC SAN fabric. No additional licenses or fees apply.
- The Virtual Connect SAN fabric uplink port is set as an F_Port (F for Fabric).
- FlexFabric modules run lightweight FC SAN services such as name server, zoning, etc.
- Once a fabric is defined, its type cannot be changed until the fabric is deleted and recreated (see the CLI sketch following this list).

Note: Visit the SPOCK website to get the latest Flat SAN support information; see the Virtual Connect Fibre Channel support section.
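As an illustrative sketch only: a Direct-Attach fabric is created with the same add fabric command, and changing a fabric's type means deleting and recreating it. The fabric name, bay, ports, and in particular the Type parameter spelling are assumptions; check the Virtual Connect CLI user guide for the exact syntax of your firmware version.

    # Assumption: FlexFabric module in bay 1, uplinks X1-X2 cabled to 3PAR host ports.
    # The Type parameter name/value is an assumption; verify it in the VC CLI guide.
    add fabric 3PAR-A Bay=1 Ports=1,2 Type=DirectAttach
    show fabric 3PAR-A
    # A fabric's type cannot be edited in place; delete and recreate it to change it.
    remove fabric 3PAR-A
    add fabric 3PAR-A Bay=1 Ports=1,2 Type=FabricAttach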

Figure 18: HP Virtual Connect for 3PAR with Flat SAN technology allows a direct attachment to HP 3PAR Storage Systems (HP 3PAR StoreServ 7450, Flat SAN Direct-Attach uplink connections, HP Virtual Connect FlexFabric modules, HP BladeSystem c7000)

Maximum number of c-Class enclosures connected to an HP 3PAR storage system

Connecting several BladeSystem c-Class enclosures to the same 3PAR storage system is fully supported. The number of host ports available on the HP 3PAR controller nodes defines the maximum number of enclosures supported. With 4 controller nodes, a 3PAR StoreServ 10400 can support up to 96 Fibre Channel host ports. This means that 24 enclosures can be connected to the storage system when using 4 Virtual Connect uplinks per enclosure. With 2 controller nodes, a 3PAR StoreServ 7200 can support up to 12 Fibre Channel host ports. This means that 3 enclosures can be connected to the storage system when using 4 Virtual Connect uplinks per enclosure.

Figure 19: Maximum c-Class enclosures connected to an HP 3PAR StoreServ 10400 (24 enclosures maximum) and a StoreServ 7200 (3 enclosures maximum), each enclosure with dual VC FlexFabric modules

Table 2: Maximum number of c-Class enclosures connected to a 3PAR storage system when using 4 Virtual Connect uplinks per enclosure (with 2 x FlexFabric modules):

Storage Array   | Number of FC host ports | Maximum number of enclosures
StoreServ 10800 | 192                     | 48 (192 / 4)
StoreServ 10400 | 96                      | 24 (96 / 4)
T800            | 128                     | 32 (128 / 4)
T400            | 64                      | 16 (64 / 4)
StoreServ 7400  | 24                      | 6 (24 / 4)
StoreServ 7200  | 12                      | 3 (12 / 4)
F400            | 24                      | 6 (24 / 4)
F200            | 12                      | 3 (12 / 4)

Important! To avoid storage networking issues and potential loss of data associated with duplicate WWNs on the 3PAR system, all VC domains connected to the same 3PAR Storage System must use different HP pre-defined ranges of WWN addresses.

Virtual Connect Direct-Attach SAN Fabric support

When using the Virtual Connect Direct-Attach mode, participating uplinks can be directly connected to the same 3PAR Storage System in order to correctly form a Virtual Connect SAN fabric.

Figure 20: Direct-Attach SAN fabric uplinks connected to a 3PAR array

Note: When a Virtual Connect Direct-Attach fabric uses multiple uplinks, the concepts of login balancing and login redistribution are not applicable; they apply only to uplinks within a VC Fabric-Attach fabric.

The zoning between the server ports and the VC SAN uplink ports is automatically configured based on the VC SAN fabric and server profile definitions. This implicit zoning restricts servers connected to a given Direct-Attach fabric to accessing only the storage attached to uplinks in the same Direct-Attach fabric. Both Name Server scans and RSCN messages are limited to this zone.

A VC SAN fabric may only contain uplink ports of one type. VC isolates ports that do not match the specified fabric type. An isolated port degrades the fabric state, as well as all associated profiles and the domain state.

HP 3PAR Peer Motion (which allows non-disruptive data migration between any 3PAR storage arrays) is not supported at this time with Direct-Attach Flat SAN. For the time being, Peer Motion requires an external SAN fabric.

HP 3PAR Persistent Ports (which provides transparent and uninterrupted failover in response to firmware upgrades, a node failure, an array port being taken offline administratively, or a hardware failure in the SAN fabric that causes the storage array to lose physical connectivity to the fabric) is supported if the configuration fulfills the necessary requirements. For more information, see the HP 3PAR StoreServ Persistent Ports technical white paper.

To give more granular control over which server blades use each uplink port, several Direct-Attach VC SAN fabrics can be connected to the same 3PAR storage system. This configuration can enable the distribution of servers according to their I/O workloads.

Figure 21: Multiple Direct-Attach VC SAN fabrics connected to the same 3PAR array

Up to four HP 3PAR Storage Systems can be directly connected to a redundant pair of VC FlexFabric modules; this is because only 4 uplink ports are available for FC connection on the FlexFabric module.

Figure 22: Maximum number of 3PAR arrays connected to a VC domain

To support more than four direct-attached 3PAR arrays, it is necessary to add more pairs of VC FlexFabric modules to the c7000 chassis.

Figure 23: Direct-Attach configuration with more than one pair of VC modules

For more granularity and control, the 3PAR Storage Systems can be connected to different VC SAN fabrics; Figure 24 shows another supported configuration.

Figure 24: Different SAN fabrics can be used to connect 3PAR arrays

Unsupported Virtual Connect Direct-Attach SAN Fabric configurations

When using the Direct-Attach mode, participating uplinks must be directly connected to a 3PAR Storage System. Any other storage systems are not supported.

Figure 25: Direct-Attach uplinks can only be connected to a 3PAR array (not to an MSA, EVA, XP, or storage from other vendors)

The Direct-Attach mode does not support connecting participating uplinks to a SAN fabric.

Figure 26: Direct-Attach uplinks cannot be connected to a SAN fabric

Physical view of a Direct-Attach Flat SAN configuration

Figure 27: Virtual Connect Flat SAN with VC FlexFabric modules for HP 3PAR Storage Systems. Redundant paths, server-to-Direct-Attach uplink ratio 16:1 (HP 3PAR StoreServ 10400, HP BladeSystem c7000)

Figure 28: Virtual Connect Flat SAN with VC FlexFabric modules for HP 3PAR Storage Systems. Redundant paths, server-to-Direct-Attach uplink ratio 8:1 (HP 3PAR StoreServ 10400, controller nodes 0 and 1, HP BladeSystem c7000)

Note: The 8:1 oversubscription with 2 FC cables per FlexFabric module is the most common use case.

Note: To improve redundancy, it is recommended to connect the FC cables in a crisscross manner (i.e., each FlexFabric module is connected to two different controller nodes).

Remote replication design with Virtual Connect Flat SAN technology

HP 3PAR data replication services provide real-time replication and disaster recovery technology that allows the protection and sharing of data. You can implement the data replication services between HP 3PAR storage arrays to distribute data between local and remote arrays or data centers, even if they are geographically dispersed. HP 3PAR Remote Copy can be set up between several direct-attached 3PAR storage systems, knowing that the maximum number of supported arrays is 4 sources to 1 target, or 1 source to 2 targets. Remote replication is delivered using either RCIP (Remote Copy over IP) or RCFC (Remote Copy over Fibre Channel). HP recommends using RCIP with a Direct-Attach 3PAR configuration because it does not require a SAN fabric; use of a SAN fabric can result in increased complexity and IT infrastructure costs. The Remote Copy over IP port on the HP 3PAR StoreServ 10000 controller node is port E1 (RJ45/1Gb).

Figure 29: HP 3PAR remote replication services with Virtual Connect Flat SAN technology (Site 1 and Site 2, each an HP BladeSystem c7000 with Direct-Attach SAN uplink connections to a 3PAR storage system; native IP-based Remote Copy replication link between the primary and secondary arrays)

Supported distances and latencies:
- Synchronous IP: maximum distance 210 km / 130 miles; maximum supported latency 2.6 ms
- Asynchronous Periodic IP: long-distance implementation; maximum supported latency 150 ms round trip

Note: RCFC is supported, but it requires an external SAN fabric.

Details about the 3PAR controller connectivity

HP 3PAR controller nodes are always installed in pairs, and a system can support from 2 to 8 controllers. 9 PCI-e slots are available per controller node and are used for both host and drive chassis connections using HP 3PAR host/disk adapters. As a best practice, HP recommends using PCI slots 2, 5, 8, 1, 4, and 7 for the Direct-Attach FlexFabric connections; these PCI slots are also recommended for the host connections. The 9 PCI-e slots are equally balanced across 3 PCI-e buses, so it is better to connect the FlexFabric uplinks in that order (2, 5, 8, 1, 4, 7). If only two HP 3PAR host adapters are available, the connections should be load balanced across the two adapters. The Remote Copy port (port E1) used with a Direct-Attach 3PAR configuration is also shown in Figure 30.

Figure 30: HP 3PAR V-Class (StoreServ 10000) controller node recommended connections to c-Class enclosures: PCI slots recommended for Virtual Connect FlexFabric connections, and the E1 Remote Copy Ethernet 1Gb port (RCIP). Note: This diagram shows only the first controller's connections.

Mixed Fabric-Attach and Flat SAN mode support

You can mix Virtual Connect Fabric-Attach and Direct-Attach Flat SAN fabrics on the same Virtual Connect domain. This mix is useful if an administrator needs to attach additional storage systems that are not supported today with the Direct-Attach mode.

Figure 31: Mixed Fabric-Attach and Direct-Attach SAN fabrics configuration (Direct-Attach uplinks as F_Ports to a 3PAR Storage System and StoreOnce Backup System; Fabric-Attach uplinks as N_Ports to fabric-attached storage systems such as MSA, EVA, and XP)

Mixing Fabric-Attach and Direct-Attach fabrics requires the creation of two different fabrics, because a VC SAN fabric can only contain uplink ports of one type. The configuration shown in Figure 32 is therefore not supported.

Figure 32: VC SAN fabric uplinks cannot be connected at the same time to a SAN fabric and to a storage system (true for both Fabric-Attach and Direct-Attach fabrics)

Physical view of a mixed Flat SAN and Fabric-Attach configuration

Figure 33: Typical mixed Fabric-Attach and Direct-Attach FC configuration with VC FlexFabric modules. Redundant paths, server-to-Fabric-Attach uplink ratio 8:1, server-to-Direct-Attach uplink ratio 8:1 (HP EVA 8400 and HP XP behind the Fabric-1/Fabric-2 SAN switches, HP 3PAR StoreServ 10400 direct-attached, HP BladeSystem c7000)

Backup design for a direct-attached 3PAR solution

Disk or tape backup systems cannot be directly attached to FlexFabric modules. Instead, you must use a SAN fabric or a LAN-based solution, both of which are common for backup solutions.

Figure 34: Backup design example for a direct-attached 3PAR solution (disk backup: HP StoreOnce B6000 Backup Systems; tape backup: HP StoreEver ESL G3 Tape Library; HP 3PAR StoreServ 10800 direct-attached; Fabric-1/Fabric-2 SAN switches for the Fabric-Attach uplinks)

For more information about 3PAR implementation, support, and services, contact your HP representative. HP 3PAR documentation is available on the HP website.

Multi-enclosure stacking configuration

Virtual Connect version 2.10 and higher supports the connection of up to four c7000 enclosures, which can reduce the number of network connections per rack and also enables a single VC Manager to control multiple enclosures. For more information, see the HP Virtual Connect Multi-Enclosure Stacking Reference Guide in the Related documentation section at the end of this document.

Multi-enclosure stacking with Fabric-Attach FC storage

When utilizing Virtual Connect Fabric-Attach FC storage in a VC multi-enclosure environment, it is important to remember that:
- All Virtual Connect Fibre Channel modules (or each SAN uplink port of the VC FlexFabric modules) must be connected to the SAN fabrics.
- When Virtual Connect Fibre Channel modules (or FlexFabric modules using SAN connections) are implemented in a multi-enclosure domain, all enclosures must have identical VC-FC module (or VC FlexFabric module) placement and cabling. This ensures that profile mobility is maintained, so that when a profile is moved from one enclosure to another within the stacked VC domain, SAN connectivity is preserved.

Figure 35: Multi-Enclosure Stacking with Fabric-Attach SAN fabrics requires all VC FC modules to be connected to the SAN (4 stacked c7000 enclosures in one VC domain with 10Gb stack links; identical VC Fibre Channel module placement and cabling in each enclosure)

Figure 36: Unsupported Multi-Enclosure Stacking configuration (enclosures with different VC Fibre Channel module types, placement, and cabling; some enclosures with VC-FC modules and others with VC FlexFabric modules)

Multi-enclosure stacking with Flat SAN technology

When using HP Virtual Connect for 3PAR with Flat SAN technology in a VC multi-enclosure environment:
- All VC FlexFabric modules must be connected to the 3PAR Storage System(s).
- Server profile migration of a SAN-booted server between enclosures is not supported.
- With domains managed by Virtual Connect Enterprise Manager, server profile migration of a SAN-booted server between enclosures within the same Domain Group or between different Domain Groups is not supported.

To perform a server profile migration of a SAN-booted server between enclosures in a VC multi-enclosure environment when utilizing Virtual Connect for 3PAR with Flat SAN technology, implement the following manual steps (a hedged CLI sketch follows Figure 37):
1. Power off the server.
2. Un-assign the server profile.
3. Change the Primary and Secondary Target WWNs in the FC Boot Parameters section of the profile to reflect the WWNs of the 3PAR storage array ports connected to the destination enclosure. For more information about the FC Boot parameters, see Server Profile Configuration.
4. Assign the profile to the destination location.
5. Power on the destination server.

Figure 37: Multi-Enclosure Stacking with Direct-Attach SAN fabrics requires all VC FlexFabric modules to be connected to the 3PAR Storage System (4 stacked c7000 enclosures in one VC domain with 10Gb stack links, Direct-Attach SAN uplink connections to the 3PAR array)
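The following VC CLI sketch mirrors the five manual steps above. The profile name, device bay addresses, and especially the FC boot-parameter keywords are assumptions for illustration; see the server profile appendices and the Virtual Connect CLI user guide for the exact parameters.

    # Assumptions: profile "ESX-Host1" currently assigned in enclosure enc0, bay 1;
    # destination is enclosure enc1, bay 1. Keyword names are illustrative only.
    poweroff server enc0:1
    unassign profile ESX-Host1
    # Point the primary/secondary boot targets at the 3PAR ports cabled to the
    # destination enclosure (parameter names are an assumption; see the CLI guide).
    set fc-connection ESX-Host1 1 BootPriority=Primary BootPort=<3PAR-port-WWN> BootLun=0
    set fc-connection ESX-Host1 2 BootPriority=Secondary BootPort=<3PAR-port-WWN> BootLun=0
    assign profile ESX-Host1 enc1:1
    poweron server enc1:1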

Scenario 1: Simplest scenario with multipathing

Overview

This scenario covers the setup and configuration of two VC Fabric-Attach SAN fabrics, each using a single uplink connected to a redundant fabric.

Figure 38: Logical view, 16:1 server-to-uplink ratio on a fully populated enclosure with 16 servers (Servers 1-16, each with HBA 1 and HBA 2, connected through Fabric-1 and Fabric-2 on VC-FC 8Gb 20-port modules to the external Fabric 1 and Fabric 2)

Figure 39: Physical view (blade server with HBA 1 and HBA 2 connected through redundant VC-FC modules to Fabric-1 and Fabric-2, and then to the storage array)

Benefits

This configuration offers the simplicity of managing only one redundant fabric with a single uplink. Transparent failover is managed by a multipathing I/O driver running on the server operating system. This scenario maximizes the use of the VC fabric uplink ports, reduces the total number of switch ports needed in the datacenter, and saves money (fabric ports can be expensive).

Considerations

In a fully populated c7000 enclosure, the server-to-uplink ratio is 16:1. This configuration can result in poor response time and sometimes requires particular performance monitoring attention. You can use more uplink ports, both for better performance and because doing so provides login balancing and login redistribution. Also, the use of more than one uplink per VC SAN fabric provides uplink failover in case of failure.

A multipathing I/O driver must be running on the server operating system to prevent server I/O disruption when a failure occurs somewhere between VC and the external fabric. The automatic failover allows a smooth transition without much disruption to the traffic; however, the hosts have to perform a re-login before resuming their I/O operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server operating system can provide a completely transparent transition.

The SAN switch ports connecting to the VC-FC modules must be configured to accept NPIV logins. Due to the use of NPIV, special features that are available in standard FC switches, such as ISL Trunking, QoS, or extended distances, are not supported with VC-FC and VC FlexFabric.

Requirements

This configuration requires:
- Two VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV.
- At least two VC-FC modules.
- A minimum of two VC fabric uplink ports connected to the redundant SAN fabric.

For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV configuration", "Appendix D: Cisco MDS SAN switch NPIV configuration", or "Appendix E: Cisco NEXUS SAN switch NPIV configuration", depending on your switch model.

43 Installation and configuration Switch configuration Appendices B, C and D provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco Nexus or Cisco MDS Fibre Channel infrastructure. VC CLI commands In addition to the GUI, many of the configuration settings within VC can be established using a CLI command set. In order to connect to VC using a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following scenario provides the CLI commands needed to configure each setting of the VC. Configuring the VC module Physically connect Port on the first VC-FC module to a switch port in SAN Fabric. Physically connect Port on the second VC-FC module to a switch port in SAN Fabric 2. Defining a new VC Fabric-Attach SAN Fabric using the GUI Configure the VC-FC modules and create a VC SAN Fabric.. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page. 2. From the Define SAN Fabric dialog, provide the VC Fabric Name, in this case Fabric-, and add the Fabric uplink Port from the first VC-FC module Scenario : Simplest scenario with multipathing

44 3. Click Apply.
4. On the SAN Fabrics screen, click Add to create the second fabric.
5. Create a new VC Fabric named Fabric-2.
6. Under Enclosure Uplink Ports, add Port 1 from the second VC-FC module, and then click Apply.
Two VC SAN fabrics have been created, each with one uplink port allocated from one VC module.
44 - Scenario 1: Simplest scenario with multipathing

45 Defining a new VC SAN Fabric using CLI
Configure the VC-FC modules from the CLI.
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1 Bay=5 Ports=1
add fabric Fabric-2 Bay=6 Ports=1
3. When complete, run the show fabric command.
Blade Server configuration
Server profile configuration steps can be found in Appendix A; a short CLI sketch follows at the end of this scenario.
Verification
See Appendix F and Appendix G for verifications and troubleshooting steps.
Summary
In this scenario you have created two FC SAN Fabrics, utilizing a single uplink each; this is the simplest scenario that can be used to maximize the use of the VC-FC uplink ports and reduce the number of datacenter SAN ports. A multipathing driver is required for transparent failover between the two server HBA ports. Additional uplinks could be added to the SAN fabrics to increase performance and/or availability. This is covered in the following scenario.
45 - Scenario 1: Simplest scenario with multipathing
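To complement Appendix A, here is a minimal, hedged sketch of how a server profile's FC HBA connections could be mapped to the two fabrics created above from the VCM CLI. The profile name and device bay are placeholders; check the Virtual Connect CLI guide for the exact options supported by your firmware.

# create a profile, then add one FC connection per fabric (names are placeholders)
add profile MyProfile
add fc-connection MyProfile Fabric=Fabric-1
add fc-connection MyProfile Fabric=Fabric-2
# assign the profile to a server bay and verify
assign profile MyProfile enc0:1
show profile MyProfile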

46 Scenario 2: VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric
Overview
This scenario covers the setup and configuration of two VC Fabric-Attach SAN Fabrics with Dynamic Login Balancing Distribution, each utilizing two to eight uplink ports connected to a redundant Fabric.
Figure 40: 8:1 oversubscription with VC-FC 8Gb 20-Port modules using 4 uplink ports - 8:1 server-to-uplink ratio on a fully populated enclosure with 16 servers
(Diagram: Servers 1 through 16, each with HBA1 and HBA2, connect inside the VC Domain to Fabric-1 and Fabric-2 on the VC-FC 8Gb 20-port modules; each fabric uses two uplinks into SAN Fabric 1 and SAN Fabric 2.)
Note: Static Login Distribution has been removed since VC firmware 3.00 but is the only method available in VC firmware 1.24 and earlier. Dynamic Login Balancing capabilities are included in VC firmware 1.3x and later.
46 - Scenario 2: VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric

47 Figure 41: 4:1 oversubscription with VC-FC 8Gb 20-Port modules using 8 uplink ports - 4:1 server-to-uplink ratio on a fully populated enclosure with 16 servers
(Diagram: Servers 1 through 16 connect through Fabric-1 and Fabric-2 on the VC-FC 8Gb 20-port modules; each fabric uses four uplinks into SAN Fabric 1 and SAN Fabric 2.)
Figure 42: 2:1 oversubscription with VC-FC 8Gb 24-Port modules using 16 uplink ports - 2:1 server-to-uplink ratio on a fully populated enclosure with 16 servers
(Diagram: Servers 1 through 16 connect through Fabric-1 and Fabric-2 on the VC-FC 8Gb 24-port modules; each fabric uses eight uplinks into SAN Fabric 1 and SAN Fabric 2.)
47 - Scenario 2: VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric

48 Figure 43: Physical view
(Diagram: a BL460c blade server with HBA 1 and HBA 2 connects through two HP 4Gb VC-FC modules, each with multiple uplinks into Fabric-1 and Fabric-2 on two HP StorageWorks 4/32B SAN switches, which attach to the storage array controllers.)
Benefits
Using multiple ports in each VC Fabric-Attach SAN fabric allows dynamic distribution of server logins across the ports using a round-robin format. Dynamic Login Distribution performs auto-failover for the server logins if the corresponding uplink port becomes unavailable. Servers that were logged in to the failed port are reconnected to one of the remaining ports in the VC SAN fabric. This configuration offers increased performance and better availability. The server-to-uplink ratio is adjustable, up to 2:1 with the VC-FC 8Gb 24-port module (as few as two servers share one physical Fabric uplink) and up to 4:1 with the VC-FC 20-port and FlexFabric modules.
Considerations
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a failure occurs somewhere between VC and the external fabric. This automatic failover allows a smooth transition without much disruption to the traffic. However, the hosts must perform re-login before resuming their I/O operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server Operating System can provide a completely transparent transition.
The SAN switch ports connecting to the VC-FC modules must be configured to accept NPIV logins. Due to the use of NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended distances are not supported with VC-FC and VC FlexFabric.
Requirements
This configuration requires:
- Two VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV
- At least two VC-FC modules
- A minimum of four VC fabric uplink ports connected to the redundant SAN fabric.
For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV configuration", "Appendix D: Cisco MDS SAN switch NPIV configuration", or "Appendix E: Cisco NEXUS SAN switch NPIV configuration" depending on your switch model.
48 - Scenario 2: VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric

49 Installation and configuration Switch configuration Appendices B, C and D provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco MDS or Cisco Nexus Fibre Channel infrastructure. VC CLI commands In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In order to connect to VC using a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following scenario provides the CLI commands needed to configure each setting of the VC. Configuring the VC module Physically connect the uplink ports on the first VC-FC module to switch ports in SAN Fabric Physically connect the uplink ports on the second VC-FC module to switch ports in SAN Fabric 2 Defining a new VC SAN Fabric using the GUI Configure the VC-FC modules from the HP Virtual Connect Manager home screen.. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page Scenario 2: VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric

50 2. In the Define SAN Fabric screen, provide the VC Fabric Name, in this case Fabric-. 3. Add the uplink ports that will be connected to this fabric, and then click Apply. The following example uses Port, Port 2, Port 3 and Port 4 from the first VC-FC module (Bay 5). 4. On the SAN Fabrics screen, click on Add to create the second fabric: 50 - Scenario 2: VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric

51 5. Create a new VC Fabric named Fabric-2 and add the uplink ports that will be connected to this fabric and then click Apply. The following example uses Port, Port 2, Port 3 and Port 4 from the second VC-FC module (Bay 6). 6. You have created two VC Fabric-Attach SAN fabrics each with four uplink ports allocated from a VC module in Bay 5 and a VC module in Bay Scenario 2: VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric

52 Defining a new VC SAN Fabric using the CLI
Configure the VC-FC modules from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1 Bay=5 Ports=1,2,3,4
add fabric Fabric-2 Bay=6 Ports=1,2,3,4
3. When complete, run the show fabric command.
Blade Server configuration and verification
See Appendix A for server profile configuration steps. See Appendix F and Appendix G for verifications and troubleshooting steps.
Summary
This scenario shows two FC Fabric-Attach SAN fabrics with multiple uplink ports using Dynamic Login Distribution, which allows for login balancing and host connectivity auto-failover. This configuration enables increased performance and improved availability. Host login connections to the VC Fabric uplink ports are handled dynamically, and the load is balanced across all available ports in the group; a short verification sketch follows below. A multipathing driver is required for transparent failover between the two server HBA ports.
52 - Scenario 2: VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric
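The distribution of server logins across the uplinks can be reviewed from the VCM CLI as well as from the GUI. The following is a minimal sketch, assuming the fabric names used in this scenario; the exact columns reported by show fabric depend on the VC firmware release.

# list all VC SAN fabrics with their status and uplink ports
show fabric
# detailed view of a single fabric, including per-uplink status
show fabric Fabric-1
show fabric Fabric-2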

53 Scenario 3: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers
Overview
This scenario covers the setup and configuration of four VC Fabric-Attach SAN fabrics with Dynamic Login Balancing Distribution that are all connected to the same redundant SAN fabric.
Figure 44: Multiple VC Fabric-Attach SAN Fabrics with different priority tiers connected to the same fabrics - 5:1 server-to-uplink ratio for the Tier1 servers and 1:1 server-to-uplink ratio for the Tier2 server on a fully populated enclosure with 16 servers
(Diagram: most servers, each with HBA1 and HBA2, connect to Fabric-1-Tier1 and Fabric-2-Tier1, while the remaining server connects to Fabric-1-Tier2 and Fabric-2-Tier2; all four VC fabrics on the VC-FC 8Gb 20-port modules uplink to the same SAN Fabric 1 and SAN Fabric 2.)
53 - Scenario 3: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers

54 Figure 45: Physical view
(Diagram: BL460c blade servers, each with HBA 1 and HBA 2, connect through two HP 4Gb VC-FC modules to Fabric-1 and Fabric-2 on two HP StorageWorks 4/32B SAN switches attached to the storage array; Blade Servers 1 to 15 and Blade 16 are shown as separate groups.)
Note: Static Login Distribution has been removed since VC firmware 3.00 but is the only method available in VC firmware 1.24 and earlier. Dynamic Login Balancing capabilities are included in VC firmware 1.3x and later.
Benefits
This configuration can guarantee non-blocking throughput for a particular application or set of blades by creating a separate VC SAN Fabric for that important traffic, and ensuring that the total aggregate uplink throughput for that particular fabric is greater than or equal to the throughput for the HBAs used. In other words, this is a way to adjust the server-to-uplink ratio, to control more granularly which server blades use which VC uplink port, and also to enable the distribution of servers according to their I/O workloads.
54 - Scenario 3: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers

55 Considerations A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a failure occurs somewhere between VC and the external fabric. This automatic failover allows smooth transition without much disruption to the traffic. However, the hosts must perform re-login before resuming their I/O operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server Operating System can provide a completely transparent transition. The SAN switch ports connecting to the VC-FC modules must be configured to accept NPIV logins. Due to the use of NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended distances are not supported with VC-FC and VC FlexFabric. Requirements This configuration requires: Four VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV At least two VC-FC modules A minimum of four VC fabric uplink ports connected to the redundant SAN fabric. For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV configuration" or "Appendix D: Cisco MDS SAN switch NPIV configuration" or "Appendix E: Cisco NEXUS SAN switch NPIV configuration" depending on your switch model. Additional information, such as over subscription rates and server I/O statistics, is important to help with server workload distribution across VC-FC ports. Installation and configuration Switch configuration Appendices B, C and D provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco MDS or Cisco Nexus Fibre Channel infrastructure. VC CLI commands In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following scenario provides the CLI commands needed to configure each setting of the VC. Configuring the VC module Physically connect the uplink ports on the first VC-FC module to switch port in SAN Fabric Physically connect the uplink ports on the second VC-FC module to switch port in SAN Fabric 2 Defining a new VC SAN Fabric using the GUI Configure the VC-FC modules from the HP Virtual Connect Manager home screen: 55 - Scenario 3: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers

56 . From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page. 2. From the Define SAN Fabric dialog, provide the VC Fabric Name, in this case Fabric--Tier 3. Add the uplink ports that will be connected to this fabric, and then click Apply. The following example uses Port, Port 2 and Port 3 from the first VC-FC module (Bay 5) Scenario 3: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers

57 4. On the SAN Fabrics screen, click Add to create the second fabric: 5. Create a new VC Fabric named Fabric--Tier2. 6. Under Enclosure Uplink Ports, add Bay 5, Port 4, and then click Apply. You have created two VC Fabric-Attach fabrics, each with uplink ports allocated from one VC module in Bay 5. One of the fabrics is configured with three 4Gb uplinks, and one is configured with one 4Gb uplink Scenario 3: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers

58 7. Create two additional VC SAN Fabrics, Fabric-2-Tier and Fabric-2-Tier2 attached this time to the second VC- FC module: - Fabric-2-Tier with 3 ports: - Fabric-2-Tier2 with one port: 58 - Scenario 3: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers

59 You have created four VC Fabric-Attach fabrics, two for the Tier1 server group with three uplink ports each and two for the guaranteed-throughput Tier2 server with one uplink port each.
Defining a new VC SAN Fabric using the CLI
Configure the VC-FC modules from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1-Tier1 Bay=5 Ports=1,2,3
add fabric Fabric-1-Tier2 Bay=5 Ports=4
add fabric Fabric-2-Tier1 Bay=6 Ports=1,2,3
add fabric Fabric-2-Tier2 Bay=6 Ports=4
3. When complete, run the show fabric command.
59 - Scenario 3: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers

60 Verification of the SAN Fabrics configuration Make sure all the SAN fabrics belonging to the same Bay are connected to the same core FC SAN fabric switch.. Go to the SAN Fabrics screen. 2. The Connected To column displays the upstream SAN fabric switch to which the VC module uplink ports are connected. Verify that the entries from the same Bay number are all the same, indicating a single SAN fabric. For additional verification and troubleshooting steps, see Appendix F. Same upstream fabric 60 - Scenario 3: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers

61 Blade Server configuration
For server profile configuration steps, see Appendix A. After you have configured the VC SAN fabrics, you can select a Server Profile and choose the VC fabric to which you would like your HBA ports to connect.
1. Select the server profile, in this case esx4-1.
2. Under FC HBA Connections, select the FC SAN fabric name to which you would like Port 1 (Bay 5) to connect.
A CLI sketch of the same operation follows below.
Verification
See Appendix F and Appendix G for verifications and troubleshooting steps.
Summary
This scenario shows how you can create multiple VC Fabric-Attach SAN fabrics that are all connected to the same redundant SAN fabric. This configuration enables you to control the throughput for a particular application or set of blades.
61 - Scenario 3: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers
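For completeness, a hedged sketch of the equivalent CLI operation is shown below. The profile name and connection numbers are placeholders, and the exact parameter names should be verified against the Virtual Connect CLI guide for your firmware release.

# point the profile's two FC HBA connections at the Tier2 fabrics (placeholders)
set fc-connection esx4-1 1 Fabric=Fabric-1-Tier2
set fc-connection esx4-1 2 Fabric=Fabric-2-Tier2
show profile esx4-1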

62 Scenario 4: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to several redundant SAN fabrics with different priority tiers
Overview
This scenario covers the setup and configuration of four VC Fabric-Attach SAN fabrics with Dynamic Login Balancing Distribution that are connected to different redundant SAN fabrics.
Figure 46: Multiple VC SAN Fabrics with different priority tiers connected to different SAN Fabrics - 5:1 server-to-uplink ratio for one group of servers and 1:1 server-to-uplink ratio for the other on a fully populated enclosure with 16 servers
(Diagram: Servers 1 through 16, each with HBA1 and HBA2, connect inside the VC Domain to Fabric-A-1, Fabric-A-2, Fabric-B-1 and Fabric-B-2 on the VC-FC 8Gb 20-port modules, which uplink to four external fabrics: Fabric 1A, Fabric 2A, Fabric 1B and Fabric 2B.)
62 - Scenario 4: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to several redundant SAN fabrics with different priority tiers

63 Figure 47: Physical view
(Diagram: blade servers with HBA 1 and HBA 2 connect through two HP 4Gb VC-FC modules to two redundant external fabric pairs: Fabric 1A/Fabric 1B on HP StorageWorks 4/32B SAN switches in front of an HP EVA HSV300 storage array, and Fabric 2A/Fabric 2B on Cisco MDS 9140 switches in front of an HP 3PAR F-Class storage array. Blade Servers 1 to 14 and Blades 15 to 16 are shown as separate groups.)
Note: Static Login Distribution has been removed since VC firmware 3.00 but is the only method available in VC firmware 1.24 and earlier. Dynamic Login Balancing capabilities are included in VC firmware 1.3x and later.
Benefits
This configuration offers the ability to connect different redundant SAN Fabrics to the VC-FC module, which gives you more granular control over which server blades use each VC-FC port, while also enabling the distribution of servers according to their I/O workloads.
Considerations
Each Virtual Connect 4Gb 20-Port Fibre Channel module, 8Gb 20-Port Fibre Channel module and FlexFabric module supports up to 4 SAN fabrics. Each Virtual Connect 8Gb 24-Port Fibre Channel module supports up to 8 SAN fabrics.
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a failure occurs somewhere between VC and the external fabric. This automatic failover allows a smooth transition without much disruption to the traffic. However, the hosts must re-login before resuming their I/O operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server Operating System can provide a completely transparent transition.
The SAN switch ports connecting to the VC-FC modules must be configured to accept NPIV logins. Due to the use of NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended distances are not supported with VC-FC and VC FlexFabric.
63 - Scenario 4: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to several redundant SAN fabrics with different priority tiers

64 Requirements This configuration requires: At least two Fabric-Attach SAN fabrics with one or more switches that support NPIV At least one VC-FC module A minimum of two VC fabric uplinks connected to each of the SAN fabrics. For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV configuration" or "Appendix D: Cisco MDS SAN switch NPIV configuration" or "Appendix E: Cisco NEXUS SAN switch NPIV configuration" depending on your switch model. Additional information, such as over subscription rates and server I/O statistics, is important to help with server workload distribution across VC-FC ports. Installation and configuration Switch configuration Appendices B, C and D provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco MDS or Cisco Nexus Fibre Channel infrastructure. VC CLI commands In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In order to connect to VC using a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following scenario provides the CLI commands needed to configure each setting of the VC. Configuring the VC module Physically connect some uplink ports as follows: On the first VC-FC module to switch port in SAN Fabric A On the first VC-FC module to switch port in SAN Fabric 2A On the second VC-FC module to switch port in SAN Fabric B On the second VC-FC module to switch port in SAN Fabric 2B Defining a new VC SAN Fabric via the GUI Configure the VC-FC modules from the HP Virtual Connect Manager home screen.. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page. 2. In the Define SAN Fabric screen, provide the VC Fabric Name, in this case Fabric-A Scenario 4: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to several redundant SAN fabric with different priority tiers

65 3. Add the uplink ports that will be connected to this fabric, and then click Apply. The following example uses Port, Port 2 and Port 3 from the first VC-FC module (Bay 5). 4. On the SAN Fabrics screen, click Add to create the second fabric. 5. Create a new VC Fabric named Fabric-A-2. Under Enclosure Uplink Ports, add Bay 5, Port 4, and then click Apply Scenario 4: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to several redundant SAN fabric with different priority tiers

66 You have created two VC Fabric-Attach fabrics, each with uplink ports allocated from one VC module in Bay 5. One of the fabrics is configured with three 4Gb uplinks, and one is configured with one 4Gb uplink.
6. Follow the same steps to create two additional VC SAN Fabrics, Fabric-B-1 and Fabric-B-2, attached this time to the second VC-FC module (Bay 6):
- Fabric-B-1 with 3 ports:
- Fabric-B-2 with one port:
You have now created four VC Fabric-Attach fabrics, two (Fabric-A-1 and Fabric-B-1) with three uplink ports each and two (Fabric-A-2 and Fabric-B-2) with a single uplink port each, allocated from the VC modules in Bay 5 and Bay 6.
66 - Scenario 4: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to several redundant SAN fabrics with different priority tiers

67 Defining a new VC SAN Fabric using the CLI
Configure the VC-FC modules from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-A-1 Bay=5 Ports=1,2,3
add fabric Fabric-A-2 Bay=5 Ports=4
add fabric Fabric-B-1 Bay=6 Ports=1,2,3
add fabric Fabric-B-2 Bay=6 Ports=4
3. When complete, run the show fabric command.
67 - Scenario 4: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to several redundant SAN fabrics with different priority tiers

68 Verification of the SAN Fabrics configuration Make sure all the SAN fabrics belonging to the same Bay are connected to a different core FC SAN fabric switch:. Go to the SAN Fabrics screen. 2. The Connected To column displays the upstream SAN fabric switch to which the VC module uplink ports are connected. Verify that the VC module uplink ports are physically connected to four independent FC SAN switches: Different upstream fabrics First Fabric (Brocade) Second Fabric (Cisco) The first redundant Fabric is connected to two Brocade Silkworm 300 SAN switches: Fabric_A- uplink ports are connected to FC switch 0:00:00:05:E:5B:2C:4 Fabric_B- uplink ports are connected to FC switch 0:00:00:05:E:5B:DC:82 The second redundant Fabric is connected to two Cisco Nexus 500 switches: Fabric_A-2 uplink ports are connected to FC switch 20:0:00:0D:EC:CD:F:C Fabric_B-2 uplink ports are connected to FC switch 20:0:00:0D:EC:CF:B4:C For additional verification and troubleshooting steps, see Appendix G Scenario 4: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to several redundant SAN fabric with different priority tiers

69 Blade Server configuration For server profile configuration steps, see Appendix A. After you have configured the VC SAN fabrics, you can select a Server Profile and choose the VC fabric to which you would like your HBA ports to connect.. Select the server profile, in this case esx4-2. Under FC HBA Connections, select the FC SAN fabric name to which you would like Port Bay 5 to connect. Verification See Appendix F and Appendix G for verifications and troubleshooting steps. Summary This scenario shows how you can create multiple VC Fabric-Attach SAN fabrics that are connected to independent SAN fabric switches; for example, a first VC SAN Fabric can be connected to a Brocade SAN environment while a second one is connected to a Cisco SAN Fabric. This configuration enables you to granularly control the server connections to independent SAN fabrics Scenario 4: Multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to several redundant SAN fabric with different priority tiers

70 Scenario 5: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port module
Overview
Virtual Connect FlexFabric is an extension of Virtual Connect Flex-10 which leverages the FCoE (Fibre Channel over Ethernet) protocols. By leveraging FCoE for connectivity to existing Fibre Channel SAN networks, you can reduce the number of HBAs required within the server blade and the number of Fibre Channel modules. This further reduces cost, complexity, power and administrative overhead.
Figure 48: Multiple VC Fabric-Attach SAN Fabrics with different priority tiers connected to the same fabrics - 5:1 server-to-uplink ratio for the Tier1 servers and 1:1 server-to-uplink ratio for the Tier2 server on a fully populated enclosure with 16 servers
(Diagram: Servers 1 through 16, each with a CNA exposing FlexHBAs 1 and 2, connect inside the VC Domain to Fabric-1-Tier1, Fabric-1-Tier2, Fabric-2-Tier1 and Fabric-2-Tier2 on the VC FlexFabric 10Gb/24-Port modules, which uplink to SAN Fabric 1 and SAN Fabric 2; the remaining module ports are Ethernet-only.)
This scenario is similar to Scenario 3, which has multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers, but instead of using the legacy Virtual Connect Fibre Channel modules with HBAs inside the servers, this scenario uses CNAs (Converged Network Adapters) with adjustable FlexHBAs (Flexible Host Bus Adapters) and the VC FlexFabric modules. You can, however, implement scenarios 1 through 4 with the VC FlexFabric modules as well.
Benefits
Using FlexFabric technology with Fibre Channel over Ethernet, these modules converge traffic over high-speed 10Gb connections to servers with HP FlexFabric Adapters. Each module provides four adjustable connections (three data and one storage, or all data) to each 10Gb server port. You avoid the confusion of traditional and other converged network solutions by eliminating the need for multiple Ethernet and Fibre Channel switches, extension modules, cables and software licenses. Also, Virtual Connect wire-once connection management is built in, enabling server adds, moves, and replacement in minutes instead of days.
With the HP Virtual Connect FlexFabric 10Gb/24-port Module, you can fine-tune the bandwidth of the storage connection between the FlexFabric adapter and the VC FlexFabric module just as you can for Ethernet Flex-10 NIC connections. You can adjust the FlexHBA port in 100Mb increments up to a full 10Gb connection (a hedged CLI sketch follows at the end of this overview). On external uplinks, Fibre Channel ports auto-negotiate 2, 4 or 8Gb speeds based on the upstream switch port setting.
This configuration offers the simplest, most converged and flexible way to connect to any network, eliminating 95% of network sprawl and improving performance without disrupting operations. The server-to-uplink ratio is adjustable, up to 4:1 with the FlexFabric modules.
70 - Scenario 5: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port module
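As a rough illustration of the bandwidth tuning mentioned above, the following hedged sketch adds an FCoE connection to a profile with a custom speed. The profile name is a placeholder, and the SpeedType/CustomSpeed parameter names and units are an assumption based on the Virtual Connect CLI guide; confirm the exact syntax for your firmware release.

# add an FCoE FlexHBA connection pinned to roughly a 4Gb allocation (assumed parameter names; value in Mb)
add fcoe-connection MyProfile Fabric=Fabric-1-Tier1 SpeedType=Custom CustomSpeed=4000
show profile MyProfile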

71 Considerations
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a failure occurs somewhere between VC and the external fabric. This automatic failover allows a smooth transition without much disruption to the traffic. However, the hosts must perform re-login before resuming their I/O operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server Operating System can provide a completely transparent transition.
The SAN switch ports connecting to the VC FlexFabric modules must be configured to accept NPIV logins. Due to the use of NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended distances are not supported with VC-FC and VC FlexFabric.
Figure 49: Physical view
(Diagram: blade servers with CNA 1 and CNA 2 connect over DCB/FCoE to two HP VC FlexFabric 10Gb/24-Port modules, which carry native Fibre Channel uplinks into Fabric-1 and Fabric-2 on two HP StorageWorks 4/32B SAN switches attached to the storage array.)
71 - Scenario 5: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port module

72 Virtual Connect FlexFabric Uplink Port Mappings
It is important to note how the external uplink ports on the FlexFabric module are configured. The figure below outlines the type and speed each port can be configured as:
- Ports X1-X4: Can be configured as 10Gb Ethernet or Fibre Channel. FC speeds supported = 2Gb, 4Gb or 8Gb using 4Gb or 8Gb FC SFP modules; refer to the FlexFabric QuickSpecs for a list of supported SFP modules. (A CLI sketch of defining a fabric on these ports follows after this section.)
- Ports X5-X8: Can be configured as 1Gb or 10Gb Ethernet.
- Ports X7-X8: Are also shared as internal cross-connect.
Note: Even though the Virtual Connect FlexFabric module supports stacking, stacking only applies to Ethernet traffic. FC uplinks cannot be consolidated, as it is not possible to stack the FC ports.
Figure 50: FlexFabric Module port configuration, speeds and types
(Diagram: four flexible uplink ports (X1-X4), individually configurable as FC/FCoE/Ethernet - Ethernet 10Gb only with SR/LR/LRM SFP+ transceivers or copper DAC, Fibre Channel 2/4/8Gb with short/long wave FC transceivers; the FC uplinks are N_Ports, just like legacy VC-FC module uplinks, and support Flat SAN with 3PAR. Four Ethernet-only uplink ports (X5-X8), 1/10GbE, SFP+ SR/LR/ELR/LRM or copper DAC; stacking supported for Ethernet only. Sixteen 10Gb downlinks to the FlexFabric CNAs, individually configurable as Ethernet, Flex-10/FCoE or Flex-10/iSCSI.)
Note: Since VC 4.01, the Virtual Connect FlexFabric SAN uplinks can be connected to an upstream DCB port with Dual-hop FCoE support. For more information about FCoE support, see the Dual-Hop FCoE with HP Virtual Connect modules Cookbook in the Related Documentation section.
72 - Scenario 5: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port module
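To make the mapping concrete, here is a minimal, hedged sketch of defining a Fabric-Attach fabric on FlexFabric uplinks from the VCM CLI. In the CLI the FC-capable uplinks X1-X4 are referenced simply as ports 1-4; the Speed parameter is an assumption (Auto lets the port negotiate 2/4/8Gb with the upstream switch), so verify it against the VC CLI guide for your release.

# Fabric-Attach fabric on the FlexFabric module in bay 1, uplinks X1 and X2 (Speed= is assumed)
add fabric Fabric-1 Bay=1 Ports=1,2 Speed=Auto
show fabric Fabric-1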

73 Requirements With Virtual Connect FlexFabric this configuration requires: Four VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV At least two VC FlexFabric modules A minimum of four VC fabric uplink ports connected to the redundant SAN fabric. For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV configuration" or "Appendix D: Cisco MDS SAN switch NPIV configuration" or "Appendix E: Cisco NEXUS SAN switch NPIV configuration" depending on your switch model. Additional information, such as over subscription rates and server I/O statistics, is important to help with server workload distribution across VC FlexFabric FC ports. Installation and configuration Switch configuration Appendices B, C and D provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco MDS or Cisco Nexus Fibre Channel infrastructure. VC CLI commands In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following scenario provides the CLI commands needed to configure each setting of the VC. Configuring the VC FlexFabric module With Virtual Connect FlexFabric modules, you can use only uplink ports X, X2, X3 and X4 for Fibre Channel connectivity. Physically connect some uplink ports (X, X2, X3 or X4) as follows: On the first VC FlexFabric module to switch ports in SAN Fabric A On the first FlexFabric module to switch ports in SAN Fabric 2A On the second FlexFabric module to switch ports in SAN Fabric B On the second FlexFabric module to switch ports in SAN Fabric 2B Defining a new VC SAN Fabric via GUI Configure the VC FlexFabric module from the HP Virtual Connect Manager home screen:. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page Scenario 5: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port module

74 2. In the Define SAN Fabric window, provide the VC Fabric Name, in this case Fabric-1-Tier1. Leave the default Fabric-Attach fabric type.
3. Add the uplink ports that will be connected to this fabric, and then click Apply. The following example uses Port X1, Port X2 and Port X3 from the first Virtual Connect FlexFabric module (Bay 1).
Show Advanced Settings is only available with VC FlexFabric modules and provides the option to enable Automatic Login Re-Distribution. The Automatic Login Re-Distribution method allows FlexFabric modules to fully control the login allocation between the servers and Fibre Channel uplink ports. A FlexFabric module automatically re-balances the server logins once every time interval defined in the Fibre Channel WWN Settings Miscellaneous tab.
4. Select either Manual or Automatic and click Apply. If you select Manual Login Re-Distribution, the login allocation between the servers and Fabric uplink ports never changes, even after the recovery of a port failure. This remains true until an administrator decides to initiate the server login re-balancing. To initiate server login re-balancing, go to the SAN Fabrics screen, select the Edit menu corresponding to the SAN Fabric, and click Redistribute.
74 - Scenario 5: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port module

75 On the SAN Fabrics screen, click Add to create the second fabric.
5. Create a new VC Fabric-Attach fabric named Fabric-1-Tier2. Under Enclosure Uplink Ports, add Bay 1, Port X4, and then click Apply.
You have created two VC fabrics, each with uplink ports allocated from the VC FlexFabric module in Bay 1. One of the fabrics is configured with three 4Gb uplinks, and one is configured with one 4Gb uplink.
75 - Scenario 5: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port module

76 6. Create two additional VC Fabric-Attach SAN Fabrics, Fabric-2-Tier and Fabric-2-Tier2 attached this time to the second VC FlexFabric module: - Fabric-2-Tier with 3 ports: - Fabric-2-Tier2 with one port: You have created four VC fabrics, two for the server group Tier with three uplink ports and two for the guaranteed throughput server Tier2 with one uplink port Scenario 5: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port module

77 Defining a new VC SAN Fabric using the CLI
Configure the VC FlexFabric modules from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports (the FlexFabric modules in this scenario are in Bays 1 and 2):
add fabric Fabric-1-Tier1 Bay=1 Ports=1,2,3
add fabric Fabric-1-Tier2 Bay=1 Ports=4
add fabric Fabric-2-Tier1 Bay=2 Ports=1,2,3
add fabric Fabric-2-Tier2 Bay=2 Ports=4
3. When complete, run the show fabric command.
Verification of the SAN Fabrics configuration
Make sure all of the SAN fabrics belonging to the same Bay are connected to the same core FC SAN fabric switch:
1. Go to the SAN Fabrics screen.
77 - Scenario 5: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port module

78 2. The Connected To column displays the upstream SAN fabric switch to which the VC module uplink ports are connected. Verify that the entries from the same Bay number are all the same, indicating a single SAN fabric. For additional verification and troubleshooting steps, see Appendix F. Same upstream fabric Blade Server configuration See Appendix B for server profile configuration steps with VC FlexFabric. Summary This VC FlexFabric scenario shows how you can create multiple VC SAN fabrics that are all connected to the same redundant SAN fabric. This configuration enables you to control the throughput for a particular application or set of blades Scenario 5: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port module
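For reference, a hedged CLI sketch of attaching a FlexFabric server profile to the fabrics defined in this scenario is shown below (the full procedure is in Appendix B). The profile name and device bay are placeholders, and the exact options of add fcoe-connection should be confirmed against the VC CLI guide for your firmware.

# profile with one FCoE FlexHBA per fabric; VC allocates them to the module bays in order
add profile esx-host1
add fcoe-connection esx-host1 Fabric=Fabric-1-Tier1
add fcoe-connection esx-host1 Fabric=Fabric-2-Tier1
assign profile esx-host1 enc0:3
show profile esx-host1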

79 Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems
Overview
HP Virtual Connect provides the industry's first direct-attach connection to Fibre Channel storage that does not require dedicated Fibre Channel switches. This technology, called Flat SAN, significantly reduces complexity and cost while reducing latency between servers and storage by eliminating the need for multi-tier storage area networks (SANs). Designed for virtual and cloud workloads, this solution reduces storage networking costs by 50% and enables 2.5X faster provisioning compared to competitive offerings.
Figure 51: HP Virtual Connect Flat SAN technology with HP 3PAR Storage Systems - 16:1 server-to-uplink ratio on a fully populated enclosure with 16 servers
(Diagram: Servers 1 through 16, each with a CNA exposing FlexHBAs 1 and 2, connect inside the VC Domain to Direct-Attach Fabric-1 and Direct-Attach Fabric-2 on the VC FlexFabric modules, which connect directly to the 3PAR Storage System; the remaining module ports are Ethernet-only.)
79 - Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems

80 Benefits Storage solutions usually include components such as server HBAs (Host Bus Adapters), SAN switches/directors, optical transceivers/cables, and storage systems. You may have concerns about management and efficiency, because of the sheer number of components. Moreover, different components require different tools such as SAN fabric management, storage management (for each type of storage), and HBA management. HP Virtual Connect Flat SAN technology for HP 3PAR Storage Systems helps you to: Reduce costs Do away with the need for expensive SAN fabrics, HBAs, and cables. Save on operating costs and cut down on capital expenditure. Scale with the pay-as-you-grow model, which lets you pay for only what you need now. Overcome complexity Connect Virtual Connect FlexFabric Fibre Channel connects directly to HP 3PAR FC storage to simplify your server connections. Improve efficiency with automated fabric management with simplified management tools. Configure your Virtual Connect as direct attach or fabric attach, depending on your solution design. Simplify management Manage through a single pane of glass with Virtual Connect Manager Web-based and Command Line Interfaces. Use Virtual Connect technology to further improve management efficiency. Reduce disparity with separate fabric and device management. Considerations In a fully populated c7000 enclosure, the server-to-uplink ratio is 6:. This configuration can result in poor response time and might also require particular performance monitoring attention. You can use additional uplink ports for better performance. A properly configured multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a failure occurs somewhere between VC and the external storage system. Be sure to configure the SAN 3PAR host ports that connect to the VC FlexFabric modules to accept the connection Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems

81 Physical view of a Direct-Attach configuration
Figure 52: Virtual Connect Flat SAN with VC FlexFabric modules for HP 3PAR Storage Systems, redundant paths - server-to-Direct-Attach-uplink ratio 16:1
(Diagram: an HP 3PAR StoreServ 10400 with Controller Node 0 and Controller Node 1 connected by Direct-Attach SAN uplinks to the two VC FlexFabric modules in an HP BladeSystem c7000; Ethernet uplinks run from SUS-1 and SUS-2 to LAN Switch A and LAN Switch B.)
Requirements
This configuration with Virtual Connect FlexFabric requires:
- Two VC Direct-Attach SAN fabrics.
- At least two VC FlexFabric modules.
- A minimum of two VC fabric uplink ports connected to the 3PAR Controller nodes.
For more information about implementing and configuring a 3PAR storage system, contact your HP representative or see the HP 3PAR Storage documentation.
81 - Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems

82 Installation and configuration VC CLI commands In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following scenario provides the CLI commands needed to configure each setting of the VC. Configuring the VC FlexFabric module With Virtual Connect FlexFabric modules, you can only use uplink ports X, X2, X3 and X4 for Fibre Channel connectivity. Make the following physical connections: Uplink ports Xon the first VC FlexFabric module to the first HP 3PAR Controller node. Uplink ports Xon the second VC FlexFabric module to the second HP 3PAR Controller node. See Details about the 3PAR controller connectivity to properly connect the 3PAR V-Class controller node. Defining a new VC SAN Fabric via GUI Configure the VC FlexFabric module from the HP Virtual Connect Manager home screen:. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page. 2. In the Define SAN Fabric window, provide the VC Fabric Name, in this case Fabric--3PAR. 3. Add Port X from the first Virtual Connect FlexFabric module (Bay ) Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems

83 4. Change the default Fabric-Attach fabric type to DirectAttach and then click Apply. 5. On the SAN Fabrics screen, click Add to create the second fabric. 6. Create a new VC Direct-Attach fabric named Fabric-2-3PAR. 7. Add Port X from the second Virtual Connect FlexFabric module (Bay 2). 8. For the fabric type, select DirectAttach and then click Apply. You have created two VC Direct-Attach fabrics, each with one uplink port allocated from one VC module. The connection status is red for both fabrics because you still need to configure the 3PAR controller ports Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems

84 Defining a new VC SAN Fabric using the CLI
Configure the VC Direct-Attach SAN Fabrics from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1-3PAR Bay=1 Ports=1 Type=DirectAttach
add fabric Fabric-2-3PAR Bay=2 Ports=1 Type=DirectAttach
3. When complete, run the show fabric command.
84 - Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems

85 Configuration of the 3PAR controller ports Configure the 3PAR Controller ports to accept the Direct-Attach connection to the FlexFabric modules.. Start the 3PAR InForm Management Console. 2. Go to the Ports folder and select the first port used by the VC modules. 3. Right-click the port and click Configure 4. Change the Connection Mode from Disk to Host Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems

86 5. Change the Connection Type from Loop to Point and click Ok. 6. Modifying the FC Port configuration can disrupt partner hosts using the same PCI slot. If all of your hosts use a redundant connection to the 3PAR array, you can click Yes to the warning message. If they don t, do not continue until every host is also connected to another 3PAR controller node. 7. Repeat the same steps for the second 3PAR port connected to the second FlexFabric module. Verification of the 3PAR connection After you have completed the 3PAR port configuration, the External Connections tab of the SAN Fabrics window should indicate a green status and a green port status for each of the Direct-Attach Fabrics, and show the link speed and the WWN of the controller node port. If the port is unlinked and no connectivity exists, check the Port Status for the reason. For more information about possible causes, see Appendix F Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems
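If you prefer the 3PAR InForm CLI to the Management Console, the host-port change described above can typically be made with the controlport command. This is a hedged sketch; the node:slot:port position 0:1:2 is a placeholder and, as with the GUI procedure, taking the port offline is disruptive, so confirm the syntax and impact in the 3PAR CLI reference before running it.

# take the target host-facing port offline, set point-to-point host mode, then reset it
# (0:1:2 is a placeholder node:slot:port)
controlport offline 0:1:2
controlport config host -ct point -f 0:1:2
controlport rst 0:1:2
showport 0:1:2        # verify mode and connection type after the reset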

87 Blade Server configuration Configure a server profile with a Direct-Attached 3PAR array.. From the server profile interface, select the 3PAR Direct-Attach Fabrics for the two FC/FCOE HBA connections. 2. For a Boot from SAN configuration click Fibre Channel Boot Parameters Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems

88 3. Select Primary and Secondary in the SAN Boot column 4. Enter the WWN of your 3PAR Controller node ports and LUN numbers: Virtual Connect Manager 3PAR InForm Management Console 3PAR Controller port 3PAR Controller port Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems
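The same boot parameters can also be scripted from the VCM CLI. The sketch below is hedged: the profile name, WWNs and LUN are placeholders, and whether the connection is managed as fc-connection or fcoe-connection (and the exact BootPriority/BootPort/BootLun parameter names) depends on the module type and firmware release, so check the VC CLI guide before relying on it.

# set primary/secondary SAN boot targets on the profile's two storage connections
# (legacy VC-FC form shown; on FlexFabric the analogous set fcoe-connection command applies)
set fc-connection MyProfile 1 BootPriority=Primary BootPort=20:01:00:02:AC:00:0C:3A BootLun=0
set fc-connection MyProfile 2 BootPriority=Secondary BootPort=21:02:00:02:AC:00:0C:3A BootLun=0
show profile MyProfile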

89 OS Configuration The Operating System requires installation of MPIO for high-availability with load balancing of I/O. In this scenario, where each Direct-Attach Fabrics have one VC uplink port, the Operating System should discover 2 different paths to the 3PAR volume. For more information about 3PAR implementation or configuring an HP 3PAR Storage System with Windows, Linux, or VMware, see Summary This VC Direct-Attach FlexFabric scenario shows how you can create easily multiple VC SAN fabrics that are connected directly to the same 3PAR Storage System. This configuration enables you to reduce the complexity of your enterprise storage solution Scenario 6: Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems
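Complementing the OS Configuration note above, a quick way to confirm that the expected two paths to the 3PAR volume are present is shown below. These are generic host-side commands, not VC-specific, and the exact output format depends on the operating system and multipath driver in use.

# Linux with device-mapper multipath: each 3PAR LUN should list two active paths
multipath -ll

# VMware ESXi: list the devices and paths claimed by the native multipathing plugin
esxcli storage nmp device list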

90 Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems
Overview
You can mix the HP Virtual Connect Direct-Attach Flat SAN connection to Fibre Channel storage with a traditional VC Fabric-Attach SAN Fabric connected to Fibre Channel switches. This capability of the Virtual Connect FlexFabric modules can be useful for backup and data migration, because attaching additional storage systems or backup solutions is not supported today with the Direct-Attach mode. This scenario is particularly useful for migrating existing legacy SAN storage systems to 3PAR using the SAN Fabric connection.
Figure 53: Mixed Fabric-Attach and Direct-Attach FC configuration with VC FlexFabric modules - 7:1 and 1:1 server-to-uplink ratios on a fully populated enclosure with 16 servers
(Diagram: servers with CNAs connect inside the VC Domain to Fabric-Attach Fabric-1/Fabric-2 and Direct-Attach Fabric-1/Fabric-2 on the VC FlexFabric modules. The Fabric-Attach uplinks go to external Fabric-1 and Fabric-2, which serve MSA, EVA and XP arrays, StoreOnce Disk Backup and a StoreEver Tape Library, with optional FC links to the 3PAR; the Direct-Attach uplinks go straight to the 3PAR Storage System.)
90 - Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems

91 Considerations
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a failure occurs somewhere between VC and the external fabrics. This automatic failover allows a smooth transition without much disruption to the traffic. However, the hosts must re-login before resuming their I/O operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server Operating System can provide a completely transparent transition.
The SAN switch ports connecting to the VC Fabric-Attach SAN uplinks must be configured to accept NPIV logins. Due to the use of NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended distances are not supported with VC FlexFabric.
Be sure to properly configure the 3PAR host ports connecting to the VC FlexFabric modules to accept the connection. You can also connect the 3PAR to the SAN switches to offer 3PAR access to all servers connected to the SAN.
Physical view of a mixed Flat SAN and Fabric-Attach configuration
Figure 54: Mixed Fabric-Attach and Direct-Attach FC configuration with VC FlexFabric modules, redundant paths - server-to-Fabric-Attach-uplink ratio 8:1, server-to-Direct-Attach-uplink ratio 8:1
(Diagram: an HP EVA 8400 and an HP XP array attach to SAN Switch A and SAN Switch B (Fabric-1 and Fabric-2), which connect by Fabric-Attach SAN uplinks to the two VC FlexFabric modules in the HP BladeSystem c7000; an HP 3PAR StoreServ 10400 with Controller Node 0 and Controller Node 1 connects by Direct-Attach SAN uplinks to the same modules; Ethernet uplinks run from SUS-1 and SUS-2 to LAN Switch A and LAN Switch B.)
Note: The 8:1 oversubscription with 2 FC cables per FlexFabric module is the most common use case.
Note: In order to improve redundancy, it is recommended to connect the FC cables in a crisscross manner (i.e. each FlexFabric module is connected to two different controller nodes).
91 - Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems

92 Requirements This configuration with Virtual Connect FlexFabric requires: Two VC Direct-Attach SAN fabrics Two VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV At least two VC FlexFabric modules A minimum of two VC fabric uplink ports connected to the 3PAR Controller nodes A minimum of two VC fabric uplink ports connected to the redundant SAN fabric For more information about implementing and configuring a 3PAR storage system, contact your HP representative or see Installation and configuration VC CLI commands In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following scenario provides the CLI commands needed to configure each setting of the VC. Configuring the VC FlexFabric module With Virtual Connect FlexFabric modules, you can only use uplink ports X, X2, X3 and X4 for Fibre Channel connectivity. We recommend the following physical connections: Uplink ports X and X2 on FlexFabric module in Bay to a switch port in SAN Fabric. Uplink ports X and X2 on FlexFabric module in Bay 2 to a switch port in SAN Fabric 2. Uplink ports X3 on FlexFabric module in Bay to the first HP 3PAR Controller node. Uplink ports X4 on FlexFabric module in Bay to the second HP 3PAR Controller node. Uplink ports X3 on FlexFabric module in Bay 2 to the first HP 3PAR Controller node. Uplink ports X4 on FlexFabric module in Bay 2 to the second HP 3PAR Controller node. See Details about the 3PAR controller connectivity to properly connect the 3PAR controller node. Defining a new VC SAN Fabric using the GUI Configure the VC FlexFabric module from the HP Virtual Connect Manager home screen: 92 - Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems

93
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. In the Define SAN Fabric window, provide the VC Fabric Name, in this case Fabric-1-3PAR.
3. Add Port X3 and Port X4 from the first Virtual Connect FlexFabric module (Bay 1).
4. Change the default Fabric-Attach fabric type to DirectAttach.
5. You can set a Preferred and Maximum FCoE connection speed that can be applied to server profiles when an FCoE connection is used.
6. Click Apply.
7. On the SAN Fabrics screen, click Add to create the second fabric.
93 - Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems

94
8. Create a new VC Direct-Attach fabric named Fabric-2-3PAR and then add Port X3 and Port X4 from the second Virtual Connect FlexFabric module (Bay 2).
9. For the fabric type, select DirectAttach and, if required, select the proper Preferred and Maximum FCoE connection speed, then click Apply.
You have created two VC Direct-Attach fabrics, each with two uplink ports allocated from one VC module but connected to two different 3PAR controller nodes for better redundancy.
Now you can create the two Fabric-Attach SAN Fabrics:
1. On the SAN Fabrics screen, click Add to create the third fabric.
2. Create a new VC Fabric-Attach fabric named Fabric-1 and then add Port X1 and X2 from the first Virtual Connect FlexFabric module (Bay 1).
3. For the fabric type, keep the default FabricAttach.
94 - Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems

95 4. If necessary, select the Show Advanced Settings to change the default Login Re-Distribution or the Preferred/Maximum FCoE connection speed, then click Apply. 5. Create the last VC Fabric-Attach fabric named Fabric-2 and then add Port X and X2 from the second Virtual Connect FlexFabric module (Bay2). 6. For the fabric type, keep the default FabricAttach, change the advanced settings options then click Apply Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems

96
You have created four VC fabrics, two for the Direct-Attach 3PAR and two for the SAN Fabric connectivity. The connection status is red for both Flat SAN fabrics because we still need to configure the 3PAR controller ports.
Defining a new VC SAN Fabric using the CLI
Configure the VC Direct-Attach SAN Fabrics from the CLI:
1. Log in to the Virtual Connect Manager CLI.
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1-3PAR Bay=1 Ports=3,4 Type=DirectAttach
add fabric Fabric-2-3PAR Bay=2 Ports=3,4 Type=DirectAttach
add fabric Fabric-1 Bay=1 Ports=1,2
add fabric Fabric-2 Bay=2 Ports=1,2
96 - Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems

97 3. When complete, run the show fabric command Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems

98 Configuration of the 3PAR controller ports Configure the 3PAR Controller ports to accept the Direct-Attach connection to the FlexFabric modules.. Start the HP 3PAR Management Console. 2. Go to the Ports folder and select the first port used by the VC modules. 3. Right-click this port and select Configure 4. Change the Connection Mode from Disk to Host Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems

99
5. Change the Connection Type from Loop to Point and click Ok.
6. Modifying the FC Port configuration can disrupt partner hosts using the same PCI slot. If all of your hosts use a redundant connection to the 3PAR array, you can click Yes to the warning message. If they don't, do not continue until every host is also connected to another 3PAR controller node.
7. Repeat the same steps for the three other 3PAR ports connected to the FlexFabric modules.
99 - Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems
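The same port personality change can also be made from the HP 3PAR CLI instead of the Management Console. The following is a minimal sketch, assuming an SSH session to the array and an example port position of 1:2:1 (node:slot:port); the position is a placeholder, so substitute the ports that are actually cabled to the FlexFabric modules:
controlport offline 1:2:1                      (take the port offline before changing its settings)
controlport config host -ct point -f 1:2:1     (host connection mode with a point-to-point connection type)
controlport rst 1:2:1                          (reset the port so the new settings take effect)
showport 1:2:1                                 (verify the new mode and connection type)
As with the GUI procedure, repeat this one port at a time for each 3PAR port connected to a FlexFabric module, so that hosts always keep at least one active path.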

100 Verification of the 3PAR connection After you have completed the 3PAR port configuration, the External Connections tab of the SAN Fabrics window should indicate a green status and a green port status for each of the Direct-Attach Fabrics, and show the link speed and the WWN of the controller node port. If the port is unlinked and no connectivity exists, the cause is displayed in the Port Status column. For more information about possible causes, see Appendix F. Server Profile configuration Configure a server profile with a Direct-Attached 3PAR array.. From the server profile interface, select the 3PAR Direct-Attach Fabrics for the two FC/FCOE HBA connections. 2. For a Boot from SAN server configuration click Fibre Channel Boot Parameters Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems

101 3. Select Primary and Secondary in the SAN Boot column. 4. For the Target Port Name, enter the WWN of your 3PAR Controller node ports and LUN numbers. For each Direct-Attach Fabric, it is recommended to choose 3PAR port WWNs located on different controller nodes as VC supports only two SAN boot targets per VC profile: Controller Node 0 Controller Node 0 - Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 0Gb/24-Port modules and HP 3PAR Storage Systems

102
OS Configuration
The Operating System requires installation of MPIO for high availability with load balancing of I/O. In this scenario, where each Direct-Attach Fabric has two VC uplink ports, the Operating System should discover 4 different paths to the 3PAR volume (see the verification sketch at the end of this scenario). For more information about 3PAR implementation or configuring an HP 3PAR Storage System with Windows, Linux, or VMware, see
Summary
This VC Direct-Attach and Fabric-Attach FlexFabric scenario shows how you can create multiple VC SAN fabrics that are connected directly to a 3PAR Storage System and to a SAN Fabric with other Storage arrays. This configuration enables you to support the heterogeneous environments that are often found in a datacenter, and is also useful for migrating an existing, costly SAN to a less complex and lower cost storage solution.
102 - Scenario 7: Mixed Fabric-Attach and Flat SAN connectivity with HP Virtual Connect FlexFabric 10Gb/24-Port modules and HP 3PAR Storage Systems
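To complement the OS Configuration note above, here is a minimal verification sketch for a Linux host using device-mapper-multipath; it assumes the 3PAR volume is already presented and the multipathd service is running (on VMware or Windows, use the platform's native MPIO tools instead):
multipath -ll          (the 3PAR LUN should appear as a single multipath device with four active paths, matching the four Direct-Attach uplink ports described above)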

103 Scenario 8: Adding VC fabric uplink ports with Dynamic Login Balancing to an existing VC Fabric-Attach SAN fabric Overview This scenario provides instructions for adding VC Fabric uplink ports to an existing VC Fabric-Attach SAN fabric using VC-FC modules, and to manually redistribute the server blade HBA logins. Benefits With this configuration, you can add additional VC Fabric uplink ports to an existing VC SAN fabric and redistribute the server blade HBA logins. You can also add ports to decrease the number of server blades accessing the VC fabric uplinks and provide increased bandwidth. Initial configuration The following figure (Figure 55) shows the use of two uplink ports per VC-FC modules to connect to a redundant SAN fabric. Figure 55: Initial configuration with 2 uplink ports Server Server 2 Server 3 HBA HBA2 HBA HBA2 HBA HBA2 VC Domain Fabric- Fabric-2 VC-FC 8Gb 20-port Module Fabric Fabric Scenario 8: Adding VC fabric uplink ports with Dynamic Login Balancing to an existing VC Fabric-Attach SAN fabric

104 . In HP Virtual Connect Manager, click Interconnect Bays on the left side of the screen. 2. Select the first VC-FC module Scenario 8: Adding VC fabric uplink ports with Dynamic Login Balancing to an existing VC Fabric-Attach SAN fabric

105 As shown in the following image, the screen displays the VC-FC Uplink port information (port speed, connection status, the upstream switch WWN ports) and provides the server port details. The Uplink Port column identifies which VC-FC uplink port a server is using. Two uplink ports are used for Fabric-A- Note: VC 3.70 and later include the Uplink Port information, so it is no longer necessary to look on the upstream SAN switch port information to identify which VC uplink FC port a server is using. Figure 56: Actual distributed logins across the uplink ports of VC-FC Bay Server Server 2 Server 3 HBA HBA2 HBA HBA2 HBA HBA2 VC Domain Fabric- Fabric-2 Server 2 Server 7 Server Server Server 3 VC-FC 8Gb 20-port Module Fabric Fabric Scenario 8: Adding VC fabric uplink ports with Dynamic Login Balancing to an existing VC Fabric-Attach SAN fabric

106 Adding an additional uplink port Add an additional uplink port to the VC SAN Fabric to increase the server bandwidth. Figure 57: Configuration with 3 uplink ports Server Server 2 Server 3 HBA HBA2 HBA HBA2 HBA HBA2 VC Domain Fabric- Fabric-2 VC-FC 8Gb 20-port Module Fabric Fabric 2 Using the GUI Add an additional uplink port from the GUI.. Select the VC-FC SAN fabric to which you want to add a port and click Edit Scenario 8: Adding VC fabric uplink ports with Dynamic Login Balancing to an existing VC Fabric-Attach SAN fabric

107
2. Select the port you want to add and click Apply.
Using the CLI
Add an additional uplink port from the CLI.
1. Log in to the Virtual Connect Manager CLI:
2. Enter the following command to add one uplink port (Port 3) to the existing fabric (Fabric-A-1), with ports 1 and 2 already members of that fabric:
set fabric Fabric-A-1 Ports=1,2,3
107 - Scenario 8: Adding VC fabric uplink ports with Dynamic Login Balancing to an existing VC Fabric-Attach SAN fabric
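Before redistributing the logins, you can optionally confirm from the same CLI session that the third port is now part of the fabric. This is a small sketch using the VCM show fabric command; Fabric-A-1 is this scenario's example fabric name:
show fabric Fabric-A-1
The fabric should now list uplink ports 1, 2 and 3. The existing server logins, however, remain on the original two ports until a redistribution is triggered, as described in the next section.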

108 Login Redistribution The login redistribution is not automatic with the Manual Login Re-Distribution mode. So the current logins may not have changed yet on the module if you are using this mode. Connect back to the Interconnect Bays / VC-FC Module to see that every server is still using the same uplink ports. Note: When Automatic Login Re-Distribution is configured (only supported with the FlexFabric modules) the login redistribution is automatically initiated when the defined time interval expires (for more information, see Consideration and concepts ). Figure 58: Hot adding one uplink port to the existing VC SAN fabric does not automatically redistribute the server logins with VC-FC (except with VC FlexFabric) Server Server 2 Server 3 HBA HBA2 HBA HBA2 HBA HBA2 VC Domain Fabric- Fabric-2 Server 2 Server 7 Server Server Server 3 VC-FC 8Gb 20-port Module Fabric Fabric Scenario 8: Adding VC fabric uplink ports with Dynamic Login Balancing to an existing VC Fabric-Attach SAN fabric

109
Manual Login Redistribution using the GUI
To manually redistribute the logins, go to the SAN Fabrics screen, select the Edit menu corresponding to the SAN Fabric, and click Redistribute.
Manual Login Redistribution using the CLI
Enter the following command to redistribute the logins using the VC CLI:
set fabric MyFabric -loadbalance
Verification
Go back to the VC-FC Module port information to check again which server(s) is connected to each port.
109 - Scenario 8: Adding VC fabric uplink ports with Dynamic Login Balancing to an existing VC Fabric-Attach SAN fabric

110 Figure 59: Newly distributed logins Server Server 2 Server 3 HBA HBA2 HBA HBA2 HBA HBA2 VC Domain Fabric- Fabric-2 Server 2 Server Server Server 3 Server 7 VC-FC 8Gb 20-port Module Fabric Fabric 2 Summary This scenario demonstrates how to add an uplink port to an existing VC-FC fabric with Dynamic Login Balancing enabled. VCM allows you to manually redistribute the HBA host logins to balance the FC traffic through a newly added uplink port. New logins go to the newly added port until the number of logins becomes equal. After this, the login distribution uses a round robin method of assigning new host logins. Logins are not redistributed if you do not use this manual method, with one exception. With the Virtual connect FlexFabric module, automatic login redistribution is available by configuring a link stability interval parameter. This interval defines the number of seconds that the VC module waits for the VC Fabric uplinks to stabilize before the module attempts to re-load balance the server logins. Login redistribution can affect the server traffic because the hosts must re-login before resuming their I/O operations. A smooth transition without much traffic disruption can occur with a redundant Fabric connection and an appropriate server MPIO driver. 0 - Scenario 8: Adding VC fabric uplink ports with Dynamic Login Balancing to an existing VC Fabric-Attach SAN fabric

111
Scenario 9: Cisco MDS Dynamic Port VSAN Membership
Overview
This scenario covers the steps to configure Cisco MDS family FC switches to operate in Dynamic Port VSAN Membership mode. This allows you to configure VSAN membership based on the WWN of an HBA instead of the physical port, which allows the multiple WWNs behind an NPIV-based VC fabric uplink to be placed in separate VSANs. With the standard VSAN membership configuration that is based on physical ports, you must configure all the HBAs on the uplink port to the same VSAN.
Knowledge of Cisco Fabric Manager is needed to complete this scenario. For more information, see the Cisco website. Additional information about setting up this scenario can be found in the MDS Cookbook 3.1 ( df).
Benefits
This configuration allows you to assign the Virtual Connect fabric hosts to different VSANs.
Requirements
This configuration requires:
- A single SAN fabric with one or more switches that support NPIV.
- At least one VC-FC module.
- At least one VC fabric uplink connected to the SAN fabric.
- A Cisco MDS switch running minimum SAN-OS 2.x or NX-OS 4.x.
Additional information, such as oversubscription rates and server I/O statistics, is important to help with server workload distribution across VC-FC ports. For more information about configuring Cisco SAN switches for NPIV, see Appendix D: Cisco MDS SAN switch NPIV configuration.
Installation and configuration
1. Log in to Fabric Manager for the MDS FC Switch.
2. Click DPVM Setup.
111 - Scenario 9: Cisco MDS Dynamic Port VSAN Membership

112
3. From the DPVM Setup Wizard, select the Master Switch, and click Next.
4. To enable the manual selection of device WWN to VSAN assignment, be sure that Create Configuration from Currently Logged in End Devices is unchecked. If you want to accept the current VSAN assignments, check the box. This presents all the WWNs and VSAN assignments from the fabric.
5. Click Next.
112 - Scenario 9: Cisco MDS Dynamic Port VSAN Membership

113 6. Click Insert to add the WWN and VSAN assignments. 7. Select all VC-FC fabric devices in the fabric for interface FC2/0. You must configure each one individually and assign the VSAN ID to which you want that WWN associated. 3 - Scenario 9: Cisco MDS Dynamic Port VSAN Membership

114
8. After all WWNs are configured, click Finish to activate the database. The DPVM database configuration overrides any settings made in the VSAN port configuration at the physical port.
Summary
This scenario gives a quick glance at how to follow the DPVM Setup Wizard to enable the MDS switch for the assignment of VSANs based on the WWN of the device logging into the fabric, and not by the configuration of the physical port. For additional details and steps not covered here, see the MDS 3.x Cookbook (
114 - Scenario 9: Cisco MDS Dynamic Port VSAN Membership
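For reference, on MDS switches running NX-OS the same Dynamic Port VSAN Membership entries can be created from the CLI instead of Fabric Manager. This is a minimal sketch, assuming the DPVM feature is available on your NX-OS release (on SAN-OS 3.x the feature is enabled with "dpvm enable" instead); the pWWN and VSAN number are placeholders, not values taken from this document:
switch# configure terminal
switch(config)# feature dpvm
switch(config)# dpvm database
switch(config-dpvm-db)# pwwn 50:06:0b:00:00:c2:62:00 vsan 10
switch(config-dpvm-db)# exit
switch(config)# dpvm activate
switch(config)# dpvm commit
The dpvm commit step is only needed when CFS distribution is enabled; as with the wizard, the activated DPVM database overrides the VSAN configured on the physical port.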

115
Scenario 10: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module
Overview
HP Virtual Connect FlexFabric-20/40 F8 Modules are the simplest, most flexible way to connect virtualized server blades to data or storage networks. VC FlexFabric-20/40 F8 modules eliminate up to 95% of network sprawl at the server edge with one device that converges traffic inside enclosures and directly connects to external LANs and SANs. Using Flex-10 and Flex-20 technology with Fibre Channel over Ethernet and accelerated iSCSI, these modules converge traffic over high speed 10Gb/20Gb connections to servers with HP FlexFabric Adapters. Each redundant pair of Virtual Connect FlexFabric modules provides 8 adjustable downlink connections (six Ethernet and two Fibre Channel, or six Ethernet and two iSCSI, or eight Ethernet) to dual-port 10Gb/20Gb FlexFabric Adapters on servers. Up to twelve uplinks, with eight Flexport and four QSFP+ interfaces without splitter cables, are available for connection to upstream Ethernet and Fibre Channel switches. Including splitter cables, up to 24 uplinks are available for connection to upstream Ethernet and Fibre Channel. VC FlexFabric-20/40 F8 modules avoid the confusion of traditional and other converged network solutions by eliminating the need for multiple Ethernet and Fibre Channel switches, extension modules, cables and software licenses. Also, Virtual Connect wire-once connection management is built-in, enabling server adds, moves and replacement in minutes instead of days or weeks.
Figure 60: Multiple VC Fabric-Attach SAN Fabrics with different priority tiers connected to the same fabrics (5:1 and 1:1 server-to-uplink ratios on a fully populated enclosure with 16 servers; servers with CNAs connect through Fabric-1-Tier1, Fabric-1-Tier2, Fabric-2-Tier1 and Fabric-2-Tier2 in the VC domain, on VC FlexFabric-20/40 F8 modules, to Fabric 1 and Fabric 2; the QSFP+ ports are Ethernet-only).
This scenario is similar to scenario 5, which has multiple VC Fabric-Attach SAN fabrics with Dynamic Login Balancing connected to the same redundant SAN fabric with different priority tiers, but instead of using the legacy Virtual Connect FlexFabric modules, this scenario uses the Virtual Connect FlexFabric-20/40 F8 Modules with the HP FlexFabric 20Gb Adapters using Flex-20 technology with Fibre Channel over Ethernet, providing adjustable FlexHBAs up to 20Gb speed.
115 - Scenario 10: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module

116
Benefits
Using FlexFabric technology with Fibre Channel over Ethernet, these modules converge traffic over high speed 10Gb and 20Gb connections to servers with HP FlexFabric 10/20Gb Adapters. Each module provides 4 adjustable connections (three data and one storage, or all data) to each 10Gb/20Gb server port. You avoid the confusion of traditional and other converged network solutions by eliminating the need for multiple Ethernet and Fibre Channel switches, extension modules, cables and software licenses. Also, Virtual Connect wire-once connection management is built-in, enabling server adds, moves, and replacement in minutes instead of days.
With the HP Virtual Connect FlexFabric-20/40 F8 Module, you can fine-tune the bandwidth speed of the storage connection between the FlexFabric adapter and the VC FlexFabric module just as you can for Ethernet Flex-10 NIC connections. You can adjust the FlexHBA port in 100Mb increments up to a full 20Gb connection with 20Gb FlexFabric adapters. On external uplinks, Fibre Channel ports will auto-negotiate 2, 4 or 8Gb speeds based on the upstream switch port setting.
This configuration offers the simplest, most converged and flexible way to connect to any network, eliminating 95% of network sprawl and improving performance without disrupting operations. The server-to-uplink ratio is adjustable, up to 2:1 with the FlexFabric-20/40 F8 modules.
Considerations
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a failure occurs somewhere between VC and the external fabric. This automatic failover allows a smooth transition without much disruption to the traffic. However, the hosts must perform re-login before resuming their I/O operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server Operating System can provide a completely transparent transition.
The SAN switch ports connecting to the VC FlexFabric modules must be configured to accept NPIV logins.
Due to the use of NPIV, special features that are available in standard FC switches such as ISL Trunking, QoS or extended distances are not supported with VC-FC and VC FlexFabric.
Figure 61: Physical view (storage array connected by native Fibre Channel through Fabric-1 and Fabric-2 to the VC FlexFabric-20/40 F8 modules, with DCB/FCoE from the modules down to the CNAs in the blade servers).
Virtual Connect FlexFabric-20/40 F8 Uplink Port Mappings
It is important to note how the external uplink ports on the FlexFabric-20/40 F8 module are configured. The graphic below outlines the type and speed each port can be configured as:
116 - Scenario 10: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module

117
Flexible Ports X1-X8: Can be configured as 1/10Gb Ethernet or Fibre Channel. FC speeds supported = 2Gb, 4Gb or 8Gb using 4Gb or 8Gb FC SFP modules; refer to the FlexFabric-20/40 F8 QuickSpecs for a list of supported SFP+ transceivers and DAC.
Ports Q1-Q4: Can be configured only as 10Gb or 40Gb Ethernet (each port can do 1x40, 1x10 or 4x10).
Ports X5/X6 and X7/X8: Are paired and can only be configured to carry the same traffic types (either FC or Ethernet).
Note: Even though the Virtual Connect FlexFabric-20/40 F8 module supports stacking, stacking only applies to Ethernet traffic. FC uplinks cannot be consolidated, as it is not possible to stack the FC ports.
Figure 62: FlexFabric-20/40 F8 Module port configuration, speeds and types
- Four 40Gb QSFP+ Uplink Ports (Q1-Q4): Ethernet only, 1x40, 1x10 or 4x10 (FCoE future upgrade). QSFP+ transceivers: SR4 100m/SR4 300m/LR4/Quad-to-SFP+. QSFP+ cables: Copper/AOC QSFP+ DAC, QSFP+ to 4x10G SFP+.
- Eight Flexible Uplink Ports (X1-X8): Individually configurable as FC/FCoE/Ethernet. Ethernet 1/10Gb: SR/LR/LRM SFP+ transceivers, Copper/AOC DAC. Fibre Channel 2/4/8Gb: Short/Long Wave FC transceivers. FC uplinks are N_Ports, just like legacy VC-FC module uplinks. Flat SAN support with 3PAR.
- 16 connections to FlexFabric CNAs, individually configurable: 10Gb Ethernet or Flex-10/FCoE or Flex-10/iSCSI; 20Gb Ethernet or Flex-20/FCoE or Flex-20/iSCSI.
- Ports X5/X6 and X7/X8 are paired; ports X1-X8 can be enabled for SAN connection.
Note: Since VC 4.01, the Virtual Connect FlexFabric SAN uplinks can be connected to an upstream DCB port with Dual-hop FCoE support. For more information about FCoE support, see the Dual-Hop FCoE with HP Virtual Connect modules Cookbook in the Related Documentation section.
117 - Scenario 10: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module

118
Requirements
With Virtual Connect FlexFabric-20/40 F8, this configuration requires:
- Four VC Fabric-Attach SAN fabrics with two SAN switches that support NPIV
- At least two VC FlexFabric-20/40 F8 modules
- A minimum of four VC fabric uplink ports connected to the redundant SAN fabric
For more information about configuring FC switches for NPIV, see "Appendix C: Brocade SAN switch NPIV configuration", "Appendix D: Cisco MDS SAN switch NPIV configuration" or "Appendix E: Cisco NEXUS SAN switch NPIV configuration", depending on your switch model. Additional information, such as oversubscription rates and server I/O statistics, is important to help with server workload distribution across VC FlexFabric-20/40 F8 FC ports.
Installation and configuration
Switch configuration
Appendices C, D and E provide the steps required to configure NPIV on the upstream SAN switch in a Brocade, Cisco MDS or Cisco Nexus Fibre Channel infrastructure.
VC CLI commands
In addition to the GUI, you can complete many of the configuration settings within VC using a CLI command set. In order to connect to VC via a CLI, open an SSH connection to the IP address of the active VCM. Once logged in, VC provides a CLI with help menus. The Virtual Connect CLI guide also provides many useful examples. The following scenario provides the CLI commands needed to configure each setting of the VC.
Configuring the VC FlexFabric-20/40 F8 module
With Virtual Connect FlexFabric-20/40 F8 modules, you can use only uplink ports X1 to X8 for Fibre Channel connectivity. Physically connect some uplink ports (X1, X2, X3 or X4) as follows:
- On the first VC FlexFabric-20/40 F8 module to switch ports in SAN Fabric 1A
- On the first FlexFabric-20/40 F8 module to switch ports in SAN Fabric 2A
- On the second FlexFabric-20/40 F8 module to switch ports in SAN Fabric 1B
- On the second FlexFabric-20/40 F8 module to switch ports in SAN Fabric 2B
118 - Scenario 10: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module

119
Defining a new VC SAN Fabric via GUI
Configure the VC FlexFabric-20/40 F8 module from the HP Virtual Connect Manager home screen:
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. In the Define SAN Fabric window, provide the VC Fabric Name, in this case Fabric-1-Tier1. Leave the default Fabric-Attach fabric type.
3. Add the uplink ports that will be connected to this fabric, and then click Apply. The following example uses Port X1, Port X2 and Port X3 from the first Virtual Connect FlexFabric module (Bay 1).
119 - Scenario 10: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module

120
Show Advanced Settings is only available with VC FlexFabric modules and provides the option to enable Automatic Login Re-Distribution. The Automatic Login Re-Distribution method allows FlexFabric modules to fully control the login allocation between the servers and Fibre Channel uplink ports. A FlexFabric module will automatically re-balance the server logins once every time interval defined in the Fibre Channel WWN Settings (Miscellaneous tab).
4. Select either Manual or Automatic and click Apply.
If you select Manual Login Re-Distribution, the login allocation between the servers and Fabric uplink ports never changes, even after the recovery of a port failure. This remains true until an administrator decides to initiate the server login re-balancing. To initiate server login re-balancing, go to the SAN Fabrics screen, select the Edit menu corresponding to the SAN Fabric, and click Redistribute.
On the SAN Fabrics screen, click Add to create the second fabric:
120 - Scenario 10: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module

121
5. Create a new VC Fabric-Attach fabric named Fabric-1-Tier2. Under Enclosure Uplink Ports, add Bay 1, Port X4, and then click Apply.
You have created two VC fabrics, each with uplink ports allocated from one VC FlexFabric module in Bay 1. One of the fabrics is configured with three 4Gb uplinks, and one is configured with one 4Gb uplink.
121 - Scenario 10: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module

122
6. Create two additional VC Fabric-Attach SAN Fabrics, Fabric-2-Tier1 and Fabric-2-Tier2, attached this time to the second VC FlexFabric module:
- Fabric-2-Tier1 with 3 ports
- Fabric-2-Tier2 with one port
You have created four VC fabrics, two for the server group Tier1 with three uplink ports and two for the guaranteed-throughput server Tier2 with one uplink port.
122 - Scenario 10: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module

123
Defining a new VC SAN Fabric using the CLI
Configure the VC FlexFabric modules from the CLI:
1. Log in to the Virtual Connect Manager CLI:
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1-Tier1 Bay=1 Ports=1,2,3
add fabric Fabric-1-Tier2 Bay=1 Ports=4
add fabric Fabric-2-Tier1 Bay=2 Ports=1,2,3
add fabric Fabric-2-Tier2 Bay=2 Ports=4
3. When complete, run the show fabric command.
Verification of the SAN Fabrics configuration
Make sure all of the SAN fabrics belonging to the same Bay are connected to the same core FC SAN fabric switch:
1. Go to the SAN Fabrics screen.
123 - Scenario 10: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module

124 2. The Connected To column displays the upstream SAN fabric switch to which the VC module uplink ports are connected. Verify that the entries from the same Bay number are all the same, indicating a single SAN fabric. For additional verification and troubleshooting steps, see Appendix F. Same upstream fabric Blade Server configuration See Appendix B for server profile configuration steps with VC FlexFabric. Summary This VC FlexFabric-20/40 F8 scenario shows how you can create multiple VC SAN fabrics that are all connected to the same redundant SAN fabric. This configuration enables you to control the throughput for a particular application or set of blades Scenario 0: Fabric-Attach SAN fabrics connectivity with HP Virtual Connect FlexFabric-20/40 F8 module

125
Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16G 24-Port Fibre Channel Module
Overview
This scenario describes the setup and requirements for installing HP Virtual Connect 16G 24-Port Fibre Channel Modules with Enhanced N_Port Trunking support.
Figure 63: HP Virtual Connect 16G 24-Port Fibre Channel Modules connected to Brocade switches using N_Port Trunking (8:1 server-to-uplink ratio on a fully populated enclosure with 16 servers; Fabric-1 and Fabric-2 in the VC domain trunked to Brocade Fabric 1 and Brocade Fabric 2).
This scenario with two VC Fabric-Attach SAN fabrics is similar to scenario 2 but uses the Enhanced N_Port Trunking support of the HP Virtual Connect 16G 24-Port Fibre Channel Module with external Fabric OS switches (Brocade or HP B-series switches).
Note: With any other type of Fabric switches, Dynamic Login Balancing Distribution must be used.
Benefits
Enhanced N_Port Trunking support with external Fabric OS switches (Brocade or HP B-series switches) provides higher bandwidth to enable demanding applications and high-density server virtualization. A single N_Port trunk made up of up to eight SAN ports can provide a total of up to 128 Gbps balanced throughput.
The HP Virtual Connect 16G 24-Port Fibre Channel Module introduces 16Gb FC technology on both internal and external facing ports, enabling high-performance connectivity with an aggregate bandwidth of 896 Gbps (28 ports x 16 Gbps x 2 for full duplex). 16Gb FC technology provides enough capacity for emerging technologies and new hardware demands while lowering costs through fewer SFPs and cables. With eight 16Gb external SAN-facing ports, the server-to-uplink ratio can reach up to 2:1 for best performance.
125 - Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16G 24-Port Fibre Channel Module

126
Considerations
The HP Virtual Connect 16Gb 24-Port Fibre Channel Module requires a Virtual Connect Ethernet Module installed in the system for management and administration.
A multipathing I/O driver must be running on the server Operating System to prevent server I/O disruption when a failure occurs somewhere between VC and the external fabric. This automatic failover allows a smooth transition without much disruption to the traffic. However, the hosts must perform re-login before resuming their I/O operations. Only a redundant SAN fabric with a properly configured multipathing I/O driver running on the server Operating System can provide a completely transparent transition.
Compatibility support
- The HP BladeSystem c7000 Platinum Enclosure is required to permit 16 Gbps speed on internal ports. Other HP BladeSystem c7000 Enclosures will have a maximum speed of 8 Gbps.
- The VC 16G 24-Port Fibre Channel Module and VC 8G Fibre Channel Module (20-Port or 24-Port) are not supported side-by-side nor are they swappable.
- The VC 16G 24-Port Fibre Channel Module and VC 8G Fibre Channel Module (20-Port or 24-Port) are not supported together in the same bay group in a stacked domain.
- Double Dense mode is not supported with the VC 16G 24-Port Fibre Channel Module.
- The VC 16G 24-Port Fibre Channel Module is not supported in c3000 enclosures.
- The VC 16G 24-Port Fibre Channel Module does not support 4Gb FC HBAs in any server.
- The VC 16G 24-Port Fibre Channel Module supports 8Gb FC HBAs in G7 servers or later.
- HP LPe1605 16Gb Fibre Channel HBA (Emulex) and HP QMH2672 16Gb FC HBA (QLogic) are supported in Gen8 servers or later.
For more information, refer to the Virtual Connect 16Gb 24-port Fibre Channel module QuickSpecs.
Figure 64: Physical view (storage array attached to Brocade or HP B-series switches; switch F_Ports trunked to the N_Ports of the VC-FC 16Gb 24-Port modules; HBAs in blade servers 1 to 16).
126 - Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16G 24-Port Fibre Channel Module

127
Requirements
This configuration requires:
- Two VC 16G 24-Port Fibre Channel Modules
- A minimum of four (2x2) VC fabric uplink ports
- Two external Fabric OS switches (Brocade or HP B-series switches) with an ISL Trunking license
- VC 4.40 or later for 16Gb 24-Port Fibre Channel Module support
Trunking Requirements
N_Port Trunking allows the creation of a trunk group between N_Ports on the VC 16G 24-Port Fibre Channel Module and F_Ports on a Fabric OS switch (Brocade or HP B-series switches). On the Fabric OS switch side, it requires an F_Port trunking configuration. F_Port trunking was originally only supported when Brocade ports were connected to a Brocade Access Gateway module or to a Brocade Host Bus Adapter. Starting with VC 4.40, the VC 16G 24-port Fibre Channel Module is added to this supported list.
Note: This feature does not require any particular license on the Virtual Connect Module, but you must install an ISL Trunking license (usually included in the Power Pack+ Software Bundle) on the external Fabric OS switch.
Note: When the external switch is not configured for trunking, VC uses the legacy Dynamic Login Balancing Distribution.
127 - Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16G 24-Port Fibre Channel Module

128
N-port Trunking vs. Dynamic Login Balancing Distribution
The following table describes the pros and cons of Enhanced N-port Trunking and Dynamic Login Balancing Distribution.
Table 3: Pros & Cons between N-port Trunking and Dynamic Login Balancing Distribution
Enhanced N-port Trunking
- Pros: Optimized bandwidth utilization; optimizes fabric-wide performance and load balancing distribution; no traffic disruption during uplink failure.
- Cons: Only compatible with Fabric OS (Brocade) switches; requires additional configuration steps on the external SAN switches; requires a Brocade ISL Trunking license.
Dynamic Login Balancing Distribution
- Pros: Does not require any external switch configuration; compatible with any switch vendor; does not require any license.
- Cons: Wasted bandwidth; limited performance; short traffic disruption during uplink failure; some limitations during workload peaks.
The following table describes the differences between N-port Trunking and Dynamic Login Balancing Distribution.
Table 4: N-port Trunking vs. Dynamic Login Balancing Distribution comparison
Traffic Distribution
- Enhanced N-port Trunking: Optimizes uplink usage by evenly distributing traffic across all uplinks at the frame level; uses the same mechanism as E_Port to E_Port trunking; managed as a single logical link; maintains in-order delivery to ensure data reliability.
- Dynamic Login Balancing Distribution: Routes are assigned to uplinks with the least number of logins, or in a round-robin fashion when the number of logins is equal; load balancing does not look at port utilization; server traffic stays on the dedicated server's uplink.
Performance
- Enhanced N-port Trunking: Server performance is limited to the speed of the entire trunk; each uplink inside the trunk is loaded equally; provides a better high-performance solution for network and data intensive applications; optimizes fabric-wide performance and load balancing with Dynamic Path Selection (DPS).
- Dynamic Login Balancing Distribution: Server performance is limited to the speed of one uplink; some uplinks can experience congestion while others are underutilized.
Bandwidth
- Enhanced N-port Trunking: Optimized bandwidth utilization.
- Dynamic Login Balancing Distribution: Wasted bandwidth by less efficient traffic routing.
Transient workload peaks impact
- Enhanced N-port Trunking: Much less likely to impact the performance of other servers.
- Dynamic Login Balancing Distribution: Likely to impact the performance of other servers, as uplink port utilization is not a login balancing criterion.
Availability / Fault Tolerance
- Enhanced N-port Trunking: Enables seamless failover during individual link failures; prevents reassignment of the Address Identifier when N_Ports go offline.
- Dynamic Login Balancing Distribution: Short traffic disruption during individual link failures (kept seamless by the MPIO driver when the second path is available); failover during individual link failures causes hosts to re-login.
Max server performance with a 16G HBA and two 8G uplinks
- Enhanced N-port Trunking: Limited to the speed of the logical link; max server bandwidth: 2x8G=16G.
- Dynamic Login Balancing Distribution: Limited to the speed of one uplink; max server bandwidth: 1x8G=8G.
128 - Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16G 24-Port Fibre Channel Module

129 The use of Dynamic Login Balancing Distribution can impact the performance of servers whereas N_Port Trunking can significantly optimize the overall bandwidth utilization as illustrated in Figure 65. Figure 65 : Comparing the server bandwidth impact with Dynamic Login Balancing Distribution vs. N_Port Trunking With Dynamic Login Balancing Distribution 4 Gbps 4 Gbps With N_Port Trunking 5.5 Gbps 4 Gbps.5 Gbps.5 Gbps 4 Gbps 4.5 Gbps Congestion 8G 8G 24Gbps 5.5 Gbps 4 Gbps VC-F C 6G Module 5.5 Gbps 4 Gbps VC-F C 6G Module.5 Gbps.5 Gbps 4.5 Gbps 4.5 Gbps N_Port Trunking dynamically performs load sharing, at the frame level, across the VC uplinks with the adjacent Brocade switch. One uplink port is used to assign traffic for the trunking group, and is referred to as the trunking master. The trunking master represents and identifies the entire trunking group. The rest of the group members are referred to as slave links that help the trunking master direct traffic across uplinks, allowing efficient and balanced in-order communication. Figure 66 : With trunking only the N_Port Master has the F_Ports mapped to it Hosts VC-FC 6G Modules Switches F _P ort With Dynamic Login Balancing Distribution F _P ort F _P ort N_P ort N_P ort Area - F _P ort (NP IV) Area 2 - F _P ort (NP IV) F _P ort N_P ort Area 3 - F _P ort (NP IV) F _P ort With N_Port Trunking Only the N_Port master will have the F_Ports mapped to it F _P ort F _P ort N_P ort Master N_P ort Slave Area - F _P ort Master (NP IV) Area - F _P ort Slave (NP IV) F _P ort N_P ort Slave Area - F _P ort Slave (NP IV) With N_Port Trunking, whenever a link within the trunk goes offline or becomes disabled, the trunk remains fully functional, the traffic is automatically rerouted in less than a second and there are no reconfiguration requirements. A failure does not completely break the pipe, but simply makes the pipe thinner. As a result, data traffic is much less likely to be affected by link failures, and the bandwidth automatically increases when the link is repaired Scenario : Enhanced N-port Tunking with HP Virtual Connect 6G 24-Port Fibre Channel Module

130
Trunking support
- N_Port Trunking is only supported between the VC 16G 24-Port Fibre Channel Module and F_Ports on a Fabric OS switch (Brocade or HP B-series switches).
- The Fabric OS switch must be running in Native mode. You cannot configure trunking between the VC 16G 24-port Fibre Channel Module and the F_Ports of a Fabric OS switch running in Access Gateway mode.
- All of the ports in a trunk group must belong to the same port group.
- N_Port Trunking is only supported between ports belonging to the same VC Module and ports belonging to the same external Fabric OS switch.
- An FC trunk can only be formed between a 16G VC-FC Module and one Brocade device.
- Trunking between the VC 16Gb 24-port Fibre Channel Module and the HP B-series SAN switches must be properly configured. The VC N_Ports do not participate in the trunking handshake if there is a misconfiguration in the upstream switch. This causes the misconfigured upstream switch ports to be disabled, blocking communication with the attached VC FC uplink ports.
- Only 1 VC SAN fabric uplink set can be created if trunking is configured on the ToR. Additional SAN fabrics or uplink sets will only function correctly if they are not connected to a ToR port with trunking configured.
For more information, see the Brocade Access Gateway Administrator's Guide: df
All of the ports in a trunk group must meet the following conditions:
- They must be running at the same speed.
- They must belong to the same port group.
- They must be configured for the same distance.
- The F_Ports on the Fabric OS switch must have the same encryption, compression, QoS, and FEC settings.
- There must be a direct connection between the Fabric OS switch and the VC FC Module.
For more information about F_Port trunking on a Fabric OS switch, see the latest version of the Brocade Fabric OS Administrator's Guide.
130 - Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16G 24-Port Fibre Channel Module
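Before building the trunk, it is worth confirming that the candidate F_Ports satisfy the conditions listed above. The following is a minimal sketch using standard Fabric OS commands; ports 2 and 3 are placeholders chosen to match the example in the next section:
switch:admin> licenseshow            (confirm that the ISL Trunking license is installed)
switch:admin> portcfgshow 2
switch:admin> portcfgshow 3          (compare the speed, trunk, QoS, compression, encryption and FEC settings on both ports)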

131
Installation and configuration
Configuring NPIV on the External Switch
It is necessary to configure the external Fabric OS switches for NPIV support; see "Appendix C: Brocade SAN switch NPIV configuration" for the steps required to configure NPIV on a Brocade SAN switch.
Configuring F_Port trunking on the External Switch
In order to create a trunk between VC and Brocade, it is necessary to configure the external Brocade switches for F_Port trunking, but there is no configuration required on the Virtual Connect Fibre Channel Module. For each external switch, you must first configure an F_Port trunk group and statically assign an Area_ID to the trunk group. Assigning a Trunk Area (TA) to a port or trunk group enables F_Port masterless trunking on that port or trunk group. This section describes the configuration steps you must perform on the two switches.
1. Connect to the first switch and log in using an account assigned to the admin role.
2. Ensure that the switch has the trunking licenses enabled.
3. Configure both ports for trunking by using the portcfgtrunkport port mode command.
switch:admin> portcfgtrunkport 2 1
switch:admin> portcfgtrunkport 3 1
Note: Mode 1 enables trunking on the specified port; mode 0 disables trunking.
Note: Ensure that the ports within a trunk have the same speed. For more information on this command, see help portcfgtrunkport.
4. Disable the ports to be used for trunking by using the portdisable command.
switch:admin> portdisable 2
switch:admin> portdisable 3
5. Enable the trunk on the ports by using the porttrunkarea command. The following example enables a TA (Trunk Area) for ports 2 and 3 with a port index of 2.
switch:admin> porttrunkarea --enable 2-3 -index 2
Trunk index 2 enabled for ports 2 and 3.
6. Enable the ports specified in step 3 using the portenable command.
switch:admin> portenable 2
switch:admin> portenable 3
7. Repeat the same steps on switch 2. You can use an identical or different port index.
Connecting the VC module
Physically connect the uplink ports on the first VC 16G 24-Port Fibre Channel Module to switch ports in Brocade Fabric 1.
Physically connect the uplink ports on the second VC 16G 24-Port Fibre Channel Module to switch ports in Brocade Fabric 2.
Defining a new VC SAN Fabric via GUI
As previously stated, there is no configuration required on the Virtual Connect Fibre Channel Module in order to create a trunk between VC and Brocade. So we will simply create a standard VC SAN Fabric using the VC 16G 24-Port Fibre Channel module.
131 - Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16G 24-Port Fibre Channel Module

132
1. From the HP Virtual Connect Manager GUI, select Define SAN Fabric from the HP Virtual Connect Home page.
2. Create a first VC Fabric named Fabric-1 and add Port 1 and Port 2 from the first VC 16G 24-Port Fibre Channel Module (Bay 3), and then click Apply.
3. On the SAN Fabrics screen, click Add to create the second fabric:
132 - Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16G 24-Port Fibre Channel Module

133
4. Create a second VC Fabric named Fabric-2 and add Port 1 and Port 2 from the second VC 16G 24-Port Fibre Channel Module (Bay 4), and then click Apply.
5. You have created two VC Fabric-Attach SAN fabrics, each with two uplink ports allocated from the VC Modules in Bay 3 and Bay 4.
133 - Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16G 24-Port Fibre Channel Module

134
Defining a new VC SAN Fabric using the CLI
To configure the VC 16G 24-Port Fibre Channel Modules from the CLI:
1. Log in to the Virtual Connect Manager CLI:
2. Enter the following commands to create the fabrics and assign the uplink ports:
add fabric Fabric-1 Bay=3 Ports=1,2
add fabric Fabric-2 Bay=4 Ports=1,2
3. When complete, run the show fabric command.
Verification of the trunking configuration
Virtual Connect Manager does not display any trunking states; however, trunking monitoring and verification can be done on the Brocade switches:
1. Connect to the first switch and log in using an account assigned to the admin role.
2. Using the porttrunkarea --show enabled command, verify that the ports you enabled for F_Port trunking appear in the output:
switch:admin> porttrunkarea --show enabled
Port   Type     State    Master   TI   DI
(the two trunked ports are listed, one as F-port Slave and one as F-port Master)
3. Enter switchshow to display the switch and port information.
switch:admin> switchshow
switchname: SW6505
switchtype: 118.1
switchstate: Online
switchmode: Native
switchrole: Principal
switchdomain: 1
switchid: fffc01
switchwwn: 10:00:00:27:f8:49:6c:da
zoning: ON (Test_Compellent)
switchbeacon: OFF
FC Router: OFF
134 - Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16G 24-Port Fibre Channel Module

135
FC Router BB Fabric ID: 1
Address Mode: 0
Index Port Address Media Speed State   Proto
==============================================
 ...   ...   ...    id   N8    Online   FC  F-Port  1 N Port + 1 NPIV public
 ...   ...   ...    id   N8    No_Light FC
 ...   ...   ...    id   N16   Online   FC  F-Port  (Trunk port, master is Port 3)
 ...   ...   ...    id   N16   Online   FC  F-Port  (Trunk master)
 ...   ...   ...    id   N8    No_Light FC
 ...   ...   ...    id   N8    No_Light FC
(the index, port and address columns were not preserved in this transcription)
4. Enter porttrunkarea --show trunk to display the trunking information.
switch:admin> porttrunkarea --show trunk
Trunk Index 2: 3->8 sp: 16.000G bw: 32.000G deskew 15 MASTER
   Tx: Bandwidth 32.00Gbps, Throughput 0.00bps (0.00%)
   Rx: Bandwidth 32.00Gbps, Throughput 0.00bps (0.00%)
   Tx+Rx: Bandwidth 64.00Gbps, Throughput 0.00bps (0.00%)
  2->7 sp: 16.000G bw: 16.000G deskew 15
   Tx: Bandwidth 32.00Gbps, Throughput 0.00bps (0.00%)
   Rx: Bandwidth 32.00Gbps, Throughput 0.00bps (0.00%)
   Tx+Rx: Bandwidth 64.00Gbps, Throughput 0.00bps (0.00%)
Note: Additional trunking information is also available under VCM, but only when a server is logged into the fabric.
Blade Server configuration
See Appendix A for server profile configuration steps. See Appendix F and Appendix G for verification and troubleshooting steps.
Trunking information under VCM
Once a server profile is created and a server is logged into the fabric, Virtual Connect displays the Trunk Area index in the Interconnect Bays / Module / Server Ports section:
1. Click Interconnect Bays at the bottom of the left navigation menu.
2. Click on the bay number corresponding to the 16G Fibre Channel Module.
135 - Scenario 11: Enhanced N-port Trunking with HP Virtual Connect 16G 24-Port Fibre Channel Module

136 3. Then go to the Server Ports section The Uplink Port column displays the Trunk Area Index configured on the Brocade switch (i.e. 2 for both ports). Note: When trunking is neither configured nor enabled on the Brocade switch, Dynamic Login Balancing Distribution is used and the Uplink Port column displays the uplink port used by each profile. Summary This N_Port Trunking scenario provides a high-performance solution for network and data-intensive applications by optimizing application performance and availability across the network, and simplifying network design and management Scenario : Enhanced N-port Tunking with HP Virtual Connect 6G 24-Port Fibre Channel Module

137 Appendix A: Blade Server configuration with Virtual Connect Fibre Channel Modules Defining a Server Profile with FC Connections, using the GUI. On the Virtual Connect Manager screen, click Define / Server Profile to create a Server Profile. 2. Enter a Profile name. 3. In the Network Connections section, select the required networks. 4. In the FC HBA Connections section, expand the Port drop down menu and select the first fabric Appendix A: Blade Server configuration with Virtual Connect Fibre Channel Modules

138
5. Then expand the Port 2 drop-down menu and select the second fabric.
Note: HP recommends using redundant FC connections for failover to improve availability. If a SAN failure occurs, the multipath connection uses the alternate path so that servers can still access data. FC performance can also be improved with I/O load balancing mechanisms.
6. The following screen illustrates the creation of the Profile_1 server profile.
Note: WWN addresses are provided by Virtual Connect. Although this is not recommended, you can override this setting and use the WWNs that were assigned to the hardware during manufacture by selecting the Use Server Factory Defaults for Fibre Channel WWNs checkbox. This action applies to every Fibre Channel connection in the profile.
7. Assign the Profile to a Server Bay and click Apply.
Defining a Server Profile with FC Connections, via CLI
You can copy and paste the following commands into an SSH based CLI session (the command syntax might be different with an earlier VC version).
# Create and Assign Server Profile Profile_1 to server bay 1
add profile Profile_1
set enet-connection Profile_1 1 pxe=enabled Network=1-management-vlan
set enet-connection Profile_1 2 pxe=disabled Network=2-management-vlan
set fc-connection Profile_1 1 Fabric=Fabric-A-1 Speed=Auto
138 - Appendix A: Blade Server configuration with Virtual Connect Fibre Channel Modules

139
set fc-connection Profile_1 2 Fabric=Fabric-B-1 Speed=Auto
assign profile Profile_1 enc0:1
Defining a Boot from SAN Server Profile using the GUI
1. On the Virtual Connect Server Profile screen, click the Fibre Channel Boot Parameters checkbox to configure the Boot from SAN parameters.
2. A new section pops up; click the drop-down arrow in the SAN Boot box for Port 1, then select the boot order: Primary.
3. Enter a valid Boot Target name (WWN) and LUN number for the Primary Port.
4. Click the second port drop-down menu, and select Secondary.
5. Enter a valid Boot Target name and LUN number for the Secondary Port, and then click Apply.
139 - Appendix A: Blade Server configuration with Virtual Connect Fibre Channel Modules

140 Note: Target Port name can be entered with the following format: mm-mm-mm-mm-mm-mm-mm-mm or mm:mm:mm:mm:mm:mm:mm:mm or mmmmmmmmmmmmmmmm 6. Assign the profile to a server bay, and then click Apply. 7. The server can now be powered on (using the OA, the ilo, or the Power button) Note: To view the Option ROM boot details on servers with recent System BIOS, press any key as the system is booting Appendix A: Blade Server configuration with Virtual Connect Fibre Channel Modules

141
8. While the server starts up, a screen similar to the following appears: Boot from SAN disk correctly detected during POST.
Defining a Boot from SAN Server Profile via CLI
You can copy and paste the following commands into an SSH based CLI session (the command syntax might be different with an earlier VC version).
# Create and Assign Server Profile BfS_Profile_1 Booting from SAN to server bay 1
add profile BfS_Profile_1
set enet-connection BfS_Profile_1 1 pxe=enabled Network=1-management-vlan
set enet-connection BfS_Profile_1 2 pxe=disabled Network=2-management-vlan
set fc-connection BfS_Profile_1 1 Fabric=Fabric_1 Speed=Auto
set fc-connection BfS_Profile_1 1 BootPriority=Primary BootPort=50:01:43:80:02:5D:19:78 BootLun=1
set fc-connection BfS_Profile_1 2 Fabric=Fabric_2 Speed=Auto
set fc-connection BfS_Profile_1 2 BootPriority=Secondary BootPort=50:01:43:80:02:5D:19:7D BootLun=1
141 - Appendix A: Blade Server configuration with Virtual Connect Fibre Channel Modules

142 Appendix B: Blade Server configuration with Virtual Connect FlexFabric Modules Defining a Server Profile with FCoE Connections, using the GUI. On the Virtual Connect Manager screen, click Define, Server Profile to create a Server Profile. 2. Enter a Profile name. 3. In the Network Connections section, select the required networks Appendix B: Blade Server configuration with Virtual Connect FlexFabric Modules

143
4. Select the FC SAN Name box of the FCoE HBA Connections:
- For Port 1, select Fabric_1.
- For Port 2, select Fabric_2.
Note: HP recommends using redundant Fabric connections for failover to improve availability. If a SAN fails, the multipath connection uses the alternate path so that servers can still access their data. You can also improve FC performance using I/O load balancing mechanisms. Also, you do not need to configure any iSCSI Connection when using a single CNA, because CNA Physical Function 2 can only be configured as Ethernet or FCoE or iSCSI.
143 - Appendix B: Blade Server configuration with Virtual Connect FlexFabric Modules

144
5. The following screen illustrates the creation of the Profile_1 server profile.
Note: WWNs for the domain are provided by Virtual Connect. You can override this setting and use the WWNs that were assigned to the hardware during manufacture by selecting the Use Server Factory Defaults for Fibre Channel WWNs checkbox. This action applies to every Fibre Channel connection in the profile.
6. Assign the Profile to a Server Bay and click Apply.
Defining a Server Profile with FCoE Connections, via CLI
You can copy and paste the following commands into an SSH based CLI session with Virtual Connect v3.5; however, the command syntax might be different with an earlier VC version.
# Create and Assign Server Profile Profile_1 to server bay 1
add profile Profile_1
set enet-connection Profile_1 1 pxe=enabled Network=1-management-vlan
set enet-connection Profile_1 2 pxe=disabled Network=2-management-vlan
set fcoe-connection Profile_1:1 Fabric=Fabric_1 SpeedType=4Gb
set fcoe-connection Profile_1:2 Fabric=Fabric_2 SpeedType=4Gb
assign profile Profile_1 enc0:1
144 - Appendix B: Blade Server configuration with Virtual Connect FlexFabric Modules

145 Defining a Boot from SAN Server Profile using the GUI. From the FCoE HBA Connections section of the Virtual Connect Server Profile screen, select Fibre Channel Boot Parameters to configure the Boot from FCoE SAN parameters. 2. From the FCoE HBA Connections pop up, click the drop-down arrow in the SAN Boot box for Port and select the boot order Primary. 3. Enter a valid Boot Target name and LUN number for the Primary Port. 4. Optionally, select the second port, click the drop-down arrow in the SAN Boot box, and then select the boot order Secondary. 5. Enter a valid Boot Target name and LUN number for the Secondary Port and click Apply. Note: Target Port name can be entered with the following format: mm-mm-mm-mm-mm-mm-mm-mm or mm:mm:mm:mm:mm:mm:mm:mm or mmmmmmmmmmmmmmmm 45 - Appendix B: Blade Server configuration with Virtual Connect FlexFabric Modules

146
6. Assign the profile to a server bay and click Apply. You can now power on the server (using either the OA, the iLO, or the Power button).
Note: To view the Option ROM boot details on servers with a recent System BIOS, press any key as the system boots up.
7. While the server starts up, a screen similar to this one should be displayed: SAN volume correctly detected during POST by the two adapters.
Defining a Boot from SAN Server Profile using the CLI
You can copy and paste the following commands into an SSH based CLI session with Virtual Connect v3.5; however, the command syntax might be different with an earlier VC version.
# Create and Assign Server Profile BfS_Profile_1 Booting from SAN to server bay 1
add profile BfS_Profile_1
set enet-connection BfS_Profile_1 1 pxe=enabled Network=1-management-vlan
set enet-connection BfS_Profile_1 2 pxe=disabled Network=2-management-vlan
set fcoe-connection BfS_Profile_1:1 Fabric=Fabric_1 SpeedType=4Gb
set fcoe-connection BfS_Profile_1:2 Fabric=Fabric_2 SpeedType=4Gb
set fcoe-connection BfS_Profile_1:1 BootPriority=Primary BootPort=50:01:43:80:02:5D:19:78 BootLun=1
set fcoe-connection BfS_Profile_1:2 BootPriority=Secondary BootPort=50:01:43:80:02:5D:19:7D BootLun=1
assign profile BfS_Profile_1 enc0:1
146 - Appendix B: Blade Server configuration with Virtual Connect FlexFabric Modules

Appendix C: Brocade SAN switch NPIV configuration
Enabling NPIV using the GUI
1. Log on to the SAN switch using the IP address and a web browser. After you are authenticated, the switch home page appears.
2. Click Port Admin. The Port Administration screen appears.
3. If you are in Basic Mode, click Show Advanced Mode in the top right corner. When you are in Advanced Mode, the Show Basic Mode button appears.
4. Select the port you want to enable with NPIV, in this case, Port 3. When NPIV is disabled, the NPIV Enabled field shows a value of false.

5. To enable NPIV on this port, click Enable NPIV under the General tab, and then confirm your selection. The NPIV Enabled entry shows a value of true.
Enabling NPIV using the CLI
1. Initiate a telnet session to the switch, and then authenticate your account. The Brocade Fabric OS CLI appears.
2. To enable or disable NPIV on a port-by-port basis, use the portcfgnpivport command. For example, to enable NPIV on port 3, enter the following command:
Brocade:admin> portcfgnpivport 3 1
where 1 indicates that NPIV is enabled (0 indicates that NPIV is disabled).

3. To be sure that the port is enabled, enter the switchshow command.
NPIV is enabled and detected on that port
4. To be sure that NPIV is enabled and operational on a specific port, use the portshow command. For example, to display information for Port 3, enter the following:
Brocade:admin> portshow 3
In the portWwn of device(s) connected entry, more than one HBA appears. This indicates a successful implementation of VC-FC. On the enclosure, two server blade HBAs are installed and powered on, and either an HBA driver is loaded or the HBA BIOS utility is active. The third WWN on the port is the VC module (currently, all VC-FC modules use the 20:00 range).
Two servers are currently connected
Port WWN of the VC-FC
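In addition to switchshow and portshow, the per-port configuration can be reviewed with portcfgshow. A minimal sketch for port 3 (the NPIV capability setting appears in the output):
Brocade:admin> portcfgshow 3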

Recommendations
When Fibre Channel uplink ports on a VC-FC 8Gb 20-port module or a VC FlexFabric module are configured to operate at 8Gb speed and are connecting to HP B-series Fibre Channel SAN switches, the minimum supported version of the Brocade FOS is v6.4.x. In addition, FillWord on those switch ports must be configured with option Mode 3 to prevent connectivity issues at 8Gb speed.
This setting is only required:
- with the VC-FC 8Gb 20-port module when running VC 3.70 or earlier.
- with the VC FlexFabric module when running VC 3.60 or earlier.
Note: With the VC-FC 8Gb 24-port module, FillWord is not required.
On HP B-series FC switches, use the portcfgfillword command (portcfgfillword <port#> <Mode>) to configure this setting, as shown in the example after the table.
Mode    Link Init/Fill Word
Mode 0  IDLE / IDLE
Mode 1  ARBF / ARBF
Mode 2  IDLE / ARBF
Mode 3  If ARBF / ARBF fails, use IDLE / ARBF
Modes 2 and 3 are compliant with FC-FS-3 specifications (standards specify the IDLE/ARBF behavior of Mode 2, which is used by Mode 3 if ARBF/ARBF fails after 3 attempts). For most environments, Brocade recommends using Mode 3 because it provides more flexibility and compatibility with a wide range of devices. If the default setting or Mode 3 does not work with a particular device, contact your switch vendor for further assistance.
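For example, to set FillWord Mode 3 on switch port 3 and then review the port configuration, a minimal sketch (port 3 is assumed to be the switch port connected to the VC uplink):
Brocade:admin> portcfgfillword 3 3
Brocade:admin> portcfgshow 3
The Fill Word entry in the portcfgshow output confirms the active mode for that port.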

Appendix D: Cisco MDS SAN switch NPIV configuration
Enabling NPIV using the GUI
Most Cisco MDS Fibre Channel switches running SAN-OS 3.1(2a) or later support NPIV. To enable NPIV on Cisco Fibre Channel switches:
1. From the Cisco Device Manager, click Admin, and then select Feature Control.
2. From the Feature Control screen, click npiv.
3. In the Action column, select enable, and then click Apply.
4. Click Close to return to the Device Manager screen.
5. To verify that NPIV is enabled on a specific port, double-click the port you want to check.

6. Click the FLOGI tab. In the PortName column, more than one HBA appears. This indicates a successful implementation of VC-FC.

Enabling NPIV using the CLI
1. To verify that NPIV is enabled, enter the following command (an alternative direct check is shown after this procedure):
CiscoSANswitch# show running-config
If the npiv enable entry does not appear in the list, NPIV is not enabled on the switch.
2. To enable NPIV, use the following commands from global config mode:
CiscoSANswitch# config terminal
CiscoSANswitch(config)# npiv enable
CiscoSANswitch(config)# exit
CiscoSANswitch# copy running-config startup-config
NPIV is enabled globally on the switch on all ports and all VSANs.
3. To disable NPIV, enter the no npiv enable command.

4. To verify that NPIV is enabled on a specific port, enter the following command for port ext:
CiscoSANswitch# show flogi database interface ext
Port WWN of the VC-FC
Four servers are currently connected
In the PORT NAME column, more than one HBA appears. This indicates a successful implementation of VC-FC. On the enclosure, two server blade HBAs are installed and powered on, and either an HBA driver is loaded or the HBA BIOS utility is active. The third WWN on the port is the VC module (currently, all VC-FC modules use the 20:00 range).
5. If the VC module is the only device on the port, verify that:
- A VC profile is applied to at least one server blade.
- At least one server blade with a profile applied is powered on.
- At least one server blade with a profile applied has an HBA driver loaded.
- You are using the latest BIOS version on your HBA.
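Instead of searching the running configuration (step 1), the NPIV state can also be checked directly. A minimal sketch:
CiscoSANswitch# show npiv status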

Appendix E: Connecting VC FlexFabric to Cisco Nexus 50xx and 55xx series
Since VC 4.0, Virtual Connect provides the ability to pass FCoE (Dual Hop) to an external FCoE-capable network switch like the Nexus switches. VC 3.70 and later allow you to connect VC FlexFabric modules to Nexus 50xx and 55xx series using Native Fibre Channel.
For information about the FCoE integration between Virtual Connect and Nexus switches, see the Dual-Hop FCoE with HP Virtual Connect modules Cookbook in the Related Documentation section. For information about the Ethernet integration between Virtual Connect and Nexus switches, see HP Virtual Connect Flex-10 & FlexFabric Cisco Nexus 5000 & 2000 series Integration.
Support information
Visit the C-Series FCoE Switch Connectivity stream from the SPOCK website to get the latest support information for Virtual Connect.
Fibre Channel functions on Nexus
Support of native Fibre Channel functions on Nexus switches has the following options:
- You can configure all Unified ports as 8/4/2/1G Native Fibre Channel. The S (Storage Protocol Services) license is required to enable the use of Native FC operations (a CLI check is shown at the end of this section). Unified ports are identified by their orange color.
Figure 67: Nexus N55-M16UP Expansion module for Nexus 5548 with 16 Unified Ports
- The expansion module N55-M8P8FP for Nexus 55xx series provides eight ports as native Fibre Channel ports. The S license is also required to enable Native Fibre Channel operation. Fibre Channel ports are identified by their green color.
Figure 68: Nexus N55-M8P8FP Expansion module for Nexus 5548 with 8 FC Ports
Figure 69: Nexus N5K-M1008 Expansion module for Nexus 50xx with 8 FC Ports
Note: The Ethernet Nexus ports on the base chassis as well as those on the expansion modules cannot be used to support Fibre Channel functions. They are not colored like the 8 FC ports on the left-hand side of the N55-M8P8FP expansion module.
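To confirm that a suitable storage license is installed on the Nexus switch before configuring native Fibre Channel ports, the license usage can be listed from the NX-OS CLI. A minimal sketch (the package name reported depends on the switch model and the license purchased):
Nexus-switch# show license usage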

Figure 70: Nexus 5548 connected to FlexFabric modules using Native FC ports from the N55-M8P8FP Expansion module
Figure 71: Nexus 5548 connected to FlexFabric modules using Unified ports

Configuration of the VC SAN Fabric
The configuration of the VC Domain follows one of the scenarios described in this cookbook. After you have configured the VC Domain with two or more VC SAN Fabrics (Fabric-Attach), configure the Nexus switches.
Configuration of the Nexus switches
Enabling the FC protocol on Nexus ports
Before using FC capabilities, make sure that a correct Storage Protocol Services license is installed. If the license is not found, the software loads the FC plugins with a grace period of 180 days. If the license is found, all Fibre Channel and FCoE related CLI commands are available.
To enable Fibre Channel on the switch (including FCoE), enter the following commands from the CLI:
switch# configure terminal
switch(config)# feature fcoe
Note: Cisco SAN port channels, which are useful to bond multiple Fibre Channel interfaces together for both redundancy and increased aggregate throughput, are not supported with Virtual Connect.
Configuring Fibre Channel Ports
When using expansion modules with native FC ports, there is no specific configuration. Interfaces fc2/1, fc2/2, etc. are automatically presented.
Configuring Unified Ports
By default, the Unified ports are Ethernet ports, but you can change the port mode to native Fibre Channel on any port of the Cisco Nexus 55xx switch.
Note: Fibre Channel ports cannot be enabled randomly; they must be configured in a specific order, starting from the last Unified port of the module to the first one (1/32 > 1/31 > ... > 1/1). If you do not follow this order, the system displays the following error:
ERROR: FC range should end on last port of the module
To change the Unified port mode to Fibre Channel for ports 31 and 32, enter the following commands in the CLI:
switch# configure terminal
switch(config)# slot 1
switch(config-slot)# port 31-32 type fc
switch(config-slot)# copy running-config startup-config
switch(config-slot)# reload
When the switch comes back, two new Fibre Channel interfaces fc1/31 and fc1/32 are available. To enable the two Fibre Channel interfaces, enter:
Nexus-switch(config)# interface fc1/31
Nexus-switch(config-if)# no shutdown
Nexus-switch(config)# interface fc1/32
Nexus-switch(config-if)# no shutdown
Complete the same configuration on the second Nexus switch.
Physical connection
Physically connect the VC SAN uplinks to the Nexus Fibre Channel interfaces. Make sure to use Fibre Channel transceivers on the Nexus ports and on the FlexFabric uplinks.
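Before or after cabling the uplinks, you can confirm that the converted ports are now presented as FC interfaces. A minimal sketch, using the interface numbers from the example above:
Nexus-switch# show interface brief | include fc
Nexus-switch# show interface fc1/31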

Enabling NPIV on the Nexus switches
You must enable NPIV on the Nexus switches in order to connect to the VC FlexFabric modules.
Enabling NPIV using the GUI
1. From the Cisco Device Manager, click Admin, and then select Feature Control.
2. From the Feature Control screen, click npiv.
3. In the Action column, select enable, and then click Apply.
4. Click Close to return to the Device Manager screen.
Enabling NPIV using the CLI
1. To verify that NPIV is enabled, enter the following command:
Nexus-switch# show npiv status
NPIV is disabled
2. To enable NPIV, use the following commands from global config mode:
Nexus-switch# config terminal
Nexus-switch(config)# feature npiv
Nexus-switch(config)# exit
Nexus-switch# copy running-config startup-config
NPIV is enabled globally on the switch on all ports and all VSANs.

Connectivity checking
1. To check an FC interface status, enter:
Nexus-switch# show interface fc2/2
or
Nexus-switch# show interface fc1/31
The FC interface must be up and displaying the link speed.
2. To verify that NPIV is properly detected on a specific port, enter the following command for port fc2/2:
Nexus-switch# show flogi database interface fc2/2
In the PORT NAME column, more than one WWN appears. This indicates a successful SAN connection between VC and two installed and powered-on blade servers with either an FC driver loaded or with the Emulex BIOS utility running. The third WWN is the Virtual Connect module WWN.
Port WWN of the VC-FC
Two servers are currently connected
Note: If NPIV is not detected, a No flogi sessions found message is returned.
3. If the VC module is the only device on the FC port, verify that:
- A VC profile is applied to at least one server blade.
- At least one blade server with a profile assigned is powered on.
- At least one blade server with a profile assigned has a CNA/HBA driver loaded.
- You are using the latest BIOS version on your CNA/HBA.

Appendix F: Connectivity verification and testing
Uplink Port connectivity verification
Use the VCM GUI to verify that the VC Fabric uplink ports and the server blades are logged in to the fabric.
1. Click the Interconnect Bays link in the VCM left menu.
2. Click either the first VC-FC module (shown below as Bay 5), or the first FlexFabric module (shown in the second image as Bay 1).
- With VC-FC module:

- With FlexFabric module:
3. Make sure that all uplink ports are logged in to the fabric:
- With VC-FC module:

- With FlexFabric module:
Click Uplink Ports to see the uplink ports information.
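The same verification can be performed from the VCM CLI. A minimal sketch; the output lists each SAN fabric, its uplink ports, and their login status:
# Display the SAN fabrics and the status of the uplink ports
show fabric
show uplinkport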

Uplink Port connection issues
Several issues can lead to a VC Fabric uplink port NOT-LOGGED-IN status:
- Faulty cable, transceiver failure, or a wrong or incompatible SFP (the port is populated with an SFP module that does not match the usage assigned to the port, such as a Fibre Channel SFP connected to a port designated for Ethernet network traffic).
- The upstream switch does not support NPIV or NPIV is not enabled (see Appendix C, D, and E for more information about configuring NPIV on FC switches).
- An unsupported configuration is used, see Supported VC SAN fabric configuration:
  - Uplink ports that are members of the same Fabric-Attach SAN Fabric have been connected to different SAN switches belonging to a different SAN Fabric.
  - Uplink ports have been connected directly to a Storage Disk Array when using a Fabric-Attach VC Fabric.
  - Uplink ports that are members of a Direct-Attach fabric have been connected to a non-supported Storage Disk Array.
- The Direct-Attach fabric ports have been connected to un-configured 3PAR ports.
- You are connected to a Brocade SAN switch at 8Gb and you might need to configure FillWord, see Appendix C.
The Fibre Channel SFP transceiver brand is usually not the reason for an unlinked connectivity issue because many FC SFPs are allowed to interoperate with VC-FC modules and FlexFabric modules regardless of whether they are HP or third-party branded. In VC 3.30 and later, FlexFabric modules detect whether SFPs are from a supported list of HP part numbers as documented in the VC QuickSpecs. If the SFPs are not one of the HP part numbers, such as A7446B, AJ715A, AJ716A, or AJ718A, these SFPs are reported as uncertified. If a customer has an issue with these pluggable modules, HP support recommends using only officially supported models.
Some useful information that can help to troubleshoot connectivity issues is displayed in the SAN Fabrics or Interconnect Bays / Module VCM pages.
Note: Port status information appears on several screens throughout the GUI.

If a port status is unlinked and no connectivity exists, one of the following causes may appear:
- Not Linked/E-Key: Port is not linked due to an electronic keying error. For example, a mismatch in the type of technology exists between the server and module ports.
- Not Logged In: Port is not logged in to the remote device.
- Incompatible: Port is populated with an SFP module that does not match the usage assigned to the port, such as a Fibre Channel SFP connected to a port designated for Ethernet network traffic.
Note: A port that is not assigned to a specific function is assumed to be designated for Ethernet network traffic. An FCoE-capable port that has an SFP-FC module connected that is not yet assigned to a fabric or network is designated for a network, and the status is "Incompatible". When a fabric is created on that port, the status changes to "Linked".
- Unsupported: Port is populated with an SFP module that is not supported. For example:
  - An unsupported module is connected.
  - A 1Gb or 10Gb Ethernet module is connected to a port that does not support that particular speed.
  - An LRM module is connected to a port that is not LRM-capable.
  - An FC module is connected to a port that is not FC-capable.
- Administratively Disabled: Port has been disabled by an administrative action, such as setting the uplink port speed to disabled.
- Unpopulated: Port does not have an SFP module connected.
- Unrecognized: SFP module connected to the port cannot be identified.
- Failed Validation: SFP module connected to the port failed HPID validation.
- Smart Link: Smart Link feature is enabled.
- Not Linked/Loop Protected: VCM is intercepting BPDU packets on the server downlink ports and has disabled the server downlink port to prevent a loop condition.
- Linked/Uncertified: Port is linked to another port, but the connected SFP module is not certified by HP to be fully compatible. In this case, the SFP module might not work properly. Use certified modules to ensure server traffic.

Server Port connectivity verification
Boot the server before verifying that a blade server can successfully log in to the fabrics.
1. Start the blade server. The HBA logs in to the SAN fabric right after the HBA BIOS screen is shown during POST.
2. From the Interconnect Bays / Module screen, verify that the server is logged in to the fabric:
- With VC-FC module:
- With FlexFabric module, by going to the Server Ports tab:

Server Port connection issues
Several issues can lead to a server NOT-LOGGED-IN status:
- The VC Fabric uplink ports are also in a NOT-LOGGED-IN state (see the previous section).
- The server is turned off or is rebooting.
- The HBA/CNA firmware is out of date.
Connectivity verification from the upstream SAN switch
You can also verify connectivity from the upstream SAN switch.
1. Log in to the Brocade SAN switch GUI.
2. Click Name Server on the left side of the screen. The Name Server list appears (a CLI equivalent is shown after this procedure).
3. From the Name Server screen, locate the PWWN the server uses (such as 50:06:0B:00:00:C2:62:2C) and then identify the Brocade port used by the VC-FC uplink (port 1).
4. From a Command Prompt, open a Telnet session to the Brocade SAN switch and enter:
Brocade:admin> switchshow
The comment NPIV public on port 1 means that port 1 is detected as using NPIV.

5. To get more information on port 1, type:
Brocade:admin> portshow 1
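The Name Server content shown in the GUI (steps 2 and 3) can also be listed from the Brocade CLI, which is convenient when the GUI is not available. A minimal sketch:
Brocade:admin> nsshow
The output lists the PWWN of every device logged in to the switch, including the server WWPNs presented through the VC uplink.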

Testing the loss of uplink ports
This section provides details for testing the loss of uplink ports in a VC SAN Fabric, confirming the port failover in the same VC SAN Fabric, testing the loss of a complete VC SAN Fabric (all ports), and checking the working status of the MPIO driver.
Note: To find more information about MPIO, visit the MPIO manuals web page.
This test is made on a Boot from SAN server running Windows 2008 R2 with an MPIO driver installed for HP EVA. This server's VC profile has a redundant FCoE connection to reach the EVA. The WWPN of this server is 50:06:0B:00:C3:A:04 for port 1 and 50:06:0B:00:C3:A:06 for port 2. Each SAN Fabric has been configured with two uplink ports (X1 & X2) belonging to two different modules. The HP MPIO for EVA properly detects the four active paths to the C:\ drive.

The server is currently logged in to the Fabric through Port 1 of the upstream Brocade SAN switch. This Brocade port is physically connected to the VC FlexFabric module uplink port X1.
To verify the VC uplink port failover:
1. Simulate a failure of VC fabric uplink port X1 by disabling the upstream Brocade port (the Brocade commands used to disable and re-enable a port are shown after this procedure).
Note: When the link is recovered from the SAN switch, there is no failback to the original port. The host stays logged in to the current uplink port. The next host that logs on is balanced to another available port on the VC-FC module.
2. The VC Uplink Port X1 is now disconnected.

3. Back at the Brocade command line, the port 2 information shows the new login distribution; the server is now using Brocade Port 2 instead of Port 1. The server has been automatically reconnected (failed over) to another uplink port. The failover only takes a few seconds to complete.
4. The server MPIO Manager shows no degraded state because the VC Fabric_1 remains good with one port still connected.
5. Disconnect the remaining port of Fabric_1 by shutting down port 2 on the Brocade SAN switch.
Note: Before you turn off the port, make sure both server HBA ports are correctly presented to the Storage Array.
6. From the Brocade command line, disable port 2.

7. The new VC SAN Fabric status is now Failed because all port members have been disconnected.
8. On the server side, the Boot from SAN server is still up and running, with the server MPIO Manager showing a Degraded state because half of the active paths have been lost. The failover to the second HBA port took only a few seconds to complete and has not affected the Operating System.
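For reference, the upstream switch ports referred to in steps 1, 5, and 6 can be disabled and later re-enabled from the Brocade CLI. A minimal sketch for port 1:
Brocade:admin> portdisable 1
Brocade:admin> portenable 1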

Appendix G: Boot from SAN troubleshooting
Verification during POST
If you are having Boot from SAN issues with Virtual Connect, you can gather useful information during the server Power-On Self-Test (POST).
Boot from SAN not activated
During POST, the BIOS is not installed message sometimes means that Boot from SAN is not activated.
Figure 72: QLogic HBA showing Boot from SAN error during POST
No SAN volume
Boot from SAN is not activated

Figure 73: Emulex showing Boot from SAN deactivated during POST
Boot from SAN is not activated
Figure 74: Emulex OneConnect Utility (press CTRL+E) showing Boot from SAN deactivated during POST
Boot from SAN is not activated

Boot from SAN activated
Figure 75: QLogic HBA showing Boot from SAN activated and SAN volume detected during POST
Boot from SAN is activated
SAN volume detected by the adapter
Figure 76: Emulex showing Boot from SAN activated and SAN volume detected during POST with an EVA Storage array
SAN volume detected by the two adapters
Boot from SAN is activated

Figure 77: Emulex showing Boot from SAN activated and SAN volume detected during POST with a Direct-Attach 3PAR Storage array
SAN volume detected by the two adapters
Boot from SAN is activated

Boot from SAN misconfigured
Figure 78: A BIOS is not installed message can also be shown when Boot from SAN is activated but the Boot target WWPN is incorrectly configured
Troubleshooting
Main points to check when facing a Boot from SAN error:
- Make sure the storage presentation and zoning configuration are correct (see the example after this list).
- Under the VC Profile, check the Boot from SAN configuration; make sure the WWPN of the Storage target and the LUN number are correct.
- Make sure the VC Fabric uplink ports are logged in to the Fabric (see Appendix F).
- Make sure the FC/FCoE server ports are logged in to the Fabric (see Appendix F).
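On a Brocade fabric, the zoning configuration and the devices registered in the Name Server can be reviewed from the CLI. A minimal sketch:
Brocade:admin> zoneshow
Brocade:admin> nsshow
Check that the server WWPN and the storage target WWPN are present in the Name Server and appear together in a zone of the effective configuration.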

Appendix H: Fibre Channel Port Statistics
Fibre Channel port statistics are available in the Virtual Connect GUI and CLI to provide improved reporting and metrics for both FC server ports and FC uplink ports of the VC modules (i.e. FlexFabric modules and VC 8G/16G FC modules).
Note: FC statistics are not available on the HP Virtual Connect 8Gb and 4Gb 20-Port Fibre Channel Modules.
Note: Throughput statistics are only available for the Ethernet traffic at this time.
Note: FC statistics are also available through SNMP using the Fibre Alliance MIB (also known as FCMGMT-MIB, RFC 4044). You can download the Fibre Alliance MIB from the Fibre Alliance website. Once FCMGMT-MIB is imported into an SNMP tool, you can collect all necessary FC statistics from the Virtual Connect modules (an example of walking this MIB with a standard SNMP tool is shown after the FC uplink port statistics procedure below). With FlexFabric adapters, FlexHBA ports can be monitored using the Bridge MIB (RFC 4188) and the Interface MIB (RFC 2863). For more information about SNMP and how to enable SNMP, refer to the Virtual Connect User Guide.
For more information about the FC statistics that are available for the different VC modules and their detailed descriptions, refer to the Virtual Connect User Guide.
FC Uplink Port statistics
To access FC uplink port statistics from the VC GUI:
1. Click Interconnect Bays at the bottom of the left navigation menu.
2. Click on the bay number corresponding to the module on which you need statistics:

3. On the next page, select the Uplink Ports tab with a VC FlexFabric module:
4. Detailed information and statistics are available for each FC Uplink port:
With VC FlexFabric module:

With VC 8G/16G Fibre Channel module:
Note: FC statistics from the VC 16Gb 24-Port FC Module and the VC 8Gb 24-Port FC Module require VC 4.40 and later.

5. When you click on the Detailed Stats / Info link, the following FC statistics are displayed:
Note: For a detailed description of every counter, refer to the Virtual Connect User Guide.
6. The same statistics are also available from the VC CLI:
# Show the statistics of FC uplink port X1 of the VC FlexFabric module in bay 2
show statistics enc0:2:X1
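As mentioned in the SNMP note at the beginning of this appendix, the same counters can be collected by walking the Fibre Alliance MIB with a standard SNMP tool. A minimal sketch, assuming SNMPv2c is enabled on the VC module with a read community of public and a management address of 192.0.2.10 (both values are examples only):
snmpwalk -v2c -c public 192.0.2.10 1.3.6.1.3.94
The OID 1.3.6.1.3.94 is the root of the Fibre Alliance (FCMGMT) MIB; the walk returns the connUnit and port tables that contain the FC counters.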

FC Server Port statistics
Fibre Channel statistics are collected for all server ports on the VC 8G/16G 24-port FC Module.
Important: On Virtual Connect FlexFabric Modules, all server downlink ports are Enhanced Ethernet ports only, so native FC statistics are not provided because Fibre Channel over Ethernet (FCoE) is used to transport FC frames. The following screenshot shows the Ethernet detailed statistics of the first FlexHBA of the server in bay 3 (i.e. LOM:1-b of d3):
To access FC server port statistics from the VC GUI:
1. Click in the left navigation menu: Interconnect Bays > bay number.

2. Detailed information and statistics are available for each FC Server port:
Note: FC statistics from the VC 16Gb 24-Port FC Module and the VC 8Gb 24-Port FC Module require VC 4.40 and later.

3. When you click on the Detailed Stats / Info link, the following FC statistics are displayed:
Note: For a detailed description of every counter, refer to the Virtual Connect User Guide.
4. The same statistics are also available from the VC CLI:
# Show statistics of FC server port d3, physical function 2 (FCoE enabled port) of the VC FlexFabric module in bay 1
show statistics enc0:1:d3:v2

Acronyms and abbreviations
Term - Definition
BIOS - Basic Input/Output System
CLI - Command Line Interface
CNA - Converged Network Adapter
DCB - Data Center Bridging (new enhanced lossless Ethernet fabric)
GUI - Graphical User Interface
FC - Fibre Channel
FCoE - Fibre Channel over Ethernet
Flex-10 NIC Port* - A physical 10Gb port that is capable of being partitioned into 4 Flex NICs
FlexHBA** - Flexible Host Bus Adapter. Physical function 2 of a FlexFabric CNA can act as either an Ethernet NIC, FCoE connection, or iSCSI NIC with boot and iSCSI offload capabilities.
FOS - Fabric OS, Brocade Fibre Channel operating system
HBA - Host Bus Adapter
I/O - Input / Output
IOS - Cisco OS (originally Internetwork Operating System)
IP - Internet Protocol
iSCSI - Internet Small Computer System Interface
LACP - Link Aggregation Control Protocol (see IEEE 802.3ad)
LOM - LAN-on-Motherboard. Embedded network adapter on the system board
LUN - Logical Unit Number
MPIO - Multipath I/O
MZ or MEZZ; LOM - Mezzanine Slot; (LOM) LAN Motherboard/Systemboard NIC
NPIV - N_Port ID Virtualization
NXOS - Cisco OS for Nexus series
OS - Operating System
POST - Power-On Self-Test
RCFC - Remote Copy over Fibre Channel
RCIP - Remote Copy over IP
ROM - Read-only memory
SAN - Storage Area Network
SCSI - Small Computer System Interface
SFP - Small form-factor pluggable transceiver
S - Storage Protocol Services
SSH - Secure Shell
VC - Virtual Connect
VC-FC - Virtual Connect Fibre Channel module
VCM - Virtual Connect Manager
VLAN - Virtual Local-area network
VSAN - Virtual storage-area network
vNIC - Virtual NIC port. A software-based NIC used by Virtualization Managers
vNet - Virtual Connect Network used to connect server NICs to the external Network
WWN - World Wide Name
WWPN - World Wide Port Name
*This feature was added for Virtual Connect Flex-10
**This feature was added for Virtual Connect FlexFabric

Support and Other Resources
Contacting HP
Before you contact HP
Be sure to have the following information available before you contact HP:
- Technical support registration number (if applicable)
- Product serial number
- Product model name and number
- Product identification number
- Applicable error message
- Add-on boards or hardware
- Third-party hardware or software
- Operating system type and revision level
HP contact information
For help with HP Virtual Connect, see the HP Virtual Connect webpage.
For the name of the nearest HP authorized reseller, see the Contact HP worldwide (in English) webpage.
For HP technical support:
- In the United States, for contact options see the Contact HP United States webpage. To contact HP by phone, call 1-800-HP-INVENT. This service is available 24 hours a day, 7 days a week. For continuous quality improvement, calls may be recorded or monitored. If you have purchased a Care Pack (service upgrade), call the support number provided with your Care Pack. For more information about Care Packs, refer to the HP website.
- In other locations, see the Contact HP worldwide (in English) webpage.
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website. After registering, you will receive notification of product enhancements, new driver versions, firmware updates, and other product resources.

Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback. Include the document title and part number, version number, or the URL when submitting your feedback.
Related documentation
Virtual Connect documentation is available on the HP website:
- HP Virtual Connect Manager 4.45 Release Notes
- HP Virtual Connect for c-class BladeSystem Version 4.45 User Guide
- HP Virtual Connect Version 4.45 CLI User Guide
- HP Virtual Connect for c-class BladeSystem Setup and Installation Guide
- HP Virtual Connect FlexFabric Cookbook
- FCoE Cookbook for HP Virtual Connect
- iSCSI Cookbook for HP Virtual Connect
- HP Virtual Connect Multi-Enclosure Stacking Reference Guide
- Implementing HP Virtual Connect Direct-Attach Fibre Channel with HP 3PAR StoreServ Systems
- HP Boot from SAN Configuration Guide
Get connected: hp.com/go/getconnected
Current HP driver, support, and security alerts delivered directly to your desktop
Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Edition 3 - Updated January 2016


More information

QuickSpecs. HP Virtual Connect 4Gb Fibre Channel Module for c-class BladeSystem. Overview

QuickSpecs. HP Virtual Connect 4Gb Fibre Channel Module for c-class BladeSystem. Overview Overview Simplify and make your data center change-ready. The BladeSystem c-class is a form, fit and functional replacement for the current HP 4Gb Virtual Connect Fibre Channel Module with enhanced support

More information

HP ProLiant blade planning and deployment

HP ProLiant blade planning and deployment HP ProLiant blade planning and deployment Chris Powell CSG Products, Services, and Solutions Training Hewlett-Packard 2004 Hewlett-Packard Development Company, L.P. The information contained herein is

More information

QuickSpecs. At A Glance Performance: HP Virtual Connect 8Gb 20-Port Fibre Channel Module for c-class BladeSystem. Overview

QuickSpecs. At A Glance Performance: HP Virtual Connect 8Gb 20-Port Fibre Channel Module for c-class BladeSystem. Overview Overview Simplify and make your data center change-ready. The HP Virtual Connect 8Gb 20-port Fibre Channel Module for BladeSystem c- Class is the next-generation successor to the current HP 4Gb Virtual

More information

Exchange Server 2007 Performance Comparison of the Dell PowerEdge 2950 and HP Proliant DL385 G2 Servers

Exchange Server 2007 Performance Comparison of the Dell PowerEdge 2950 and HP Proliant DL385 G2 Servers Exchange Server 2007 Performance Comparison of the Dell PowerEdge 2950 and HP Proliant DL385 G2 Servers By Todd Muirhead Dell Enterprise Technology Center Dell Enterprise Technology Center dell.com/techcenter

More information

My First SAN solution guide

My First SAN solution guide My First SAN solution guide Digital information is a critical component of business today. It not only grows continuously in volume, but more than ever it must be available around the clock. Inability

More information

45 10.C. 1 The switch should have The switch should have G SFP+ Ports from Day1, populated with all

45 10.C. 1 The switch should have The switch should have G SFP+ Ports from Day1, populated with all Addendum / Corrigendum Dated 29/09/2017 Tender Ref No. - 236/387/DCCS/2010/IREDA/1 Dated: 22/09/2017 Name of Project - Supply Installation and Support Services of Data centers S. No. Document Reference

More information

Exam Name: Midrange Storage Technical Support V2

Exam Name: Midrange Storage Technical Support V2 Vendor: IBM Exam Code: 000-118 Exam Name: Midrange Storage Technical Support V2 Version: 12.39 QUESTION 1 A customer has an IBM System Storage DS5000 and needs to add more disk drives to the unit. There

More information

HP BladeSystem c-class enclosures

HP BladeSystem c-class enclosures Family data sheet HP BladeSystem c-class enclosures Tackle your infrastructure s cost, time, and energy issues HP BladeSystem c3000 Platinum Enclosure (rack version) HP BladeSystem c7000 Platinum Enclosure

More information

SAN extension and bridging

SAN extension and bridging SAN extension and bridging SAN extension and bridging are presented in these chapters: SAN extension on page 281 iscsi storage on page 348 280 SAN extension and bridging SAN extension SAN extension enables

More information

Power Systems SAN Multipath Configuration Using NPIV v1.2

Power Systems SAN Multipath Configuration Using NPIV v1.2 v1.2 Bejoy C Alias IBM India Software Lab Revision History Date of this revision: 27-Jan-2011 Date of next revision : TBD Revision Number Revision Date Summary of Changes Changes marked V1.0 23-Sep-2010

More information

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES I. Executive Summary Superior Court of California, County of Orange (Court) is in the process of conducting a large enterprise hardware refresh. This

More information

Cisco Prime Data Center Network Manager 6.2

Cisco Prime Data Center Network Manager 6.2 Product Bulletin Cisco Prime Data Center Network Manager 6.2 PB639739 Product Overview Modern data centers are becoming increasingly massive and complex. Proliferation of new technologies such as virtualization

More information

HP Converged Network Switches and Adapters. HP StorageWorks 2408 Converged Network Switch

HP Converged Network Switches and Adapters. HP StorageWorks 2408 Converged Network Switch HP Converged Network Switches and Adapters Family Data sheet Realise the advantages of Converged Infrastructure with HP Converged Network Switches and Adapters Data centres are increasingly being filled

More information

Fabric infrastructure rules

Fabric infrastructure rules Fabric infrastructure rules Fabric infrastructure rules are presented in these chapters: HPE FlexFabric switches and storage support on page 103 B-series switches and fabric rules on page 105 C-series

More information

access addresses/addressing advantages agents allocation analysis

access addresses/addressing advantages agents allocation analysis INDEX A access control of multipath port fanout, LUN issues, 122 of SAN devices, 154 virtualization server reliance on, 173 DAS characteristics (table), 19 conversion to SAN fabric storage access, 105

More information

DCNX5K: Configuring Cisco Nexus 5000 Switches

DCNX5K: Configuring Cisco Nexus 5000 Switches Course Outline Module 1: Cisco Nexus 5000 Series Switch Product Overview Lesson 1: Introducing the Cisco Nexus 5000 Series Switches Topic 1: Cisco Nexus 5000 Series Switch Product Overview Topic 2: Cisco

More information

Cisco Actualtests Questions & Answers

Cisco Actualtests Questions & Answers Cisco Actualtests 642-999 Questions & Answers Number: 642-999 Passing Score: 800 Time Limit: 90 min File Version: 22.8 http://www.gratisexam.com/ Sections 1. Questions 2. Drag & Drop 3. Hot Spot Cisco

More information

UCS Networking 201 Deep Dive

UCS Networking 201 Deep Dive UCS Networking 20 Deep Dive BRKCOM-2003 Brad Hedlund bhedlund@cisco.com Manish Tandon mtandon@cisco.com Agenda Overview / System Architecture Physical Architecture Logical Architecture Switching Modes

More information

Configuring SAN Port Channel

Configuring SAN Port Channel Configuring SAN Port Channel This chapter contains the following sections: Configuring SAN Port Channels, page 1 Configuring SAN Port Channels SAN port channels refer to the aggregation of multiple physical

More information

Virtualizing SAN Connectivity with VMware Infrastructure 3 and Brocade Data Center Fabric Services

Virtualizing SAN Connectivity with VMware Infrastructure 3 and Brocade Data Center Fabric Services Virtualizing SAN Connectivity with VMware Infrastructure 3 and Brocade Data Center Fabric Services How the VMware Infrastructure platform can be deployed in a Fibre Channel-based shared storage environment

More information

Data ONTAP 8.1 High Availability and MetroCluster Configuration Guide For 7-Mode

Data ONTAP 8.1 High Availability and MetroCluster Configuration Guide For 7-Mode IBM System Storage N series Data ONTAP 8.1 High Availability and MetroCluster Configuration Guide For 7-Mode This Release Candidate publication is provided in conjunction with a Data ONTAP 8.1.2 RC code

More information

HP Supporting the HP ProLiant Storage Server Product Family.

HP Supporting the HP ProLiant Storage Server Product Family. HP HP0-698 Supporting the HP ProLiant Storage Server Product Family https://killexams.com/pass4sure/exam-detail/hp0-698 QUESTION: 1 What does Volume Shadow Copy provide?. A. backup to disks B. LUN duplication

More information

CCIE Data Center Lab Exam Version 1.0

CCIE Data Center Lab Exam Version 1.0 CCIE Data Center Lab Exam Version 1.0 CCIE Data Center Sky rocketing Popularity should not come as any surprise As per Cisco Global Cloud index, published in 2012, gave prediction that by 2016 nearly two

More information

BL8x0c i2: Overview, Setup, Troubleshooting, and Various Methods to Install OpenVMS

BL8x0c i2: Overview, Setup, Troubleshooting, and Various Methods to Install OpenVMS OpenVMS Technical Journal V15 BL8x0c i2: Overview, Setup, Troubleshooting, and Various Methods to Install OpenVMS Aditya B S and Srinivas Pinnika BL8x0c i2: Overview, Setup, Troubleshooting, and Various

More information