HPE Integrity NonStop i BladeSystem Planning Guide


HPE Integrity NonStop i BladeSystem Planning Guide
Part Number:
Published: May 2017
Edition: J06.13 and subsequent J-series RVUs

Copyright 2013, 2017 Hewlett Packard Enterprise Development LP

The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise website.

Acknowledgments

Intel, Itanium, Pentium, Intel Inside, and the Intel Inside logo are trademarks of Intel Corporation in the United States and other countries. Microsoft and Windows are trademarks of the Microsoft group of companies. Java and Oracle are registered trademarks of Oracle and/or its affiliates. Intel, Pentium, and Celeron are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Motif, OSF/1, UNIX, X/Open, and the "X" device are registered trademarks, and IT DialTone and The Open Group are trademarks of The Open Group in the U.S. and other countries. Open Software Foundation, OSF, the OSF logo, OSF/1, OSF/Motif, and Motif are trademarks of the Open Software Foundation, Inc.

OSF MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THE OSF MATERIAL PROVIDED HEREIN, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. OSF shall not be liable for errors contained herein or for incidental consequential damages in connection with the furnishing, performance, or use of this material.

1990, 1991, 1992, 1993 Open Software Foundation, Inc. The OSF documentation and the OSF software to which it relates are derived in part from materials supplied by the following: 1987, 1988, 1989 Carnegie-Mellon University. 1989, 1990, 1991 Digital Equipment Corporation. 1985, 1988, 1989, 1990 Encore Computer Corporation. 1988 Free Software Foundation, Inc. 1987, 1988, 1989, 1990, 1991 Hewlett-Packard Company. 1985, 1987, 1988, 1989, 1990, 1991, 1992 International Business Machines Corporation. 1988, 1989 Massachusetts Institute of Technology. 1988, 1989, 1990 Mentat Inc. 1988 Microsoft Corporation. 1987, 1988, 1989, 1990, 1991, 1992 SecureWare, Inc. 1990, 1991 Siemens Nixdorf Informationssysteme AG. 1986, 1989, 1996, 1997 Sun Microsystems, Inc. 1989, 1990, 1991 Transarc Corporation.

OSF software and documentation are based in part on the Fourth Berkeley Software Distribution under license from The Regents of the University of California. OSF acknowledges the following individuals and institutions for their role in its development: Kenneth C.R.C. Arnold, Gregory S. Couch, Conrad C. Huang, Ed James, Symmetric Computer Systems, Robert Elz. 1980, 1981, 1982, 1983, 1985, 1986, 1987, 1988, 1989 Regents of the University of California.

Contents

About This Document
  Supported Release Version Updates (RVUs)
  New and Changed Information
  Publishing History
Part I: HPE Integrity NonStop i BladeSystems AC Power
NonStop i BladeSystems Overview
  Migrating NonStop i BladeSystems
  Core Licensing
  NonStop i BladeSystem Standard and Optional Hardware
    c7000 Enclosure
    NonStop i Server Blades
    CLuster I/O Modules (CLIMs)
      IP CLIM and Telco CLIM
      Storage CLIM
    SAS Disk Enclosure
    IOAM Enclosure
    G6SE Enclosure
    Fibre Channel Disk Module (FCDM)
    Maintenance Switch
    System Console
    Enterprise Storage System (Optional)
    NonStop S-series I/O Enclosures (Optional)
  NonStop i BladeSystems Platform Configurations
Managing and Locating NonStop i BladeSystem Components
  Management Tools for NonStop i BladeSystems
    OSM Package
    Onboard Administrator (OA) and Integrated Lights Out (iLO)
    Cluster I/O Protocols (CIP) Subsystem
    Subsystem Control Facility (SCF) Subsystem
  Technical Document for NonStop i BladeSystems
  Power Regulator for NonStop i BladeSystems
  Changing Customer Passwords
  Default Naming Conventions for NonStop i BladeSystem Resources
  Possible Values of Disk and Tape LUNs
  NonStop i BladeSystem Component Location and Identification
    Terminology
    Rack and Offset Physical Location
    ServerNet Switch Group-Module-Slot Numbering
    NonStop i Server Blade Group-Module-Slot Numbering
    CLIM Enclosure Group-Module-Slot-Port-Fiber Numbering
    G6SE Enclosure Group-Module-Slot-Port Numbering
    IOAM Enclosure Group-Module-Slot Numbering
    Fibre Channel Disk Module Group-Module-Slot Numbering

Site Preparation Guidelines NB50000c, NB54000c, and NB56000c
  Rack Power and I/O Cable Entry
  Emergency Power-Off Switches
    EPO Requirement for NonStop BladeSystems
    EPO Requirement for R5000 UPS
    EPO Requirement for R12000/3 UPS
  Electrical Power and Grounding Quality
    Power Quality
    Grounding Systems
    Power Consumption
  UPS and ERM (Optional)
    UPS and ERM Checklist
  Cooling and Humidity Control
  Weight
  Flooring
  Dust and Pollution Control
    Zinc Particulates
  Space for Receiving and Unpacking a NonStop i BladeSystem
  Operational Space for a NonStop i BladeSystem
System Installation Specifications for NonStop i BladeSystems
  Racks
  Power Distribution for NonStop i BladeSystems in Intelligent Racks
    Power Distribution Unit (PDU) Types for an Intelligent Rack
    Four Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)
    Four Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL)
    Four Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL)
    Two Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)
    Two Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL)
    Two Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL)
    Four Modular PDUs Without UPS (NA and JPN, Single-Phase and Three-Phase)
    Four Modular PDUs With Single-Phase UPS (NA/JPN and INTL)
    Four Modular PDUs With Three-Phase UPS (NA/JPN and INTL)
    Two Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)
    Two Modular PDU Connections With Single-Phase UPS (NA/JPN and INTL)
    Two Modular PDU Connections With Three-Phase UPS (NA/JPN and INTL)
    AC Power Feeds in an Intelligent Rack
    AC Input Power for Intelligent Racks
    Enclosure AC Input
    Enclosure Power Loads
  Dimensions and Weights
    Plan View of the 42U Racks
    Service Clearances for Racks
    Unit Sizes
    42U G2 Rack Physical Specifications
    42U Intelligent Rack Physical Specifications
    Enclosure Dimensions
    Rack and Enclosure Weights With Worksheet
    Rack Stability
  Environmental Specifications
    Heat Dissipation Specifications and Worksheet NB50000c, NB54000c, and NB56000c
    Operating Temperature, Humidity, and Altitude

    Nonoperating Temperature, Humidity, and Altitude
    Cooling Airflow Direction
    Blanking Panels
    Typical Acoustic Noise Emissions
    Tested Electrostatic Immunity
  Calculating Specifications for Enclosure Combinations NB50000c
  Calculating Specifications for Enclosure Combinations NB54000c
  Calculating Specifications for Enclosure Combinations NB56000c
System Configuration Guidelines NB50000c, NB54000c, and NB56000c
  Internal ServerNet Interconnect Cabling
  Dedicated Service LAN Cables
  ServerNet Fabric and Supported Connections
    ServerNet Cluster Connections
    BladeCluster Solution Connections
    ServerNet Fabric Cross-Link Connections
    Interconnections Between c7000 Enclosures
    I/O Connections (Standard and High I/O ServerNet Switch Configurations)
    Connections to IOAM Enclosures
    Connections to G6SE Enclosures
    Connections to CLIMs
    Connections to NonStop S-series I/O Enclosures
  Factory-Default Disk Volume Locations for SAS Disk Devices
Part II: NonStop i BladeSystems Carrier-Grade (CG)
NonStop i BladeSystems Carrier Grade Overview
  NEBS Required Statements
  NonStop i BladeSystem NB50000c-cg, NB54000c-cg, and NB56000c-cg Hardware
    Seismic Rack
    c7000 CG Enclosure
    IP CLIM CG and Telco CLIM CG
    Storage CLIM CG
    CG SAS Disk Enclosures
    Enterprise Storage System (ESS)
    SE DAT Tape Unit
    HPE NonStop 240A Breaker Panel
    Breaker Panel Specifications for NonStop i BladeSystem CG
    HPE NonStop 80A Fuse Panel CG
    Fuse Panel Power Specifications for NonStop i BladeSystems CG
    HPE NonStop System Alarm Panel
    CG Maintenance Switch
    System Console
    NonStop S-series CO I/O Enclosures (Optional)
System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg
  DC Power Distribution NB50000c-cg, NB54000c-cg, and NB56000c-cg
  Enclosure Power Loads NB50000c-cg, NB54000c-cg, and NB56000c-cg
  Dimensions and Weights NB50000c-cg, NB54000c-cg, and NB56000c-cg
    Seismic Rack Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg
    Floor Space Requirements
    Unit Sizes NB50000c-cg, NB54000c-cg, and NB56000c-cg
    Enclosure Dimensions NB50000c-cg, NB54000c-cg, and NB56000c-cg

  Rack and Enclosure Weights Worksheet NB50000c-cg, NB54000c-cg, and NB56000c-cg
  Environmental Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg
    Heat Dissipation Specifications and Worksheets Carrier Grade
    Operating Temperature, Humidity, and Altitude
  Power Load Worksheet NB50000c-cg, NB54000c-cg, and NB56000c-cg
  Enclosures in Seismic Racks
  Sample Configuration NB50000c-cg, NB54000c-cg, and NB56000c-cg
Support and other resources
  Accessing Hewlett Packard Enterprise Support
  Accessing updates
  Websites
  Customer self repair
  Remote support
  Documentation feedback
A Cables for BladeSystems
  Cable Types and Connectors
B Default Startup Characteristics
C Site Power Cables Carrier Grade
  Required Documentation
  Requirements for Site Power or Ground Cables
D Power Configurations for NonStop BladeSystems in G2 Racks
  NonStop i BladeSystem Power Distribution G2 Rack
  NonStop i BladeSystem Single-Phase Power Distribution G2 Rack
    Single-Phase Power Setup, Monitored PDUs G2 Rack
    International (INTL) Monitored Single-Phase Power Configuration in a G2 Rack
    Single-Phase Power Setup in a G2 Rack, Modular PDU
  NonStop i BladeSystem Three-Phase Power Distribution in a G2 Rack
    Three-Phase Power Setup in a G2 Rack, Monitored PDUs
    Three-Phase Power Setup in a G2 Rack, Modular PDUs
  Enclosure AC Input G2 Rack
  Phase Load Balancing
E Earlier CLIM Models (G2, G5, and G6 CLIMs)
F Legacy Hardware (IOAM, FCDM, S-Series I/O Enclosure)
  NonStop S-series I/O Enclosures (Optional)
  NonStop S-series CO I/O Enclosures (Optional)
  Fibre Channel Devices
    Fibre Channel Disk Module Group-Module-Slot Numbering
    IOAM Enclosure Group-Module-Slot Numbering
    Factory-Default Disk Volume Locations for FCDMs
    Configurations for Fibre Channel Devices
    Configuration Restrictions for Fibre Channel Devices
    Recommendations for Fibre Channel Device Configuration
  Gigabit Ethernet 4-Port ServerNet Adapter (G4SA) Ethernet Ports
  Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module
    Two FCSAs, Two FCDMs, One IOAM Enclosure
    Four FCSAs, Four FCDMs, One IOAM Enclosure
    Two FCSAs, Two FCDMs, Two IOAM Enclosures
    Four FCSAs, Four FCDMs, Two IOAM Enclosures
    Daisy-Chain Configurations (FCDMs)
    Four FCSAs, Three FCDMs, One IOAM Enclosure

G UPS and Data Center Power Configurations
  Supported UPS Configurations
    NonStop i BladeSystem With a Fault-Tolerant Data Center
    NonStop i BladeSystem With a Rack-Mounted UPS
    SAS Disk Enclosures With a Rack-Mounted UPS
  Non-Supported UPS Configurations
    NonStop i BladeSystem With a Data Center UPS, Single Power Rail
    NonStop i BladeSystem With Data Center UPS, Both Power Rails
    NonStop i BladeSystem With Rack-Mounted UPS and Data Center UPS in Parallel
    NonStop i BladeSystem With Two Rack-Mounted UPS in Parallel
    NonStop i BladeSystem with Cascading Rack-Mounted UPS and Data Center UPS
H Warranty and regulatory information
  Warranty information
  Regulatory information
    Belarus Kazakhstan Russia marking
    Turkey RoHS material content declaration
    Ukraine RoHS material content declaration
Index

Figures

Example of NonStop i BladeSystems (Front Views)
Example of a NonStop i BladeSystem with 16 Processors (Front View), Flexible Processor Bay Configuration
Four iPDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)
Four iPDUs With Single-Phase UPS (NA/JPN and INTL)
Four iPDUs With Three-Phase UPS (NA/JPN and INTL)
Two Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)
Two Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL)
Two Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL)
Four Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)
Four Modular PDUs With Single-Phase UPS (NA/JPN and INTL)
Four Modular PDUs With Three-Phase UPS (NA/JPN and INTL)
Two Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)
Two Modular PDUs With a Single-Phase UPS (NA/JPN and INTL)
Two Modular PDUs With a Three-Phase UPS (NA/JPN and INTL)
Example of Bottom AC Power Feed in an Intelligent Rack (Without UPS)
Example of Top AC Power Feed in an Intelligent Rack (Without UPS)
Example of Top AC Power Feed in an Intelligent Rack (With Single-Phase UPS)
Example of Bottom AC Power Feed in an Intelligent Rack (With Single-Phase UPS)
Example of Top AC Power Feed in an Intelligent Rack (With Three-Phase UPS)
Example of Bottom AC Power Feed in an Intelligent Rack (With Three-Phase UPS)
ServerNet Switch Standard I/O Supported Connections
ServerNet Switch High I/O Supported Connections
Example CG SAS Disk Enclosure, Front View
Example CG SAS Disk Enclosure, Back View
DC Power Distribution for Sample Single Rack System
Sample Power Distribution for Seismic Rack and Carrier Grade Rack
North America/Japan Monitored Single-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS)
North America/Japan Monitored Single-Phase Power Setup in a G2 Rack (With Rack-Mounted R5000 UPS)
Bottom AC Power Feed, Single-Phase NA/JPN Monitored PDUs
Top AC Power Feed in a G2 Rack NA/JPN Single-Phase Monitored PDUs
International Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS
International Monitored Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS
Bottom AC Power Feed in a G2 Rack, Single-Phase International Monitored PDUs
Top AC Power Feed in a G2 Rack, Single-Phase International Monitored PDUs
North America/Japan Modular Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS
North America/Japan Modular Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS
Bottom AC Power Feed in a G2 Rack, Single-Phase NA/JPN Modular PDUs
Top AC Power Feed in a G2 Rack NA/JPN Single-Phase Modular PDUs
North America/Japan Monitored 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS
North America/Japan Monitored 3-Phase Power Setup Without Rack-Mounted UPS
Bottom AC Power Feed in a G2 Rack, Three-Phase (Monitored NA/JPN PDUs)
Top AC Power Feed in a G2 Rack, Three-Phase (Monitored NA/JPN PDUs)
International Monitored 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS
International Monitored 3-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS

North America/Japan Modular 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS
North America/Japan Modular 3-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS
Bottom AC Power Feed in a G2 Rack, Three-Phase (NA/JPN Modular PDUs)
Top AC Power Feed in a G2 Rack, Three-Phase (NA/JPN Modular PDU)
International Modular 3-Phase Power Setup in a G2 Rack (With Rack-Mounted UPS)
International Modular 3-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS)
Bottom AC Power Feed in a G2 Rack, Three-Phase (INTL Modular PDUs)
Top AC Power Feed in a G2 Rack, Three-Phase (INTL Modular PDU)
NonStop i BladeSystem With a Fault-Tolerant Data Center
NonStop i BladeSystem With a Rack-Mounted UPS
SAS Disk Enclosures With a Rack-Mounted UPS
NonStop i BladeSystem With a Data Center UPS, Single Power Rail
NonStop i BladeSystem With Data Center UPS, Both Power Rails
NonStop i BladeSystem With Rack-Mounted UPS and Data Center UPS in Parallel
NonStop i BladeSystem With Two Rack-Mounted UPS in Parallel
NonStop i BladeSystem With Cascading UPS

Tables

Characteristics of the NB50000c
Characteristics of the NB54000c
Characteristics of an HPE Integrity NonStop i BladeSystem NB56000c
North America/Japan Single-Phase Power Specifications
North America/Japan Three-Phase Power Specifications
International Single-Phase Power Specifications
International Three-Phase Power Specifications
Example of Rack Load Calculations NB50000c
Example of Rack Load Calculations NB54000c
Example of Rack Load Calculations NB56000c
Characteristics of an NB50000c-cg
Characteristics of an NB54000c-cg
Characteristics of an NB56000c-cg
Rack Weight Worksheet
Heat Dissipation Worksheet for Seismic Rack NB50000c-cg
Heat Dissipation Worksheet for Seismic Rack NB54000c-cg
Heat Dissipation Worksheet for Seismic Rack NB56000c-cg
Power Load Worksheet for Seismic Rack
Completed Weight Worksheet for Sample System Rack

About This Document

This guide provides an overview of HPE Integrity NonStop i BladeSystems and the specifications needed to plan a system installation. It is intended for personnel who have completed Hewlett Packard Enterprise training on NonStop i BladeSystem support.

Supported Release Version Updates (RVUs)

This publication supports J06.13 and all subsequent J-series RVUs until otherwise indicated in a replacement publication.

New and Changed Information

The changes in this and earlier editions, from most recent to oldest, are:

Added the maximum number of SSDs allowed (20 SSDs per Storage CLIM pair) to the NonStop i BladeSystems Overview for NB50000c, NB54000c, and NB56000c.

Updated the title of this guide. Updated for the new Gen9 CLIMs. Removed product IDs. Moved G2, G5, and G6 CLIMs and the MSA70 SAS disk enclosure to Earlier CLIM Models (G2, G5, and G6 CLIMs) (page 166). Moved some hardware to a new appendix: Legacy Hardware (IOAM, FCDM, S-Series I/O Enclosure) (page 173).

Updated Hewlett Packard Enterprise references.

Updated for Solid State Drives (SSDs).

Updated to support the NonStop i BladeSystem NB56000c and NonStop i BladeSystem NB56000c-cg. Updated to support the Gen8 CLIM CG. Updated to support the G6SE enclosure.

Added support for the new Gen8 IP, Telco, and Storage CLIMs; see CLuster I/O Modules (CLIMs) (page 19). Updated PDU configurations to show the output module on the three-phase UPS. Added information regarding the importance of changing the ride-through time for a Hewlett Packard Enterprise-supported UPS from the manufacturing default setting to an appropriate value for your system during installation of a NonStop i BladeSystem or UPS.

Publishing History

Part Number        Product Version        Publication Date
                   N.A.                   February 2013
                   N.A.                   August 2013
                   N.A.                   May 2014
                   N.A.                   November 2015
                   N.A.                   May 2016
                   N.A.                   May 2017

Part I HPE Integrity NonStop i BladeSystems AC Power

CAUTION: Information provided here is for reference and planning. Only authorized service providers with specialized training can install or service the NonStop i BladeSystem.

NonStop i BladeSystems Overview

HPE Integrity NonStop i BladeSystems combine the NonStop operating system and the BladeSystem c-Class architecture in a single footprint and use ServerNet as the system interconnect. The J06.04 RVU introduces the HPE Integrity NonStop i BladeSystem NB50000c. Characteristics of the NB50000c, NB54000c, and NB56000c are described in the following tables. For a hardware overview and illustrations, refer to NonStop i BladeSystem Standard and Optional Hardware (page 17) and Figure 1 (page 16). For information about carrier-grade NonStop i BladeSystems, refer to NonStop i BladeSystems Carrier-Grade (CG) (page 86).

Table 1 Characteristics of the NB50000c

Processor/Processor model: Intel Itanium/NSE-M
  NOTE: NB50000c, NB54000c, and NB56000c server blades cannot coexist in the same system.
Supported RVU: J06.04 and later RVUs
Rack and chassis: 42U rack, c7000 enclosure (1 c7000 for 2-8 processors; 2 c7000s for 10-16 processors)
Minimum/maximum memory: 8 GB to 48 GB main memory per logical processor
Minimum/maximum processors: 2 to 16
System configurations: Two supported configurations that support 2, 4, 6, 8, 10, 12, 14, or 16 processors
Maximum CLIMs in a 16 CPU system: 48 CLuster I/O Modules (CLIMs): Storage, IP, Telco, and IB
Minimum CLIMs for fault tolerance: 0 CLIMs (if there are IOAM enclosures); 2 Storage CLIMs (if there are no IOAM or G6SE enclosures); 2 Networking CLIMs (IP, Telco, and IB) if there are no IOAM enclosures
Maximum SAS disk enclosures per Storage CLIM pair: A Storage CLIM pair supports a maximum of 4 SAS disk enclosures. This maximum applies to all Storage CLIM types.
Maximum HDDs: 100 Hard Disk Drives (HDDs) per Storage CLIM pair
Maximum SSDs: 20 Solid State Drives (SSDs) per Storage CLIM pair (supported on J06.13 and later)
Maximum FCDMs through IOAM enclosure: 4 Fibre Channel Disk Modules (FCDMs) daisy-chained, with 14 disk drives per FCDM
Maximum IOAM enclosures: 4 IOAMs for 2-8 processors; 6 IOAMs for 10-16 processors
  NOTE: When CLIMs are also included in the configuration, the maximum number of IOAMs might be smaller. Check with your Hewlett Packard Enterprise representative to determine your system's maximum for IOAMs.
ESS support through Storage CLIMs or IOAMs: Supported
NonStop ServerNet Clusters and BladeCluster Solution: Supported
Connection to NonStop S-series I/O enclosures: Supported for token-ring ServerNet connectivity or System Signaling Seven (SS7) connectivity for up to 4 S-series I/O enclosures

The J06.11 RVU introduces the HPE Integrity NonStop i BladeSystem NB54000c.

Table 2 Characteristics of the NB54000c

Processor/Processor model: Intel Itanium/NSE-AB
  NOTE: NB50000c, NB54000c, and NB56000c server blades cannot coexist in the same system.
Supported RVU: J06.11 and later RVUs
CLIM DVD (minimum DVD version required for RVU): Refer to the CLuster I/O Module (CLIM) Software Compatibility Guide for the supported version. This file is preinstalled on new NB54000c and NB54000c-cg systems.
Rack and chassis: 42U rack and c7000 enclosure (one to two enclosures depending on the number of server blades or platform configuration)
Minimum/maximum memory: 16 GB to 64 GB main memory per logical processor (64 GB supported on J06.13 and later RVUs)
Minimum/maximum processors: 2 to 16
Supported platform configurations: 3 configuration options that support 2, 4, 6, 8, 10, 12, 14, or 16 processors, with processor numbering either sequential or in even/odd format. Refer to NonStop i BladeSystems Platform Configurations (page 26).
Maximum CLIMs in a 16 CPU system: 48 CLuster I/O Modules (CLIMs): Storage, IP, Telco, and IB
Minimum CLIMs for fault tolerance: 0 CLIMs (if there are IOAM enclosures); 2 Storage CLIMs (if there are no IOAM enclosures); 2 Networking CLIMs (IP, Telco, and IB) if there are no IOAM enclosures
Maximum SAS disk enclosures per Storage CLIM pair: A Storage CLIM pair supports a maximum of 4 SAS disk enclosures. This maximum applies to all Storage CLIM types.
Maximum HDDs: 100 Hard Disk Drives (HDDs) per Storage CLIM pair
Maximum SSDs: 20 Solid State Drives (SSDs) per Storage CLIM pair (supported on J06.13 and later)
Maximum FCDMs through IOAM enclosure: 4 Fibre Channel disk modules (FCDMs) daisy-chained, with 14 disk drives per FCDM
Maximum IOAM enclosures: 4 IOAMs for 2-8 processors; 6 IOAMs for 10-16 processors
  NOTE: When CLIMs are also included in the configuration, the maximum number of IOAMs might be smaller. Check with your Hewlett Packard Enterprise representative to determine your system's maximum for IOAMs.
ESS support through Storage CLIMs or IOAMs: Supported
NonStop ServerNet Clusters and BladeCluster Solution: Supported
Connection to NonStop S-series I/O enclosures: Supported for token-ring ServerNet connectivity or System Signaling Seven (SS7) connectivity for up to 4 S-series I/O enclosures. Refer to NonStop i BladeSystems Platform Configurations (page 26).
Core licensing file: This file is required as of J06.13. Refer to Core Licensing (page 16).
Power Regulator: Supported as of J06.14 or later RVUs. Refer to Power Regulator for NonStop i BladeSystems (page 29).

The J06.16 RVU introduces the NB56000c.

Table 3 Characteristics of an HPE Integrity NonStop i BladeSystem NB56000c

Processor/Processor model: Intel Itanium/NSE-AF
  NOTE: Mixed systems are not supported except for the duration of an online migration from NB54000c to NB56000c or NB54000c-cg to NB56000c-cg.
Supported RVU: J06.16 and later RVUs
CLIM DVD (minimum DVD version required for RVU): Refer to the CLuster I/O Module (CLIM) Software Compatibility Guide for the supported version.
  NOTE: This file is preinstalled on new NB56000c/NB56000c-cg systems.
Rack and chassis: 42U rack and c7000 enclosure (one to two enclosures depending on the number of server blades or platform configuration)
Minimum/maximum memory: The NB56000c supports 16 GB, 32 GB, 48 GB, 64 GB, and 96 GB memory configurations
Minimum/maximum processors: 2 to 16
Supported platform configurations: 3 configuration options that support 2, 4, 6, 8, 10, 12, 14, or 16 processors, with processor numbering either sequential or in even/odd format. Refer to NonStop i BladeSystems Platform Configurations (page 26).
Maximum CLIMs in a 16 CPU system: 48 CLuster I/O Modules (CLIMs): Storage, IP, Telco, and IB
Minimum CLIMs for fault tolerance: 0 CLIMs (if there are IOAM enclosures); 2 Storage CLIMs (if there are no IOAM enclosures); 2 Networking CLIMs (IP, Telco, and IB) if there are no IOAM or G6SE enclosures
Maximum SAS disk enclosures per Storage CLIM pair: A Storage CLIM pair supports a maximum of 4 SAS disk enclosures. This maximum applies to all Storage CLIM types.
Maximum HDDs: 100 Hard Disk Drives (HDDs) per Storage CLIM pair
Maximum SSDs: 20 Solid State Drives (SSDs) per Storage CLIM pair (supported on J06.16 and later)
Maximum FCDMs through IOAM enclosure: 4 Fibre Channel disk modules (FCDMs) daisy-chained, with 14 disk drives per FCDM
Maximum IOAM enclosures: 4 IOAMs for 2-8 processors; 6 IOAMs for 10-16 processors
  NOTE: When CLIMs are also included in the configuration, the maximum number of IOAMs might be smaller. Check with your Hewlett Packard Enterprise representative to determine your system's maximum for IOAMs.
ESS support through Storage CLIMs or IOAMs: Supported
Connection to NonStop ServerNet Clusters: Supported
Connection to BladeCluster Solution: Supported
Connection to NonStop S-series I/O enclosures: Supported for token-ring ServerNet connectivity or System Signaling Seven (SS7) connectivity for up to 4 S-series I/O enclosures. Refer to NonStop i BladeSystems Platform Configurations (page 26).
Core licensing file: This file is required as of J06.16. Refer to Core Licensing (page 16).
Power Regulator: Supported as of J06.16. Refer to Power Regulator for NonStop i BladeSystems (page 29).
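Tables 1 through 3 share the same per-Storage-CLIM-pair limits (4 SAS disk enclosures, 100 HDDs, and 20 SSDs) and the same 2-to-16 processor range, so a planned configuration can be sanity-checked arithmetically. The following minimal Python sketch is illustrative only; the function and constant names are hypothetical and are not part of any HPE tool.

    # Illustrative check of a planned configuration against the maxima in
    # Tables 1-3. Not an HPE tool; names and structure are hypothetical.
    LIMITS = {
        "processors": (2, 16),                  # even counts from 2 to 16
        "clims_max": 48,                        # maximum CLIMs in a 16-CPU system
        "sas_enclosures_per_clim_pair": 4,
        "hdds_per_clim_pair": 100,
        "ssds_per_clim_pair": 20,
    }

    def check_plan(processors, clims, storage_clim_pairs, sas_enclosures, hdds, ssds):
        """Return a list of rule violations for a planned NonStop BladeSystem."""
        problems = []
        low, high = LIMITS["processors"]
        if not (low <= processors <= high and processors % 2 == 0):
            problems.append("processor count must be an even number from 2 to 16")
        if clims > LIMITS["clims_max"]:
            problems.append("too many CLIMs (48 is the documented 16-CPU maximum)")
        if storage_clim_pairs:
            if sas_enclosures > storage_clim_pairs * LIMITS["sas_enclosures_per_clim_pair"]:
                problems.append("too many SAS disk enclosures for the Storage CLIM pairs")
            if hdds > storage_clim_pairs * LIMITS["hdds_per_clim_pair"]:
                problems.append("too many HDDs for the Storage CLIM pairs")
            if ssds > storage_clim_pairs * LIMITS["ssds_per_clim_pair"]:
                problems.append("too many SSDs for the Storage CLIM pairs")
        return problems

    # Example: 8 processors, 2 Storage CLIM pairs plus 2 Networking CLIMs.
    print(check_plan(processors=8, clims=6, storage_clim_pairs=2,
                     sas_enclosures=6, hdds=120, ssds=10))   # -> [] (within limits)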

Figure 1 Example of NonStop i BladeSystems (Front Views)

Migrating NonStop i BladeSystems

Migrating a NonStop i BladeSystem requires removing all server blades in the system and installing the server blades that are supported on the system to which you are migrating. No application changes or recompilation are required. Migration is performed by service providers and requires these minimum RVUs:

NB54000c requires J06.11 or later RVUs
NB54000c-cg requires J06.12 or later RVUs
NB56000c and NB56000c-cg require J06.16 or later RVUs

Core Licensing

As of J06.13 and later RVUs, Hewlett Packard Enterprise provides a core licensing feature on BladeSystems NB54000c and NB54000c-cg. This feature is also supported on NB56000c and NB56000c-cg systems running J06.16 and later RVUs. The number of cores enabled on your system is determined by the core license file. For more information about this required file, refer to the NonStop Core Licensing Guide.

NonStop i BladeSystem Standard and Optional Hardware

c7000 Enclosure (page 17)
NonStop i Server Blades (page 18)
CLuster I/O Modules (CLIMs) (page 19)
SAS Disk Enclosure (page 23)
Fibre Channel Disk Module (FCDM) (page 24)
Maintenance Switch (page 24)
System Console (page 24)
UPS and ERM (Optional) (page 37)
Enterprise Storage System (Optional) (page 25)
NonStop S-series I/O Enclosures (Optional) (page 25)

Figure 1 shows three example system configurations.

c7000 Enclosure

The c7000 enclosure unifies NonStop i server blades and redundant ServerNet switch interconnects in a 10U footprint and features:

Up to 8 NonStop i Server Blades per c7000 enclosure, configured in pairs.

Two Onboard Administrator (OA) modules that provide detection, identification, and management services for the NonStop i BladeSystem while also allowing you to monitor and control resources using the HPE Insight Display, as described in the HP BladeSystem Onboard Administrator User Guide.

Two interconnect Ethernet switches that download HSS bootcode via the maintenance LAN.

Two ServerNet switches that provide ServerNet fabric connectivity.

For information about the LEDs associated with the c7000 enclosure components, refer to the HPE BladeSystem c7000 Enclosure Setup and Installation Guide.

NonStop i Server Blades

The NonStop i BladeSystem achieves full software fault tolerance by running the NonStop operating system on NonStop i Server Blades. With the server blade's multiple-core microprocessor architecture, a set of cores comprising instruction processing units (IPUs) shares the same memory map (except in low-level software), extending the traditional NonStop logical processor to a scalable multiprocessor.

NOTE: Mixed systems are not supported except for the duration of an online migration from NB54000c to NB56000c or NB54000c-cg to NB56000c-cg.

NonStop i Server Blade characteristics by BladeSystem:

NB50000c running J06.04 or later: Up to 48 GB of memory supported per BL860c full-height server blade with an Intel Itanium 9100 dual-core processor and a ServerNet interface mezzanine card to provide ServerNet fabric connectivity.

NB54000c running J06.11 or later: Up to 48 GB of memory supported per full-height BL860c i2 server blade with an Intel Itanium 9300 quad-core processor and a ServerNet interface mezzanine card to provide ServerNet fabric connectivity. All pre-J06.13 NB54000c and NB54000c-cg server blades with less than 64 GB of memory can be upgraded to 64 GB of memory. For instructions, have your service provider refer to the Replacing a FRU in a NonStop BladeSystem NB54000c, NB54000c-cg, or NB56000c, NB56000c-cg Server Blade, or Adding Memory or a Server Blade service procedure.

NB54000c running J06.13 or later: Up to 64 GB of memory supported per full-height server blade with an Intel Itanium 9300 quad-core processor and a ServerNet interface mezzanine card to provide ServerNet fabric connectivity. NB54000c blades with up to 48 GB of memory and NB54000c blades with 64 GB of memory can coexist in the same system if the RVU is J06.13 or later.

NB56000c running J06.16 or later: Up to 96 GB of memory supported per full-height BL860c i4 server blade with an Intel Itanium 9500 quad-core processor and a ServerNet interface mezzanine card to provide ServerNet fabric connectivity.
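Because the supported memory per logical processor differs by model (up to 48 GB on the NB50000c, up to 64 GB on the NB54000c with J06.13 or later, and up to 96 GB on the NB56000c), total system memory is simply the per-processor amount multiplied by the processor count. A minimal Python sketch under that assumption follows; it is illustrative only and not an HPE tool.

    # Minimal sketch: total main memory for a planned processor count.
    # Per-logical-processor maxima come from the blade descriptions above.
    MAX_GB_PER_PROCESSOR = {
        "NB50000c": 48,   # BL860c, Itanium 9100 dual-core
        "NB54000c": 64,   # BL860c i2, Itanium 9300 quad-core (64 GB needs J06.13+)
        "NB56000c": 96,   # BL860c i4, Itanium 9500 quad-core
    }

    def total_memory_gb(model, processors, gb_per_processor):
        if gb_per_processor > MAX_GB_PER_PROCESSOR[model]:
            raise ValueError(f"{model} supports at most "
                             f"{MAX_GB_PER_PROCESSOR[model]} GB per logical processor")
        return processors * gb_per_processor

    print(total_memory_gb("NB56000c", processors=16, gb_per_processor=96))  # 1536 GB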

CLuster I/O Modules (CLIMs)

NonStop i BladeSystems support the IP CLIM, Telco CLIM, and Storage CLIM, which function as Ethernet or I/O adapters and are managed by the Cluster I/O Protocols (CIP) subsystem. A CLIM is identified by the number on the rear label; this same number is also listed as the part number in OSM.

This illustration shows the front views of the NonStop i Gen8 and NonStop i R5 (Gen9) CLIMs. The Gen8 CLIM is supported on the NB54000c and NB56000c. The Gen9 CLIM is supported on the NB56000c.

More information:
Earlier CLIM Models (G2, G5, and G6 CLIMs) (page 166)
Cluster I/O Protocols (CIP) Configuration and Management Manual
CLuster I/O Module (CLIM) Software Compatibility Guide

IP CLIM and Telco CLIM

The IP CLIM and Telco CLIM are sometimes referred to as Networking CLIMs. These CLIMs function as ServerNet Ethernet adapters, providing standard Gigabit Ethernet Network Interface Cards (NICs) to implement one of these CLIM configurations.

Gen8 and Gen9 IP and Telco CLIM Option 1: Five Ethernet Copper Ports

Characteristics of NonStop i 5C IP and Telco CLIMs (Gen8 and Gen9):
Interface slot 1: ServerNet PCIe interface card provides the ServerNet fabric connections
Interface slot 2: GbE 2-port adapter copper NIC for customer interfaces
eth1, eth2, eth3: Three GbE copper ports for customer data
eth0 is reserved and provides maintenance support

Gen8 and Gen9 IP and Telco CLIM Option 2: Three Ethernet Copper and Two Optical Ports

Characteristics of NonStop i 3C/2F IP and Telco CLIMs (Gen8 and Gen9):
Interface slot 1: ServerNet PCIe interface card provides the ServerNet fabric connections
Interface slot 2: GbE 2-port adapter optical NIC for customer interfaces
eth1, eth2, eth3: Three GbE copper ports for customer data
eth0 is reserved and provides maintenance support

RJ45 Cable Management

A cable management panel is used for the RJ45 connections to the Networking CLIMs and is preinstalled in all new systems to provide easy access to customer-usable interfaces. Service providers, refer to the QMS Tech Doc and the CLuster I/O Module (CLIM) Installation and Configuration Guide (H06.16+, J06.04+).

Storage CLIM

The Storage CLIM functions as an I/O adapter for the system and supports SAS disk drives and SAS tapes, and optionally ESS and FC tape devices, via 3 PCIe HBA slots.

Characteristics of NonStop i Storage CLIMs (Gen8 and Gen9), HBA by slot:
Slot 1: HBA (part of the base configuration) provides the ServerNet fabric connections
Slot 2: SAS HBA with two 6 Gbps SAS ports, or FC HBA with two 8 Gbps FC ports (must be ordered)
Slot 3: Optional order of SAS HBA with two 6 Gbps SAS ports, or FC HBA with two 8 Gbps FC ports

SAS Disk Enclosure

The SAS disk enclosure provides the storage capacity for the Storage CLIM and supports SAS HDDs and SAS SSDs.

The D3700 SAS disk enclosure is supported by the Gen9 Storage CLIM. This enclosure holds SAS Smart Carrier HDDs and SSDs, with redundant power and cooling.

The D2700 SAS disk enclosure is supported by G6 and Gen8 Storage CLIMs. The D2700 SAS disk enclosure holds SAS universal carrier HDDs and SSDs, with redundant power and cooling.

As of J06.12 and later RVUs, you can partition some SAS HDDs and all SSDs in SAS disk enclosures connected to G6, Gen8, or Gen9 CLIMs. Only newer HDDs support disk partitioning.

CAUTION: If the WRITECACHE attribute is enabled on an HDD or SSD disk volume that is connected to a Storage CLIM, using a rack-mounted HPE UPS to prevent data loss on that volume is recommended. The WRITECACHE enabled (WCE) option controls whether write caching is performed for disk writes.

More information:
Earlier Storage CLIMs and SAS disk enclosures
SCF Reference Manual for the Storage Subsystem (G06.28+, H06.05+, J06.03+)

HP D2600/D2700 Disk Enclosure User Guide
HP D3600/D3700 Disk Enclosure User Guide
UPS and Data Center Power Configurations (page 188)
System Installation Specifications for NonStop i BladeSystems (page 40)

IOAM Enclosure

NOTE: The Fibre Channel to SCSI router is not supported on NonStop i BladeSystems.

The IOAM enclosure is part of some NonStop i BladeSystem configurations. The IOAM enclosure uses Gigabit Ethernet 4-port ServerNet adapters (G4SAs) for networking connectivity and Fibre Channel ServerNet adapters (FCSAs) for Fibre Channel connectivity between the system and Fibre Channel disk modules (FCDMs), ESS, and Fibre Channel tape.

More information:
Legacy Hardware (IOAM, FCDM, S-Series I/O Enclosure) (page 173)
IOAM Enclosure Group-Module-Slot Numbering (page 177)
System Installation Specifications for NonStop i BladeSystems (page 40)

G6SE Enclosure

The G6SE enclosure is part of some NonStop i configurations. The G6SE is a 6-port Ethernet solution. For more information about the G6SE, refer to the G6SE Ethernet Connectivity Guide for NonStop BladeSystems.

Fibre Channel Disk Module (FCDM)

The Fibre Channel disk module (FCDM) is a rack-mounted enclosure that can only be used with NonStop i BladeSystems that have IOAM enclosures. The FCDM connects to an FCSA in an IOAM enclosure. You can daisy-chain together up to four FCDMs, with 14 drives in each one. For more information about FCDMs, refer to Legacy Hardware (IOAM, FCDM, S-Series I/O Enclosure) (page 173).

Maintenance Switch

The ProCurve maintenance switch provides the communication between the NonStop i BladeSystem (through the Onboard Administrator and the c7000 enclosure interconnect Ethernet switches), the IP, Storage, and Telco CLIMs, G6SE enclosures, IOAM enclosures, the optional UPS, and the system consoles running OSM. The NonStop i BladeSystem requires multiple connections to the maintenance switch. Refer to the NonStop i BladeSystem Hardware Installation Manual for these connections.

System Console

A system console is a Windows server purchased from Hewlett Packard Enterprise that runs maintenance and diagnostic software for NonStop i BladeSystems. When supplied with a new NonStop i BladeSystem, system consoles have factory-installed Hewlett Packard Enterprise and third-party software for managing the system. You can install software upgrades from the NonStop System Console Installer DVD. Two system consoles, a primary and a backup, are required to manage NonStop i BladeSystems.

Enterprise Storage System (Optional)

An Enterprise Storage System (ESS) is a collection of magnetic disks, their controllers, and a disk cache in one or more standalone racks. The ESS connects to the NonStop i BladeSystem via the Storage CLIM's Fibre Channel HBA ports (direct connect), Fibre Channel ports on the IOAM enclosures (direct connect), or through a separate storage area network (SAN) using a Fibre Channel SAN switch (switched connect). For more information about these connection types, see your service provider.

NOTE: The Fibre Channel SAN switch power cords might not be compatible with the rack PDU. Contact your service provider to order replacement power cords if needed.

Cables and switches vary, depending on whether the connection is direct, switched, or a combination:

Direct connect: 2 Fibre Channel ports on the IOAM (LC-LC), or 2 Fibre Channel HBA interfaces on the Storage CLIM (LC-MMF); no Fibre Channel switches.
Switched: 4 Fibre Channel ports (LC-LC), or 4 Fibre Channel HBA interfaces on the Storage CLIM (LC-MMF); 1 or more Fibre Channel switches.
Combination of direct and switched: 2 Fibre Channel ports for each direct connection and 4 Fibre Channel ports for each switched connection; 1 or more Fibre Channel switches.

The customer must order the FC HBA interfaces on the Storage CLIM.

For fault tolerance, the primary and backup paths to an ESS logical device (LDEV) must go through different Fibre Channel switches. Some storage area network procedures, such as reconfiguration, can cause the affected switches to pause. If the pause is long enough, I/O failure occurs on all paths connected to that switch. If both the primary and the backup paths are connected to the same switch, the LDEV goes down. Refer to the documentation that accompanies the ESS.

NonStop S-series I/O Enclosures (Optional)

Up to four NonStop S-series I/O enclosures (Groups 1-4) can be connected to the ServerNet switches in a c7000 enclosure (Group 100 only). Refer to Legacy Hardware (IOAM, FCDM, S-Series I/O Enclosure) (page 173).

NonStop i BladeSystems Platform Configurations

NonStop i BladeSystems support platform configuration options via the OSM Low-Level Link. All new commercial and carrier-grade BladeSystems arrive preconfigured with one of these options:

Platform Configuration 1: This option is for the NB50000c, NB54000c, and NB56000c and supports processors that are configured sequentially: 0 through 7 in enclosure 100 and 8 through 15 in enclosure 101.

Platform Configuration 10: This option is the Flex Processor Bay Configuration, which is supported on all NonStop BladeSystems except the NB50000c and NB50000c-cg and which offers two choices for processor numbering:
  Sequential numbering: Processors are numbered 0 through 7 in enclosure 100 and 8 through 15 in enclosure 101.
  Even/odd numbering: Alternately, processors are numbered starting with 0 in enclosure 100 and 1 in enclosure 101 and continue alternating between the two enclosures in an even/odd manner, as shown in Figure 2 (page 27).
  NOTE: Even/odd numbering requires a second c7000 enclosure. If you want to add a second c7000 enclosure, ask your service provider to refer to the Adding a c7000 Enclosure in a NonStop BladeSystem service procedure.
  TIP: The platform configuration is preconfigured on a new system. However, if you want to change an NB54000c or NB56000c platform configuration to or from the Flex Processor Bay Configuration, or change processor numbering from even/odd to sequential or vice versa, ask your service provider to refer to the Changing a NonStop BladeSystem NB54000c, NB54000c-cg or NB56000c, NB56000c-cg to or from Flex Processor Bay Configuration service procedure.

Platform Configuration 9: This option is for NonStop S-series I/O enclosure connections to the NB50000c, NB54000c, and NB56000c and supports processors that are configured sequentially: 0 through 7 in enclosure 100 and 8 through 15 in enclosure 101. Platform configuration 9 is the only configuration option that supports connections to NonStop S-series I/O enclosures. This configuration option does not support even/odd processor numbering.

Figure 2 Example of a NonStop i BladeSystem with 16 Processors (Front View), Flexible Processor Bay Configuration

NOTE: The above configuration is supported on the NB54000c and NB56000c only. NB54000c and NB56000c blades cannot coexist within the same system except for the duration of an online migration.

2 Managing and Locating NonStop i BladeSystem Components

This chapter describes the management tools for the NonStop i BladeSystem, how to locate and identify system components, Power Regulator settings, default naming conventions, and the group-module-slot numbering for system components.

Management Tools for NonStop i BladeSystems

NOTE: For information about changing the default passwords for NonStop i BladeSystem components and associated software, refer to Changing Customer Passwords (page 30).

This subsection describes the management tools available on your NonStop BladeSystem.

OSM Package

The HPE Open System Management (OSM) product is the required system management tool for NonStop i BladeSystems. There are several OSM tools, with online help, for managing the systems. For more information, refer to the OSM Configuration Guide, the help within the OSM tools, or the OSM Service Connection User's Guide. For more information on using OSM tools to manage the NonStop maintenance LAN and system console configurations, have your service provider refer to the NonStop Dedicated Service LAN Installation and Configuration Guide.

Onboard Administrator (OA) and Integrated Lights Out (iLO)

The OA is the enclosure's management processor, subsystem, and firmware base and supports the c7000 enclosure and NonStop i Server Blades. The OA software is integrated with OSM and the Integrated Lights Out (iLO) management interface. The iLO enables you to perform activities on the system from a remote location and provides anytime access to system management information, such as hardware health, event logs, and configuration, to troubleshoot the server blades. The OA can generate a full inventory, status, and configuration report of all the components the OA supports; this is the so-called SHOW ALL report. For details on how to generate this report, refer to the HP BladeSystem Onboard Administrator User Guide.

Cluster I/O Protocols (CIP) Subsystem

The Cluster I/O Protocols (CIP) subsystem provides a configuration and management interface for I/O on NonStop i BladeSystems. The CIP subsystem has several tools for monitoring and managing the subsystem. For more information about these tools and the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual.

Subsystem Control Facility (SCF) Subsystem

The Subsystem Control Facility (SCF) also provides monitoring and management of the CIP subsystem on NonStop i BladeSystems. See the Cluster I/O Protocols (CIP) Configuration and Management Manual for more information about using these two subsystems.

Technical Document for NonStop i BladeSystems

Each new system includes a detailed Technical Document that serves as the connection map for the system and describes:

The rack included with the system and each enclosure installed in the rack
The rack U location at the bottom edge of each enclosure
Each cable, with source, destination, connector type, and cable part number

TIP: It is important to retain all NonStop i BladeSystem records in an Installation Document Packet, including the Technical Document for your system and any configuration forms. To add CLIM configuration forms to your Installation Document Packet, have your service provider copy the forms from the CLuster I/O Module (CLIM) Installation and Configuration Manual.

Power Regulator for NonStop i BladeSystems

NOTE: Power Regulator is not supported on the NB50000c or NB50000c-cg. This feature requires:
OSM server SPR T0682ACV or later on an NB54000c or NB54000c-cg running J06.14 and later RVUs.
OSM server SPR T0682ADF or later on an NB56000c or NB56000c-cg running J06.16 and later RVUs.

Power Regulator manages power modes. Power Regulator must be enabled via the OSM Service Connection's Enable/Disable Blade Power Regulator Management action for the system. Once enabled, these Power Regulator modes are available:

Static High Performance Mode (default): The CPU runs at the maximum performance/power consumption supported by your NonStop configuration. This mode ensures maximum performance but does not provide any power savings. This is the default mode for all systems, including shipped systems.

Dynamic Power Savings Mode: IPUs run at maximum power/performance until an IPU idles. Once the IPU idles (and as long as it does not need to off-load work from other IPUs), it moves to the Itanium power-saving state until a task needs to run on it, whereupon the IPU executes instructions at maximum power/performance again.

Static Low Power Mode: The CPU is set to a lower power state. This state saves power by having the CPU operate at a lower frequency, with resulting lower performance capacity. The performance impact is workload-dependent.

A CPU is the logical processor that consists of a set of one or more IPUs. An IPU (Instruction Processing Unit) is a microprocessor core.

NOTE: The OS Control Power Regulator mode is not supported on NonStop i systems. If the OS Control Power Regulator mode is selected, the command fails and the existing Power Regulator setting is left unchanged.

For information on using Power Regulator, refer to HPE SIM for NonStop Manageability, and to the NonStop Firmware Matrices for information on the Server Firmware bundle (system firmware only).
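The three supported modes trade performance against power draw, and OS Control mode is rejected on NonStop i systems. The short Python sketch below only restates the descriptions above in code form; it is not an OSM or iLO interface, and the names are hypothetical.

    # Illustrative summary of the Power Regulator modes described above.
    # Modes are actually selected through the OSM Service Connection.
    from enum import Enum

    class PowerRegulatorMode(Enum):
        STATIC_HIGH_PERFORMANCE = "CPU always at maximum performance/power (default)"
        DYNAMIC_POWER_SAVINGS = "idle IPUs drop to the Itanium power-saving state until work arrives"
        STATIC_LOW_POWER = "CPU runs at a lower frequency; saves power, reduces capacity"

    def select_mode(name):
        if name == "OS_CONTROL":
            raise ValueError("OS Control mode is not supported on NonStop i systems")
        return PowerRegulatorMode[name]

    print(select_mode("DYNAMIC_POWER_SAVINGS").value)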

Changing Customer Passwords

NonStop i BladeSystems are shipped with default user names and default Administrator passwords for certain components and software. Once the system is set up, you should change these passwords to your own passwords. For instructions, refer your service provider to the NonStop i BladeSystem Hardware Installation Manual.

Default Naming Conventions for NonStop i BladeSystem Resources

With a few exceptions, default naming conventions are not necessary for system resources. However, default naming conventions have been preconfigured for the following resources to simplify initial configuration files and automatic generation of these resources.

NOTE: The naming conventions for a CLIM are based on the group, module, slot, port, and fiber values of the CLIM's X attachment point.

Default naming conventions (type of object: naming convention, example, description):

IB CLuster I/O Module (CLIM): Bgroup module slot port fiber. Example: B1002534, an IB CLIM whose X attachment point is the ServerNet switch port at group 100, module 2, slot 5, port 3, fiber 4.
IP CLuster I/O Module (CLIM): Ngroup module slot port fiber. Example: N1002532, an IP CLIM whose X attachment point is the ServerNet switch port at group 100, module 2, slot 5, port 3, fiber 2.
Storage CLIM: Sgroup module slot port fiber. Example: S1002533, a Storage CLIM whose X attachment point is the ServerNet switch port at group 100, module 2, slot 5, port 3, fiber 3.
Telco CLIM: Ogroup module slot port fiber. Example: O1002534, a Telco CLIM whose X attachment point is the ServerNet switch port at group 100, module 2, slot 5, port 3, fiber 4.
SAS disk volume: $SASnumber. Example: $SAS20, the twentieth SAS disk volume in the system.
ESS disk volume: $ESSnumber. Example: $ESS20, the twentieth ESS disk drive in the system.
Fibre Channel disk drive: $FCnumber. Example: $FC10, the tenth Fibre Channel disk drive in the system.
Tape drive: $TAPEnumber. Example: $TAPE0, the first tape drive in the system.
Maintenance CIPSAM process: $ZTCPnumber. Example: $ZTCP0, the first maintenance CIPSAM process for the system.
Maintenance provider: ZTCPnumber. Example: ZTCP0, the first maintenance provider for the system, associated with the CIPSAM process $ZTCP0.
Maintenance CIPSAM process: $ZTCPnumber. Example: $ZTCP1, the second maintenance CIPSAM process for the system.

Maintenance provider: ZTCPnumber. Example: ZTCP1, the second maintenance provider for the system, associated with the CIPSAM process $ZTCP1.
IPDATA CIPSAM process: $ZTCnumber. Example: $ZTC0, the first IPDATA CIPSAM process for the system.
IPDATA provider: ZTCnumber. Example: ZTC0, the first IPDATA provider for the system.
Maintenance Telserv process: $ZTNPnumber. Example: $ZTNP1, the second maintenance Telserv process for the system, associated with the CIPSAM process $ZTCP1.
Non-maintenance Telserv process: $ZTNnumber. Example: $ZTN0, the first non-maintenance Telserv process for the system, associated with the CIPSAM process $ZTC0.
Listener process: $ZPRPnumber. Example: $ZPRP1, the second maintenance Listener process for the system, associated with the CIPSAM process $ZTCP1.
Non-maintenance Listener process: $LSNnumber. Example: $LSN0, the first non-maintenance Listener process for the system, associated with the CIPSAM process $ZTC0.
TFTP process: Automatically created by WANMGR.
WANBOOT process: Automatically created by WANMGR.
SWAN adapter: Snumber. Example: S19, the nineteenth SWAN adapter in the system.

Possible Values of Disk and Tape LUNs

The possible values of disk and tape LUN numbers depend on the type of the resource.

For a SAS disk, the LUN number is calculated as base LUN + offset:
base LUN is the base LUN number for the SAS enclosure. Its value can be 100, 200, 300, 400, 500, 600, 700, 800, or 900, and should be numbered sequentially for each of the SAS enclosures attached to the same CLIM.
offset is the bay (slot) number of the disk in the SAS enclosure.

For an ESS disk, the LUN number is calculated as base LUN + offset:
base LUN is the base LUN number for the ESS port. Its value can be 1000, 1500, 2000, 2500, 3000, 3500, 4000, or 4500, and should be numbered sequentially for each of the ESS ports attached to the same CLIM.
offset is the LUN number of the ESS LUN.
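Both the default CLIM names and the disk LUN values above are produced mechanically, so a short sketch can make them concrete. The prefixes (B, N, S, O), the GMSPF fields, and the base LUN + offset rule come from this section; the exact way the fields are concatenated into a name is an assumption here, and the helper names are hypothetical Python, not an HPE interface.

    # Illustrative sketch of the default naming convention and disk LUN
    # arithmetic described above. Field concatenation is an assumption.
    CLIM_PREFIX = {"IB": "B", "IP": "N", "STORAGE": "S", "TELCO": "O"}

    def default_clim_name(clim_type, group, module, slot, port, fiber):
        """Default CLIM name built from its X-fabric attachment point (GMSPF)."""
        return f"{CLIM_PREFIX[clim_type]}{group}{module}{slot}{port}{fiber}"

    def sas_disk_lun(base_lun, bay):
        """SAS disk LUN: enclosure base LUN (100, 200, ... 900) plus bay number."""
        return base_lun + bay

    def ess_disk_lun(base_lun, ess_lun):
        """ESS disk LUN: ESS port base LUN (1000, 1500, ... 4500) plus the ESS LUN."""
        return base_lun + ess_lun

    # IP CLIM attached at group 100, module 2, slot 5, port 3, fiber 2:
    print(default_clim_name("IP", 100, 2, 5, 3, 2))   # N1002532
    # Disk in bay 7 of the second SAS enclosure on a CLIM (base LUN 200):
    print(sas_disk_lun(200, 7))                       # 207

Tape LUNs, described next, follow the same base LUN + offset pattern (or a simple 1-9 sequence for physical Fibre Channel tapes).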

For a physical Fibre Channel tape, the value of the LUN number can be 1, 2, 3, 4, 5, 6, 7, 8, or 9, and should be numbered sequentially for each of the physical tapes attached to the same CLIM.

For a VTS tape, the LUN number is calculated as base LUN + offset:
base LUN is the base LUN number for the VTS port. Its value can be 5000, 5010, 5020, 5030, 5040, 5050, 5060, 5070, 5080, or 5090, and should be numbered sequentially for each of the VTS ports attached to the same CLIM.
offset is the LUN number of the VTS LUN.

NonStop i BladeSystem Component Location and Identification

Terminology

These are terms used in locating and describing components:

Rack: Structure where rackmountable components are assembled. The rack uses this naming convention: system-name-racknumber.
Rack Offset: The physical location of components installed in a rack, measured in U values numbered 1 to 42, with 1U at the bottom of the rack. A U is 1.75 inches (44 millimeters).
Group: A subset of a system that contains one or more modules. A group does not necessarily correspond to a single physical object, such as an enclosure.
Module: A subset of a group that is usually contained in an enclosure. A module contains one or more slots (or bays). A module can consist of components sharing a common interconnect, such as a backplane, or it can be a logical grouping of components performing a particular function.
Slot (or Bay or Position): A subset of a module that is the logical or physical location of a component within that module.
Port: A connector to which a cable can be attached and which transmits and receives data.
Fiber: Number (one to four) of the fiber pair (LC connector) within an MTP-LC fiber cable. An MTP-LC fiber cable has a single MTP connector on one end and four LC connectors, each containing a pair of fibers, at the other end. The MTP connector connects to the ServerNet switch in the c7000 enclosure and the LC connectors connect to the CLIM.
Group-Module-Slot (GMS), Group-Module-Slot-Bay (GMSB), Group-Module-Slot-Port (GMSP), Group-Module-Slot-Port-Fiber (GMSPF): A notation method used by hardware and software in NonStop i BladeSystems for organizing and identifying the location of certain hardware components.
NonStop Server Blade: A server blade that provides processing and ServerNet connections.
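The rack-offset terminology above reduces to simple arithmetic: 1U is 1.75 inches (44 mm), the rack has 42 U positions, and a component's offset is the lowest U it occupies. The small Python sketch below is illustrative only; the example enclosure heights other than the 10U c7000 (see Chapter 1) are hypothetical.

    # Sketch of rack-offset arithmetic: 1U = 1.75 in (44 mm), 42U rack,
    # offset = lowest U occupied. Illustrative only.
    RACK_UNITS = 42
    INCHES_PER_U = 1.75

    def rack_space_ok(enclosures):
        """enclosures: list of (name, offset_u, height_u).
        True if everything fits in the 42U rack without overlapping."""
        occupied = set()
        for name, offset, height in enclosures:
            top = offset + height - 1
            if offset < 1 or top > RACK_UNITS:
                return False                  # falls outside the 1-42U range
            span = set(range(offset, top + 1))
            if span & occupied:
                return False                  # overlaps another enclosure
            occupied |= span
        return True

    layout = [("c7000 enclosure", 1, 10),          # 10U c7000 at offset 1
              ("example 2U component", 11, 2)]     # hypothetical 2U unit at offset 11
    print(rack_space_ok(layout))    # True
    print(10 * INCHES_PER_U)        # 17.5 inches of vertical space for one c7000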

On NonStop i BladeSystems, locations of the modular components are identified by:

Physical location: rack number and rack offset
Logical location: group, module, and slot (GMS) notation, as defined by the component's position on the ServerNet rather than its physical location

OSM uses GMS notation in many places, including the tree view and the Attributes window, and it uses rack and offset information to create displays of the server and its components.

Rack and Offset Physical Location

Rack name and rack offset identify the physical location of components in a NonStop i BladeSystem. The rack name is located on an external label affixed to the rack, which includes the system name plus a 2-digit rack number. Rack offset is labeled on the rails in each side of the rack. These rails are measured vertically in units called U, with one U measuring 1.75 inches (44 millimeters). The rack is 42U, with 1U located at the bottom and 42U at the top. The rack offset is the lowest number on the rack that the component occupies.

ServerNet Switch Group-Module-Slot Numbering

Group (100-101):
Group 100 is the first c7000 processor enclosure, containing logical sequential processors 0-7 or even processors 0, 2, 4, 6, 8, 10, 12, and 14.
Group 101 is the second c7000 processor enclosure, containing logical sequential processors 8-15 or odd processors 1, 3, 5, 7, 9, 11, 13, and 15.

Module (2-3):
Module 2 is the X fabric.
Module 3 is the Y fabric.

Slot (5 or 7):
Slot 5 contains the double-wide ServerNet switch for the X fabric.
Slot 7 contains the double-wide ServerNet switch for the Y fabric.

NOTE: There are two types of c7000 ServerNet switches: Standard I/O and High I/O. For more information and illustrations of the ServerNet switch ports, refer to I/O Connections (Standard and High I/O ServerNet Switch Configurations) (page 83).

Port (1-18):
Ports 1 through 2 support the inter-enclosure links. Port 1 is marked GA. Port 2 is marked GB.
Ports 3 through 8 support the I/O links (IP CLIM, Storage CLIM, Telco CLIM, IB CLIM, IOAM, or NonStop S-series I/O enclosures). If NonStop S-series I/O enclosures are present, CLIMs cannot be connected to port 3 of the ServerNet switches in a c7000 enclosure.

34 NOTE: IOAMs must use Ports 4 through 7. These ports support 4-way IOAM links. For information on G6SE connections to the ServerNet switch, refer to the G6SE Ethernet Connectivity Guide for NonStop BladeSystems.
Ports 9 and 10 support the cross links between two ServerNet switches in the same enclosure.
Ports 11 and 12 support the links to a cluster switch. SH on Port 11 stands for short haul. LH on Port 12 stands for long haul.
Ports 13 through 18 are only supported if your BladeSystem participates in a BladeCluster and uses the BladeCluster ServerNet switch. Refer to the BladeCluster Solution Manual for information about the BladeCluster Solution.
Fiber (1-4): These fibers support up to 4 ServerNet links on ports 3-8 of the c7000 enclosure ServerNet switch.
NonStop i Server Blade Group-Module-Slot Numbering
When the NonStop i server blades are powered on and functioning, default numbering is either sequential, as shown in the table below, or even/odd, as shown in Even/Odd Processor Numbering.
Sequential Processor Numbering
GMS Numbering For the Logical Processors Sequential Numbering: Processor ID Group* Module Slot* *In the OSM Service Connection, the term Enclosure is used for the group and the term Bay is used for the slot.
34 Managing and Locating NonStop i BladeSystem Components
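As a quick illustration of the group numbering described above, the short Python sketch below maps a logical processor number to its c7000 enclosure group under both default numbering schemes. It is an illustrative aid only, not part of OSM or of any NonStop software; the function name and the "sequential"/"even_odd" mode strings are this example's own.

```python
# Illustrative sketch: which c7000 enclosure group (100 or 101) holds a given
# logical processor, for the two default numbering schemes described above.
# The mode labels "sequential" and "even_odd" are this sketch's own.

def enclosure_group(processor: int, mode: str = "sequential") -> int:
    if not 0 <= processor <= 15:
        raise ValueError("logical processors are numbered 0 through 15")
    if mode == "sequential":
        # Group 100 holds processors 0-7; group 101 holds processors 8-15.
        return 100 if processor <= 7 else 101
    if mode == "even_odd":
        # Group 100 holds the even processors; group 101 holds the odd ones.
        return 100 if processor % 2 == 0 else 101
    raise ValueError("mode must be 'sequential' or 'even_odd'")

if __name__ == "__main__":
    print(enclosure_group(9, "sequential"))   # -> 101
    print(enclosure_group(9, "even_odd"))     # -> 101
    print(enclosure_group(4, "even_odd"))     # -> 100
```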

35 Even/Odd Processor Numbering
GMS Numbering For the Logical Processors Even/Odd Numbering: Processor ID Group* Module Slot* *In the OSM Service Connection, the term Enclosure is used for the group and the term Bay is used for the slot. Figure 2 (page 27) shows the even/odd processor numbering of server blades in a c7000 enclosure.
CLIM Enclosure Group-Module-Slot-Port-Fiber Numbering
This table shows the valid values for GMSPF numbering for the X and Y ServerNet switch connection points to a CLIM:
ServerNet switch: Group 100, 101; Module 2, 3; Slots 5, 7; Ports 3 to 8; Fibers 1-4
G6SE Enclosure Group-Module-Slot-Port Numbering
For G6SE enclosure GMSP numbering, refer to the G6SE Ethernet Connectivity Guide for NonStop BladeSystems.
IOAM Enclosure Group-Module-Slot Numbering
Refer to IOAM Enclosure Group-Module-Slot Numbering (page 177).
Fibre Channel Disk Module Group-Module-Slot Numbering
Refer to Fibre Channel Disk Module Group-Module-Slot Numbering (page 175).
NonStop i BladeSystem Component Location and Identification 35
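To make the GMSPF ranges above concrete, here is a small Python sketch that assembles and validates a GMSPF identifier for a CLIM connection point. It is purely illustrative; the dotted output format and the function name are this example's own, and the valid values are taken from the table and the Group-Module-Slot descriptions above.

```python
# Illustrative sketch: build a GMSPF identifier for a CLIM connection to a
# c7000 ServerNet switch, checking each field against the values above.
# The "group.module.slot.port.fiber" output format is this sketch's own choice.

VALID_GROUPS = (100, 101)            # first and second c7000 enclosure
FABRIC_MODULE_SLOT = {2: 5, 3: 7}    # module 2 (X fabric) -> slot 5, module 3 (Y fabric) -> slot 7
VALID_PORTS = range(3, 9)            # ports 3 to 8 carry the I/O links
VALID_FIBERS = range(1, 5)           # fiber pairs 1 to 4 in the MTP-LC cable

def gmspf(group: int, module: int, slot: int, port: int, fiber: int) -> str:
    if group not in VALID_GROUPS:
        raise ValueError("group must be 100 or 101")
    if module not in FABRIC_MODULE_SLOT or FABRIC_MODULE_SLOT[module] != slot:
        raise ValueError("module/slot must be 2/5 (X fabric) or 3/7 (Y fabric)")
    if port not in VALID_PORTS:
        raise ValueError("CLIM I/O links use ports 3 through 8")
    if fiber not in VALID_FIBERS:
        raise ValueError("fibers are numbered 1 through 4")
    return f"{group}.{module}.{slot}.{port}.{fiber}"

if __name__ == "__main__":
    # Fiber 2 of I/O port 5 on the X-fabric switch of the first enclosure.
    print(gmspf(100, 2, 5, 5, 2))   # -> 100.2.5.5.2
```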

36 3 Site Preparation Guidelines NB50000c, NB54000c, and NB56000c
This section describes power, environmental, and space considerations for your site.
Rack Power and I/O Cable Entry
Depending on the rack order and the routing of the AC power feeds at the site, AC power cords for the PDUs exit either: Top: Power and I/O cables are routed from above the rack. Bottom: Power and I/O cables are routed from below the rack.
Emergency Power-Off Switches
Emergency power off (EPO) switches are required by local codes or other applicable regulations when computer equipment contains batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes. Systems that have these batteries also have internal EPO hardware for connection to a site EPO switch or relay. In an emergency, activating the EPO switch or relay removes power from all electrical equipment in the computer room (except that used for lighting and fire-related sensors and alarms).
EPO Requirement for NonStop BladeSystems
NonStop i BladeSystems without an optional UPS (such as an R12000/3 or R5000 UPS) installed in the modular rack do not contain batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes, so they do not require connection to a site EPO switch.
EPO Requirement for R5000 UPS
NOTE: For a single-phase power configuration, two R5000 UPS's are required.
The rack-mounted R5000 UPS is supported for a single-phase power configuration. Each UPS contains batteries, has an EPO circuit, and can be optionally installed in a rack. For site EPO switches or relays, consult your Hewlett Packard Enterprise site preparation specialist or electrical engineer regarding requirements. If an EPO switch or relay connector is required for your site, contact your Hewlett Packard Enterprise representative or refer to the appropriate manual in UPS and ERM (Optional) (page 37).
EPO Requirement for R12000/3 UPS
The rack-mounted R12000/3 UPS is supported for a three-phase power configuration. This UPS contains batteries, has a remote EPO (REPO) port, and can be optionally installed in a rack. For site EPO switches or relays, consult your Hewlett Packard Enterprise site preparation specialist or electrical engineer regarding requirements. If an EPO switch or relay connector is required for your site, contact your Hewlett Packard Enterprise representative or refer to the HPE 3 Phase UPS User Guide for connectors and wiring for the R12000/3 UPS. For information about the R12000/3 UPS's management module, refer to the HPE UPS Management Module User Guide.
36 Site Preparation Guidelines NB50000c, NB54000c, and NB56000c

37 Electrical Power and Grounding Quality
Proper design and installation of a power distribution system for a NonStop i BladeSystem requires specialized skills, knowledge, and understanding of appropriate electrical codes and the limitations of the power systems for computer and data processing equipment. For power and grounding specifications, refer to Enclosure AC Input (page 63).
Power Quality
This equipment is designed to operate reliably over a wide range of voltages and frequencies, described in Enclosure AC Input (page 63). However, damage can occur if these ranges are exceeded, and severe electrical disturbances can exceed the design specifications of the equipment. Common sources of such disturbances are:
Fluctuations occurring within the facility's distribution system
Utility service low-voltage conditions (such as sags or brownouts)
Wide and rapid variations in input voltage levels or input power frequency
Electrical storms or large inductive sources (such as motors and welders)
Faults in the distribution system wiring (such as loose connections)
To protect the system from electrical disturbances, use a dedicated power distribution system, power conditioning equipment, and lightning arresters on power cables. For assistance, consult with your Hewlett Packard Enterprise site preparation specialist or power engineer.
Grounding Systems
The site building must provide a power distribution safety ground/protective earth for each AC service entrance to all NonStop i BladeSystem equipment. This safety grounding system must comply with local codes and any other applicable regulations for the installation locale. For proper grounding/protective earth connection, consult with your Hewlett Packard Enterprise site preparation specialist or power engineer.
Power Consumption
To calculate the total power consumption for the hardware installed in the rack, refer to Enclosure Power Loads (page 64).
UPS and ERM (Optional)
A rack-mounted uninterruptible power supply (UPS) is optional but recommended to provide power during power failures when a site UPS is not available. For information on using OSM to manage a site UPS, refer to the OSM Configuration Guide. HPE supports these rack-mounted UPS modules, each of which supports up to two HPE ERMs per UPS; do not mix UPS and ERM types. Supported UPS: Single-phase R5000; Three-phase R12000/3. UPS Manuals: HPE UPS R5000 User Guide: HPE UPS Network Module User Guide: HPE 3 Phase UPS User Guide: HPE UPS Management Module User Guide: More information: Power Specifications; UPS and ERM Checklist
Electrical Power and Grounding Quality 37

38 UPS and ERM Checklist
Verify:
UPS's and ERMs are in the lowest portion of the system to avoid tipping and stability issues.
No more than two HPE ERMs are used per UPS; no mixing of UPS or ERM types.
The manufacturing default ride-through time setting for the optional HPE-supported UPS has been changed to an appropriate value for the system. Service providers can refer to the NonStop i BladeSystem Hardware Installation Manual for these instructions.
Your UPS configuration is supported. See UPS and Data Center Power Configurations (page 188).
Cooling and Humidity Control
Cooling airflow through each enclosure in the system is front-to-back. Because of high heat densities and hot spots, an accurate assessment of airflow around and through the system equipment and a specialized cooling design are essential for reliable system operation. For an airflow assessment, consult with your Hewlett Packard Enterprise cooling consultant or your heating, ventilation, and air conditioning (HVAC) engineer.
NOTE: Failure of site cooling while the system continues to run can cause rapid heat buildup and excessive temperatures within the hardware. Excessive internal temperatures can result in full or partial system shutdown. Ensure that the site's cooling system remains fully operational when the system is running.
Use the Heat Dissipation Specifications and Worksheet NB50000c, NB54000c, and NB56000c (page 74) to calculate the total heat dissipation for the hardware installed in each rack. For air temperature levels at the site, refer to Operating Temperature, Humidity, and Altitude (page 75).
Weight
Total weight must be calculated based on what is in the specific rack, as described in Rack and Enclosure Weights With Worksheet (page 70).
Flooring
NonStop i BladeSystems can be installed either on the site's floor with the cables entering from above the equipment or on raised flooring with power and I/O cables entering from underneath. Because cooling airflow through each enclosure in the racks is front-to-back, raised flooring is not required for system cooling. The site floor structure and any raised flooring (if used) must be able to support the weight of the installed system, individual racks, and enclosures as they are moved into position. To determine the total weight of the installation, refer to Rack and Enclosure Weights With Worksheet (page 70). For your site's floor system, consult with your HPE site preparation specialist or an appropriate floor system engineer. If raised flooring is to be used, the rack is optimized for placement on 24-inch floor panels.
38 Site Preparation Guidelines NB50000c, NB54000c, and NB56000c

39 Dust and Pollution Control
NonStop i BladeSystems do not have air filters. Any computer equipment can be adversely affected by dust and microscopic particles in the site environment. Airborne dust can blanket electronic components on printed circuit boards, inhibiting cooling airflow and causing premature failure from excess heat, humidity, or both. Metallically conductive particles can short circuit electronic components. Tape drives and some other mechanical devices can experience failures resulting from airborne abrasive particles. For recommendations to keep the site as free of dust and pollution as possible, consult with your heating, ventilation, and air conditioning (HVAC) engineer or your site preparation specialist.
Zinc Particulates
Over time, fine whiskers of pure metal can form on electroplated zinc, cadmium, or tin surfaces such as aged raised flooring panels and supports. If these whiskers are disturbed, they can break off and become airborne, possibly causing computer failures or operational interruptions. This metallic particulate contamination is a relatively rare but possible threat. Kits are available to test for metallic particulate contamination, or you can request that your site preparation specialist or HVAC engineer test the site for contamination before installing any electronic equipment.
Space for Receiving and Unpacking a NonStop i BladeSystem
WARNING! A fully populated rack is unstable when moving down the unloading ramp from its shipping pallet. A falling rack can cause serious or fatal personal injury.
Ensure:
There is adequate space to receive and unpack the system from shipping cartons and pallets and to remove equipment using the supplied ramps. For physical dimensions of the system equipment, refer to Dimensions and Weights (page 67).
Enough personnel are present to remove and transport each rack to the installation site.
Tiled or carpeted pathways have a temporary hard floor covering to facilitate moving the racks, which have small casters.
Door and hallway width and height, as well as floor and elevator load limits, accommodate the system equipment, personnel, and lifting or moving devices. If necessary, enlarge or remove any obstructing doorway or wall.
Operational Space for a NonStop i BladeSystem
Ensure:
The site layout plan uses the equipment dimensions, door swing, and service clearances listed in Dimensions and Weights (page 67) and takes advantage of existing lighting and electrical outlets.
Airflow direction and current or future air conditioning ducts are not obstructed. Eliminate any obstructions to equipment intake or exhaust air flow. Refer to Cooling and Humidity Control (page 38).
Adequate space is planned to allow for future equipment.
The site layout plan includes provisions for items such as channels or fixtures used for cable routing, cables, patch panels, and storage areas.
Dust and Pollution Control 39

40 4 System Installation Specifications for NonStop i BladeSystems
Racks
This section provides specifications necessary for system installation planning.
NOTE: All specifications provided in this section assume that each enclosure in the rack is fully populated. The maximum current for each AC service depends on the number and type of enclosures installed in the rack. Power, weight, and heat loads are less when enclosures are not fully populated; for example, a Fibre Channel disk module with fewer disks.
The rack is an EIA standard 19-inch, 42U rack for mounting modular components. The rack comes equipped with front and rear doors and includes a rear extension that makes it deeper than some industry-standard racks. The PDUs described in Power Distribution Unit (PDU) Types for an Intelligent Rack (page 40) are mounted along the rear extension without occupying any U-space in the rack and are oriented inward, facing the components within the rack.
NOTE: For instructions on grounding the Intelligent rack using the HPE Intelligent Rack Ground Bonding Kit (BW89A), ask your service provider to refer to the instructions in the HP Rack Options Installation Guide or to: For instructions on grounding the G2 rack using the Rack Grounding Kit (AF074A), ask your service provider to refer to the instructions in the HPE 10000 G2 Series Rack Options Installation Guide located here:
Power Distribution for NonStop i BladeSystems in Intelligent Racks
NOTE: This section describes power distribution for NonStop i BladeSystems in the Intelligent rack. For information on earlier power configurations used in the G2 rack, see Power Configurations for NonStop BladeSystems in G2 Racks (page 9).
This topic describes: Power Distribution Unit (PDU) Types for an Intelligent Rack (page 40); AC Power Feeds in an Intelligent Rack (page 54)
Power Distribution Unit (PDU) Types for an Intelligent Rack
The Intelligent rack supports Intelligent PDUs (iPDUs) and Modular PDUs. Both PDU types use a core and extension bar design with these characteristics:
PDU cores power the extension bars and c7000 enclosure.
PDU cores are mounted at the lowest possible U location in the rack. Two PDUs are mounted in the same U location (rear and front).
Extension bars are mounted on the rear vertical rails of the rack. Rear-mounted PDU cores connect to the extension bars on the right side of the rack. Front-mounted PDU cores connect to the extension bars on the left side of the rack.
If the rack is equipped with a UPS, the UPS outputs connect to the front-mounted PDU cores.
NOTE: An Intelligent rack with a c7000 enclosure requires a four PDU core configuration. Racks without a c7000 enclosure use a two PDU core configuration.
40 System Installation Specifications for NonStop i BladeSystems

41 This table lists the PDUs, supported configurations, and links to examples that use a 42U rack.
iPDU. Supported PDU configurations: Four PDU cores without UPS; Four PDU cores with UPS; Two PDU cores without UPS; Two PDU cores with UPS. Examples of configurations: Four Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) (page 42); Four Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL) (page 43); Four Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL) (page 44); Two Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) (page 45); Two Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL) (page 46); Two Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL) (page 47).
Modular. Supported PDU configurations: Four PDU cores without UPS; Four PDU cores with UPS; Two PDU cores without UPS; Two PDU cores with UPS. Examples of configurations: Four Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) (page 48); Four Modular PDUs With Single-Phase UPS (NA/JPN and INTL) (page 49); Four Modular PDUs With Three-Phase UPS (NA/JPN and INTL) (page 50); Two Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) (page 51); Two Modular PDU Connections With Single-Phase UPS (NA/JPN and INTL) (page 52); Two Modular PDU Connections With Three-Phase UPS (NA/JPN and INTL) (page 53).
Power Distribution for NonStop i BladeSystems in Intelligent Racks 41

42 Four Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) This illustration shows the power configuration for 4 iPDUs (without UPS) in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 3 Four iPDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) 42 System Installation Specifications for NonStop i BladeSystems

43 Four Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 4 iPDUs and 2 single-phase UPS's in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 4 Four iPDUs With Single-Phase UPS (NA/JPN and INTL) Power Distribution for NonStop i BladeSystems in Intelligent Racks 43

44 Four Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 4 iPDUs and a three-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 5 Four iPDUs With Three-Phase UPS (NA/JPN and INTL) 44 System Installation Specifications for NonStop i BladeSystems

45 Two Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) This illustration shows the connections for two iPDUs in an Intelligent rack without a UPS. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 6 Two Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) Power Distribution for NonStop i BladeSystems in Intelligent Racks 45

46 Two Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 2 iPDUs and a single-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 7 Two Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL) 46 System Installation Specifications for NonStop i BladeSystems

47 Two Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 2 iPDUs and a three-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 8 Two Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL) Power Distribution for NonStop i BladeSystems in Intelligent Racks 47

48 Four Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) This illustration shows the power configuration for four modular PDUs in an Intelligent rack without a UPS. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 9 Four Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) 48 System Installation Specifications for NonStop i BladeSystems

49 Four Modular PDUs With Single-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 4 modular PDUs and 2 single-phase UPS's in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 10 Four Modular PDUs With Single-Phase UPS (NA/JPN and INTL) Power Distribution for NonStop i BladeSystems in Intelligent Racks 49

50 Four Modular PDUs With Three-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 4 modular PDUs and a three-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 11 Four Modular PDUs With Three-Phase UPS (NA/JPN and INTL) 50 System Installation Specifications for NonStop i BladeSystems

51 Two Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) This illustration shows the power configuration for 2 modular PDUs without a UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 12 Two Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) Power Distribution for NonStop i BladeSystems in Intelligent Racks 51

52 Two Modular PDU Connections With Single-Phase UPS (NA/JPN and INTL) This illustration shows the connections for 2 modular PDUs with a single-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 13 Two Modular PDUs With a Single-Phase UPS (NA/JPN and INTL) 52 System Installation Specifications for NonStop i BladeSystems

53 Two Modular PDU Connections With Three-Phase UPS (NA/JPN and INTL) This illustration shows the connections for 2 modular PDUs with a three-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specifications (page 61). Figure 14 Two Modular PDUs With a Three-Phase UPS (NA/JPN and INTL) Power Distribution for NonStop i BladeSystems in Intelligent Racks 53

54 AC Power Feeds in an Intelligent Rack Systems can be ordered with the AC power cords for the PDU installed either: Top: Power and I/O cables are routed from above the rack. Bottom: Power and I/O cables are routed from below the rack. AC Power Feeds... Examples Without UPS Example of Bottom AC Power Feed in an Intelligent Rack (Without UPS) (page 55) Example of Top AC Power Feed in an Intelligent Rack (Without UPS) (page 56) With Single-Phase UPS Example of Top AC Power Feed in an Intelligent Rack (With Single-Phase UPS) (page 57) Example of Bottom AC Power Feed in an Intelligent Rack (With Single-Phase UPS) (page 58) With Three-Phase UPS Example of Top AC Power Feed in an Intelligent Rack (With Three-Phase UPS) (page 59) Example of Bottom AC Power Feed in an Intelligent Rack (With Three-Phase UPS) (page 60) NOTE: The example power feed illustrations on the following pages show the connections to two PDUs and one UPS. If you have a power configuration with four PDUs and two UPS's, you will need to make additional connections. 54 System Installation Specifications for NonStop i BladeSystems

55 Figure 15 Example of Bottom AC Power Feed in an Intelligent Rack (Without UPS) Power Distribution for NonStop i BladeSystems in Intelligent Racks 55

56 Figure 16 Example of Top AC Power Feed in an Intelligent Rack (Without UPS) 56 System Installation Specifications for NonStop i BladeSystems

57 Figure 17 Example of Top AC Power Feed in an Intelligent Rack (With Single-Phase UPS) Power Distribution for NonStop i BladeSystems in Intelligent Racks 57

58 Figure 18 Example of Bottom AC Power Feed in an Intelligent Rack (With Single-Phase UPS) 58 System Installation Specifications for NonStop i BladeSystems

59 Figure 19 Example of Top AC Power Feed in an Intelligent Rack (With Three-Phase UPS) Power Distribution for NonStop i BladeSystems in Intelligent Racks 59

60 Figure 20 Example of Bottom AC Power Feed in an Intelligent Rack (With Three-Phase UPS) Each PDU is wired to distribute the load segments to its receptacles. 60 System Installation Specifications for NonStop i BladeSystems

61 AC Input Power for Intelligent Racks This topic provides power specifications for AC input power in NonStop i BladeSystem Intelligent racks. Power Specifications Region North America/Japan International Phase Single-Phase Three-Phase Single-Phase Three-Phase Refer to... Table 4 (page 61) Table 5 (page 62) Table 6 (page 62) Table 7 (page 63) CAUTION: Be sure the hardware configuration and resultant power loads of each rack within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes an optional rack-mounted UPS. Table 4 North America/Japan Single-Phase Power Specifications R5000 1-phase UPS iPDU 1-phase Modular PDU 1-phase Output Load 4500 W 24 A 24 A Input Voltage V V V Input Connector NEMA L6-30P NEMA L6-30P NEMA L6-30P Output Voltage V N/A N/A Output Connectors 1 x L6-30R 4 x C19 4 x C13 6 x C19 (20 x C13) 4 x C19 (28 x C13) Notes UPS outputs are connected to the compatible PDU inputs. AC Input Power for Intelligent Racks 61

62 Table 5 North America/Japan Three-Phase Power Specifications R12000/3 3-phase UPS iPDU 3-phase Modular PDU 3-phase Output Load 12 kW 24 A 24 A Input Voltage 208V 3P Wye 208V 3P Delta 208V 3P Delta Input Connector IEC P9 NEMA L15-30P NEMA L15-30P Output Voltage 208V 3P Delta N/A N/A Output Connectors 2 x NEMA L15-30R 6 x C19 (20 x C13) 6 x C19 (42 x C13) Notes UPS outputs are connected to the compatible PDU inputs. Table 6 International Single-Phase Power Specifications R5000 1-phase UPS iPDU 1-phase Modular PDU 1-phase Output Load 4500 W 32 A 32 A Input Voltage V V V Input Connector IEC P6 (32 A) IEC P6 (32 A) IEC P6 (32 A) Output Voltage V N/A N/A Output Connectors 1 x IEC R6 4 x C19 4 x C13 6 x C19 (20 x C13) 4 x C19 (28 x C13) Notes UPS outputs are connected to the compatible PDU inputs. 62 System Installation Specifications for NonStop i BladeSystems

63 Table 7 International Three-Phase Power Specifications R12000/3 3-phase UPS iPDU 3-phase Modular PDU 3-phase Output Load 12 kW 16 A/phase 16 A/phase Input Voltage V 3P Wye V 3P Wye V 3P Wye Input Connector IEC P6 IEC309 516P6 IEC309 516P6 Output Voltage 400V 3P Wye N/A N/A Output Connectors 2 x IEC C6 6 x C19 (20 x C13) 6 x C19 (20 A) (42 x C13) Notes UPS outputs are connected to the compatible PDU inputs.
Enclosure AC Input
NOTE: For instructions on grounding the G2 rack using the Rack Grounding Kit (AF074A), ask your service provider to refer to the instructions in the HPE 10000 G2 Series Rack Options Installation Guide.
NOTE: For instructions on grounding the Intelligent rack using the HPE Intelligent Rack Ground Bonding Kit (BW89A), ask your service provider to refer to the instructions in the HP Rack Options Installation Guide or to:
Enclosures (IP CLIM, IOAM enclosure, and so forth) require: Nominal input voltage: 200/208/220/230/240 V AC RMS; Voltage range: V AC; Nominal line frequency: 50 or 60 Hz; Frequency ranges: Hz or Hz; Number of phases: 1. For G6SE enclosure specifications, refer to the G6SE Ethernet Connectivity Guide for NonStop BladeSystems. AC Input Power for Intelligent Racks 63

64 Single-phase c7000 enclosures require: Voltage range: VAC; Nominal line frequency: 50 or 60 Hz; Frequency ranges: Hz or Hz; Number of phases: 1.
Enclosure Power Loads
The total power and current load for a rack depends on the number and type of enclosures installed in it. Therefore, the total load is the sum of the loads for all installed enclosures. In normal operation, the AC power is split equally between the power feeds on the two sides (left and right) of the rack. However, if AC power fails on one side of the rack, the power feed(s) on the remaining side must carry the power for all enclosures in that rack. Power and current specifications for each type of enclosure are: Enclosure Type AC Power Lines per Enclosure Typical Power Consumption (VA) 2 Maximum Power Consumption (VA) 3 Peak Inrush Current (Amps) NB50000c Server Blade: BL860c NonStop server blades NB54000c Server Blade: BL860c i2 server blade 5 (16 GB RAM) BL860c i2 server blade (24 GB RAM) BL860c i2 server blade (32 GB RAM) BL860c i2 server blade (48 GB RAM) BL860c i2 server blade (64 GB RAM) NB56000c Server Blade: BL860c i4 server blade 6 (16 GB RAM) BL860c i4 server blade (32 GB RAM) BL860c i4 server blade (48 GB RAM) 64 System Installation Specifications for NonStop i BladeSystems

65 Enclosure Type AC Power Lines per Enclosure Typical Power Consumption (VA) 2 Maximum Power Consumption (VA) 3 Peak Inrush Current (Amps) BL860c i4 server blade (64 GB RAM) BL860c i4 server blade (96 GB RAM) Common products used by NB50000c, NB54000c, and NB56000c: c7000 R enclosure (3-phase) and c7000 R2 enclosure (3-phase) c7000 R enclosure (single-phase) and c7000 R2 enclosure (single-phase) 8, c7000 R3 enclosure CLIM, G2 or G5 G6 Storage CLIM Gen8 Storage CLIM Gen9 Storage CLIM G6 Networking CLIM (IP, Telco, or IB) Gen8 Networking CLIM, 5 copper ports (IP or Telco) Gen8 Networking CLIM, 3 copper/2 optical ports (IP or Telco) Gen9 Networking CLIM, 5 copper ports (IP or Telco) Gen9 Networking CLIM, 3 copper/2 optical ports (IP or Telco) MSA70 SAS disk enclosure, empty D3700 SAS disk enclosure, empty D2700 SAS disk enclosure, empty SAS HDD 2.5 inches, 10k rpm SAS HDD 2.5 inches, 15k rpm Enclosure Power Loads 65

66 Enclosure Type AC Power Lines per Enclosure Typical Power Consumption (VA) 2 Maximum Power Consumption (VA) 3 Peak Inrush Current (Amps) 200GB SAS 2.5 SSD, Gen GB SAS 2.5 SSD, Gen IOAM enclosure Fibre Channel disk module (no disks) Fibre Channel disk drive Rack-mounted system console (BLCR4) Rack-mounted system console (NSCR20) Rack-mounted keyboard and monitor Maintenance switch (Ethernet) 1 See Three-Phase Power Setup in a G2 Rack, Monitored PDUs (page 43) for c7000 enclosure power feed requirements 2 Typical = measured at 22C ambient temp 3 Maximum = measured at 35C ambient temp 4 All BL860c server blades measured with one 1.6GHz dual-core processor, 16 GB RAM, and one ServerNet Mezzanine Card. 5 All BL860c i2 server blades measured with 1.73GHz quad-core processor and ServerNet mezzanine card. 6 All BL860c i4 server blades measured with 1.73GHz quad-core processor and ServerNet mezzanine card. 7 Measured with 6 power supplies, 10 system fans, 2 GbE2c switches, and 2 OnBoard Administrator Modules. 8 Measured with 6 power supplies, 10 system fans, 2 GbE2c switches, and 2 OnBoard Administrator Modules. 9 The c7000 R3 enclosure is compatible with NonStop i BladeSystem NB54000c and NB56000c. 10 Measured with 6 power supplies, 10 system fans, 2 6125G switches, and 2 OnBoard Administrator Modules. 11 Measured with 10 Fibre Channel ServerNet Adapters installed and active. Each FCSA or G4SA consumes 30W. 12 Maintenance switch has only one AC plug. 66 System Installation Specifications for NonStop i BladeSystems
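The worksheet logic behind the Enclosure Power Loads table can be shown generically. The following Python sketch is illustrative only: the enclosure names and VA figures in the example dictionary are placeholders, not values from the table above. It implements the rule stated at the start of this section, namely that the rack load is the sum of the loads of all installed enclosures and that the feed(s) on one side must be able to carry the whole load if power on the other side fails.

```python
# Illustrative worksheet sketch: total rack power load as the sum of the
# per-enclosure loads. The example VA numbers below are placeholders only;
# use the values from the Enclosure Power Loads table for real planning.

def rack_load_va(enclosures: dict[str, tuple[int, float]]) -> float:
    """enclosures maps an enclosure type to (quantity, maximum VA per enclosure)."""
    return sum(qty * va for qty, va in enclosures.values())

if __name__ == "__main__":
    example_rack = {
        "c7000 enclosure":      (1, 5000.0),   # placeholder VA values
        "Storage CLIM":         (2, 300.0),
        "SAS disk enclosure":   (2, 400.0),
        "Maintenance switch":   (1, 50.0),
    }
    total = rack_load_va(example_rack)
    # In normal operation the load is split across both sides of the rack,
    # but each side's AC feed must be able to carry the full total on its own.
    print(f"total rack load: {total:.0f} VA")
    print(f"required capacity per AC feed side: {total:.0f} VA")
```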

67 Dimensions and Weights Plan View of the 42U Racks Plan View of G2 Rack Plan View of Intelligent Rack Service Clearances for Racks Front: 3 feet (91.4 centimeters) Rear: 3 feet (91.4 centimeters) Dimensions and Weights 67

68 Unit Sizes Enclosure Type Rack c7000 enclosure CLIMs SAS disk enclosures CLIM patch panel G6SE enclosure IOAM enclosure Fibre Channel disk module (FCDM) Maintenance switch (Ethernet) R5000 UPS (single-phase power) ERM (AF464A) used with R5000 UPS R12000/3 UPS (three-phase power) ERM for three-phase power (AF434A) Rack-mounted system console and rack-mounted keyboard and monitor Height (U) U G2 Rack Physical Specifications Item Height Width Depth Weight in. cm in. cm in. cm Rack Depends on the enclosures installed. Refer to Rack and Enclosure Weights With Worksheet (page 70) Front door Left-rear door Right-rear door Shipping (palletized) 68 System Installation Specifications for NonStop i BladeSystems

69 42U Intelligent Rack Physical Specifications Item Height Width Depth Weight in. cm in. cm in. cm Rack Shipping (palletized) Depends on the enclosures installed. Refer to Rack and Enclosure Weights With Worksheet (page 70). Enclosure Dimensions Enclosure Type Height Width Depth in cm in cm in cm c7000 enclosure (1-phase) CLIMs (all models) MSA70 SAS disk enclosure D3700 SAS disk enclosure D2700 SAS disk enclosure CLIM patch panel G6SE enclosure IOAM enclosure Fibre Channel disk module Maintenance switch (Ethernet) Rack-mounted system console with keyboard and display Modular PDU (in Intelligent rack) Dimensions and Weights 69

70 Enclosure Type Height Width Depth in cm in cm in cm Intelligent PDU (in Intelligent rack) R5000 UPS (single-phase power) ERM (AF464A) for single-phase power with R5000 UPS R2000/3 UPS (three-phase power) ERM for three-phase power (AF434A) Rack and Enclosure Weights With Worksheet The total weight of each rack is the sum of the weight of the rack plus each enclosure installed in it. Use this worksheet to determine the total weight: Enclosure Type Number of Enclosures Weight lbs kg Total lbs kg 42U Intelligent rack U G2 Rack (three-phase with modular PDUs) U G2 Rack (three-phase with monitored PDUs) c7000 R enclosure, (-phase power) c7000 R enclosure, (3-phase power) c7000 R2 enclosure (single-phase) System Installation Specifications for NonStop i BladeSystems

71 Enclosure Type Number of Enclosures Weight lbs kg Total lbs kg c7000 R3 enclosure (single-phase) 261 118 G6SE enclosure 68 31 (fully loaded) IOAM enclosure Fibre Channel disk module (FCDM) Blade ServerNet switch 7 3 BL860c Server Blade Processor (NB50000c) 22 10 BL860c i2 Server Blade Processor (NB54000c) 22 10 BL860c i4 Server Blade Processor (NB56000c) 22 10 G2 or G5 CLIM G6 CLIM Gen8 CLIM Gen9 CLIM MSA70 SAS disk enclosure, empty D2700 SAS disk enclosure, empty 38 17 D3700 SAS disk enclosure, empty 38 17 SAS HDD, Gb SAS protocol.20 SAS HDD, Gb SAS protocol GB SAS 2.5 SSD, Gen GB SAS 2.5 SSD, Gen Disk blank..04 CLIM patch panel Dimensions and Weights 71

72 Enclosure Type Number of Enclosures Weight lbs kg Total lbs kg Maintenance switch (Ethernet) 6 3 Rack-mounted system console with keyboard and display 41 18 Modular PDU core (in Intelligent rack) 12 NOTE: One modular PDU core weighs 12 pounds. A 4 modular PDU core configuration in an Intelligent rack would weigh 60 lbs (48 lbs for the PDU cores + 12 lbs for extension bars). 5.4 Extension bar (for Modular PDU in Intelligent rack) Intelligent PDU core (in Intelligent rack) 20 NOTE: One iPDU core weighs 20 lbs. A 4 iPDU core configuration would weigh 100 lbs (80 lbs for iPDU cores + 20 lbs for extension bars). 9.1 Extension bar (for Intelligent PDU in Intelligent rack) 2.5 1.1 R5000 UPS (single-phase power) ERM (AF464A) single-phase power only used with R5000 UPS ERM (AF47A) single-phase power only used with R5500XR UPS R12000/3 UPS (three-phase power) 307 (with batteries) 135 (without batteries) 139.2 (with batteries) 59.8 (without batteries) 72 System Installation Specifications for NonStop i BladeSystems

73 Enclosure Type Number of Enclosures Weight lbs kg Total lbs kg Extended runtime module (ERM) for three-phase power (AF434A) ERM for single-phase power (AF47A) Total Maximum payload weight for the 42U Intelligent rack: 3000 lbs (1360 kg). For examples of calculating the weight for various enclosure combinations, refer to Calculating Specifications for Enclosure Combinations NB50000c (page 77).
Rack Stability
Rack stabilizers are required when you have fewer than four racks bayed together.
NOTE: Rack stability is of special concern when equipment is routinely installed, removed, or accessed within the rack. Stability is addressed through the use of leveling feet, baying kits, fixed stabilizers, and/or ballast.
Use baying kits to bay Intelligent racks to Intelligent racks of the same height. In all cases, a rack cannot be bayed with another rack of a different height. For information about best practices for racks or grounding the racks using a rack ground bonding kit, your service provider can refer to: HP Rack Options Installation Guide or to: HPE 10000 G2 Series Rack Options Installation Guide located here: support/0000_g2_series_rack_manuals
Rack Stability 73
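The weight worksheet above reduces to a simple sum that can be checked against the rack's payload limit. The Python sketch below is illustrative only; apart from the 22 lb server blade figure taken from the worksheet, the component weights in the example are placeholders, and the 3000 lb limit is the 42U Intelligent rack payload maximum quoted above.

```python
# Illustrative sketch of the rack weight worksheet: total weight is the rack
# weight plus the weight of every enclosure installed in it, and the payload
# must stay within the 42U Intelligent rack maximum of 3000 lbs (1360 kg).
# Most example weights below are placeholders, not values from this guide.

MAX_PAYLOAD_LBS = 3000

def rack_weight_lbs(rack_lbs: float, components: dict[str, tuple[int, float]]) -> float:
    """components maps a component type to (quantity, weight in lbs per unit)."""
    payload = sum(qty * lbs for qty, lbs in components.values())
    if payload > MAX_PAYLOAD_LBS:
        raise ValueError(f"payload {payload} lbs exceeds the {MAX_PAYLOAD_LBS} lb maximum")
    return rack_lbs + payload

if __name__ == "__main__":
    example = {
        "c7000 enclosure":      (1, 400.0),   # placeholder weight
        "Server blade":         (8, 22.0),    # per the worksheet above
        "Storage CLIM":         (2, 60.0),    # placeholder weight
        "SAS disk enclosure":   (2, 75.0),    # placeholder weight
        "Maintenance switch":   (1, 6.0),
    }
    print(f"total installed weight: {rack_weight_lbs(600.0, example):.0f} lbs")
```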

74 Environmental Specifications Heat Dissipation Specifications and Worksheet NB50000c, NB54000c, and NB56000c Enclosure Type Number Installed Unit Heat (BTU/hour, Typical Heat Dissipation) Unit Heat (BTU/hour, Maximum Heat Dissipation) Total (BTU/hour) c7000 R enclosure (single-phase) and c7000 R2 enclosure (single-phase) c7000 R3 enclosure (single-phase) BL860c server blade BL860c i2 server blade BL860c i4 server blade G2 or G5 CLIM (Storage or Networking) CLIM, G6 (Storage or Networking) Gen8 Storage CLIM Gen9 Storage CLIM Gen8 Networking CLIM, 5 copper ports (IP or Telco) Gen8 Networking CLIM, 3 copper ports/2 optical ports (IP or Telco) Gen9 Networking CLIM, 5 copper ports (IP or Telco) Gen9 Networking CLIM, 3 copper ports/2 optical ports (IP or Telco) SAS disk enclosure, empty SAS HDD, 2.5 inches, 10k rpm 7 30 SAS HDD, 2.5 inches, 15k rpm 200 GB SAS 2.5 SSD, Gen GB SAS 2.5 SSD, Gen 74 System Installation Specifications for NonStop i BladeSystems

75 Enclosure Type Number Installed Unit Heat (BTU/hour, Typical Heat Dissipation) Unit Heat (BTU/hour, Maximum Heat Dissipation) Total (BTU/hour) IOAM Fibre Channel disk module (FCDM) Maintenance switch (Ethernet) Rack-mounted system console (BLCR4) with keyboard and monitor 1 Measured with 10 Fibre Channel ServerNet adapters installed and active. 2 Measured with 14 disk drives installed and active. 3 Maintenance switch has only one plug.
Operating Temperature, Humidity, and Altitude
Specification Operating Range Recommended Range Maximum Rate of Change per Hour Temperature (IOAM, rack-mounted system console, and maintenance switch) 41 to 95 F (5 to 35 C) 68 to 72 F (20 to 25 C) 9 F (5 C) Repetitive 36 F (20 C) Nonrepetitive Temperature (c7000, CLIMs, SAS disk enclosures, and Fibre Channel disk module) 50 to 95 F (10 to 35 C) - 1.8 F (1 C) Repetitive 5.4 F (3 C) Nonrepetitive Humidity (all except c7000 enclosure) 5% to 80%, noncondensing 40% to 50%, noncondensing 6%, noncondensing Humidity (c7000 enclosure) 20% to 80%, noncondensing 40% to 55%, noncondensing 6%, noncondensing Altitude 2 0 to 10,000 feet (0 to 3,048 meters) 1 Operating and recommended ranges refer to the ambient air temperature and humidity measured 19.7 in. (50 cm) from the front of the air intake cooling vents. 2 For each 1000 feet (305 m) increase in altitude above 10,000 feet (up to a maximum of 15,000 feet), subtract 1.5 F (0.83 C) from the upper limit of the operating and recommended temperature ranges.
Nonoperating Temperature, Humidity, and Altitude
Temperature: Up to 72-hour storage: -40 to 151 F (-40 to 66 C) Up to 6-month storage: -20 to 131 F (-29 to 55 C) - Reasonable rate of change with noncondensing relative humidity during the transition from warm to cold Relative humidity: 0% to 80%, noncondensing Altitude: 0 to 40,000 feet (0 to 12,192 meters) - Environmental Specifications 75
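Footnote 2 of the operating temperature table above defines a simple linear derating of the temperature limits with altitude. The Python sketch below is an illustrative restatement of that rule (the function name is this example's own): subtract 1.5 degrees F for each 1000 feet above 10,000 feet, up to a maximum altitude of 15,000 feet.

```python
# Illustrative sketch of the altitude derating rule in footnote 2 above:
# above 10,000 feet, subtract 1.5 F from the upper temperature limit for
# each additional 1000 feet, up to a maximum altitude of 15,000 feet.

def derated_upper_limit_f(upper_limit_f: float, altitude_ft: float) -> float:
    if altitude_ft > 15000:
        raise ValueError("the derating rule applies up to a maximum of 15,000 feet")
    excess_ft = max(0.0, altitude_ft - 10000)
    return upper_limit_f - 1.5 * (excess_ft / 1000)

if __name__ == "__main__":
    # 95 F operating upper limit at 13,000 feet -> 95 - 1.5 * 3 = 90.5 F
    print(derated_upper_limit_f(95.0, 13000))
```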

76 Cooling Airflow Direction NOTE: Because the front door of the enclosure must be adequately ventilated to allow air to enter the enclosure and the rear door must be adequately ventilated to allow air to escape, do not block the ventilation apertures of a NonStop i BladeSystem. Each NonStop i BladeSystem includes 10 Active Cool fans that provide high-volume, high-pressure airflow at even the slowest fan speeds. Airflow for each NonStop i BladeSystem enters through a slot in the front of the c7000 enclosure and is pulled into the interconnect bays. Ducts allow the air to move from the front to the rear of the enclosure, where it is pulled into the interconnects and the center plenum. The air is then exhausted out the rear of the enclosure.
Blanking Panels
If the NonStop i BladeSystem is not completely filled with components, the gaps between these components can cause adverse changes in the airflow, negatively impacting cooling within the rack. You must cover any gaps with blanking panels. In high density environments, air gaps in the enclosure and between adjacent enclosures should be sealed to prevent recirculation of hot air from the rear of the enclosure to the front.
Typical Acoustic Noise Emissions
84 dB(A) (sound pressure level at operator position)
Tested Electrostatic Immunity
Contact discharge: 8 kV Air discharge: 20 kV
76 System Installation Specifications for NonStop i BladeSystems

77 Calculating Specifications for Enclosure Combinations NB50000c Power and thermal calculations assume that each enclosure in the rack is fully populated. The power and heat load is less when enclosures are not fully populated, such as a SAS disk enclosure with fewer disk drives. AC power calculations assume that the power feed(s) on one side of the rack (left or right) deliver all power to the rack. In normal operation, the power is split equally between the two sides. However, calculate the power load to assume delivery from only one side to allow the system to continue to operate if power to one of the sides fails. Example of Rack Load Calculations NB50000c (page 77) lists the weight, power, and thermal calculations for a system with: One c7000 enclosure Eight NonStop server blades Two G2 or G5 IP or Storage CLIMs Two MSA70 SAS disk enclosures, containing 25 hard disk drives (HDDs) with 3 Gb SAS protocol, 10k rpm per each HDD One IOAM enclosure Two Fibre Channel disk modules One rack-mounted system console with keyboard/monitor units One maintenance switch One CLIM patch panel One 42U rack (single-phase) For a total thermal load for a system with multiple racks, add the heat outputs for all the racks in the system. Table 8 Example of Rack Load Calculations NB50000c Component Quantity Height (U) Weight (lbs) (kg) Total Volt-amps (VA) Typical Power Consumption Maximum Power Consumption BTU/hour 2 Typical Heat Dissipation Maximum Heat Dissipation c7000 R enclosure BL860c Server Blade G2 or G5 CLIM MSA70 SAS disk enclosure, containing 25 HDDs with 3 Gb protocol, 10k rpm per HDD IOAM enclosure Fibre Channel disk module Rack-mounted System Calculating Specifications for Enclosure Combinations NB50000c 77

78 Table 8 Example of Rack Load Calculations NB50000c (continued) Console (BLCR4) (includes keyboard and monitor) Maintenance switch CLIM patch panel Single-phase G2 rack Total 1 Decrease the apparent power VA specification by 300VA for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 3400 VA minus 1200 VA (4 server blades x 300 VA) = 2200 VA apparent power. 2 Decrease the BTU/hour specification by 1023 BTU/hour for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 11601 BTU/hour minus 4092 BTU/hour (4 server blades x 1023 BTU/hour) = 7509 BTU/hour. 78 System Installation Specifications for NonStop i BladeSystems
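The two footnotes above describe the same kind of adjustment: subtract a fixed amount per empty server blade slot from the fully populated rating. The Python sketch below is illustrative only; it uses the 300 VA and 1023 BTU/hour per-slot figures from the footnotes above (the corresponding NB56000c footnotes later in this chapter use 200 VA and 682 BTU/hour, and the same function applies), and the function name is this example's own.

```python
# Illustrative sketch of the empty-slot derating described in the footnotes
# above: a c7000 rating assumes 8 installed server blades, so subtract a
# fixed per-slot amount for every empty blade slot.

def derated_rating(full_rating: float, installed_blades: int, per_empty_slot: float) -> float:
    if not 0 <= installed_blades <= 8:
        raise ValueError("a c7000 enclosure holds up to 8 NonStop server blades")
    empty_slots = 8 - installed_blades
    return full_rating - empty_slots * per_empty_slot

if __name__ == "__main__":
    # NB50000c example from footnote 1: 3400 VA rating with 4 blades installed.
    print(derated_rating(3400, 4, 300))       # -> 2200
    # NB50000c example from footnote 2: 11601 BTU/hour with 4 blades installed.
    print(derated_rating(11601, 4, 1023))     # -> 7509
```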

79 Calculating Specifications for Enclosure Combinations NB54000c Power and thermal calculations assume that each enclosure in the cabinet is fully populated. The power and heat load is less when enclosures are not fully populated, such as a Fibre Channel disk module with fewer disk drives. AC power calculations assume that the power feed(s) on one side of the rack (left or right) deliver all power to the rack. In normal operation, the power is split equally between the two sides. However, calculate the power load to assume delivery from only one side to allow the system to continue to operate if power to one of the sides fails. Example of Rack Load Calculations NB54000c (page 79) lists the weight, power, and thermal calculations for a system with: One c7000 R2 enclosure Eight 48 GB NonStop server blades Two G6 IP or Storage CLIMs Two D2700 SAS disk enclosures, containing 25 hard disk drives (HDDs) with 300 GB, 10k rpm, 6 Gb SAS protocol per HDD One IOAM enclosure Two Fibre Channel disk modules One rack-mounted system console with keyboard/monitor units One maintenance switch One CLIM patch panel One 42U rack (single-phase) For a total thermal load for a system with multiple racks, add the heat outputs for all the racks in the system. Table 9 Example of Rack Load Calculations NB54000c Component Quantity Height (U) Weight (lbs) (kg) Total Volt-amps (VA) Typical Power Consumption Maximum Power Consumption BTU/hour 2 Typical Heat Dissipation Maximum Heat Dissipation c7000 R2 enclosure BL860c i2 Server Blade G6 CLIM D2700 SAS disk enclosure, containing 25 HDDs with 300 GB, 10k rpm, 6 Gb SAS protocol per HDD IOAM enclosure Fibre Channel disk module Calculating Specifications for Enclosure Combinations NB54000c 79

80 Table 9 Example of Rack Load Calculations NB54000c (continued) Rack-mounted System Console (BLCR4) (includes keyboard and monitor) Maintenance switch CLIM patch panel Rack (single-phase with G2 rack) Total 1 Decrease the apparent power VA specification by 300VA for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 3400 VA minus 1200 VA (4 server blades x 300 VA) = 2200 VA apparent power. 2 Decrease the BTU/hour specification by 1023 BTU/hour for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 11601 BTU/hour minus 4092 BTU/hour (4 server blades x 1023 BTU/hour) = 7509 BTU/hour. Calculating Specifications for Enclosure Combinations NB56000c Power and thermal calculations assume that each enclosure in the rack is fully populated. The power and heat load is less when enclosures are not fully populated, such as a Fibre Channel disk module with fewer disk drives. AC power calculations assume that the power feed(s) on one side of the rack (left or right) deliver all power to the rack. In normal operation, the power is split equally between the two sides. However, calculate the power load to assume delivery from only one side to allow the system to continue to operate if power to one of the sides fails. Example of Rack Load Calculations NB56000c (page 81) lists the weight, power, and thermal calculations for a system with: One c7000 R3 enclosure Eight 48 GB NonStop Server Blades Two Gen8 IP or Storage CLIMs Two D2700 SAS disk enclosures, containing 25 hard disk drives (HDDs) with 300 GB, 10k rpm, 6 Gb SAS protocol per HDD One IOAM enclosure Two Fibre Channel disk modules One rack-mounted system console with keyboard/monitor units One maintenance switch One CLIM patch panel One 42U rack (single-phase) 80 System Installation Specifications for NonStop i BladeSystems

81 Table 10 Example of Rack Load Calculations NB56000c Component Quantity Height (U) Weight (lbs) (kg) Total Volt-amps (VA) Typical Power Consumption Maximum Power Consumption BTU/hour 2 Typical Heat Dissipation Maximum Heat Dissipation c7000 R3 enclosure BL860c i4 Server Blade Gen8 IP or Storage CLIMs D2700 SAS disk enclosure, containing 25 HDDs with 300 GB, 10k rpm, 6 Gb SAS protocol per HDD IOAM enclosure Fibre Channel disk module Rack-mounted System Console (BLCR4) (includes keyboard and monitor) Maintenance switch CLIM patch panel Rack (single-phase G2 rack) Total 1 Decrease the apparent power VA specification by 200VA for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 2700 VA minus 800 VA (4 server blades x 200 VA) = 1900 VA apparent power. 2 Decrease the BTU/hour specification by 682 BTU/hour for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 9209 BTU/hour minus 2728 BTU/hour (4 server blades x 682 BTU/hour) = 6481 BTU/hour. For a total thermal load for a system with multiple racks, add the heat outputs for all the racks in the system. Calculating Specifications for Enclosure Combinations NB56000c 81

82 5 System Configuration Guidelines NB50000c, NB54000c, and NB56000c This chapter provides configuration guidelines for a NonStop BladeSystem.
Internal ServerNet Interconnect Cabling
Dedicated Service LAN Cables
The NonStop i BladeSystem can use Category 5e or Category 6, unshielded twisted-pair Ethernet cables for the internal dedicated service LAN and for connections between the application LAN equipment and IP CLIM, Telco CLIM, or IOAM enclosure.
ServerNet Fabric and Supported Connections
The ServerNet X and Y fabrics for the NonStop i BladeSystem are provided by the double-wide ServerNet switch in the c7000 enclosure. Each c7000 enclosure requires two ServerNet switches for fault tolerance, and each switch has four ServerNet connection groups: ServerNet Cluster Connections; ServerNet Fabric Cross-Link Connections; Interconnections between c7000 enclosures; I/O Connections (Standard I/O and High I/O options). The I/O connectivity to each of these groups is provided by one of two ServerNet switch options: either Standard I/O or High I/O.
ServerNet Cluster Connections
At J06.03, only standard ServerNet cluster connections via cluster switches are supported, using connections to both types of ServerNet-based cluster switches (6770 and 6780). There are two small form-factor pluggable (SFP) ports on each c7000 enclosure ServerNet switch: a single-mode fiber (SMF) port (port 12) and a multimode fiber (MMF) port (port 11) for the two ServerNet style connections. Only one of these ports can be used at a time, and only one connection per fabric (from the appropriate ServerNet switch for that fabric in group 100) to the system's cluster fabric is supported. ServerNet cluster connections on NonStop i BladeSystems follow the ServerNet cluster and cable length rules and restrictions. For more information, refer to these manuals: ServerNet Cluster Supplement for NonStop BladeSystems; For 6770 switches and star topologies: ServerNet Cluster Manual; For 6780 switches and layered topology: ServerNet Cluster 6780 Planning and Installation Guide.
BladeCluster Solution Connections
As of J06.04, you can cluster BladeSystems to participate in a BladeCluster Solution. A BladeCluster Solution is composed of network topologies that interconnect NonStop BladeSystems and NonStop NS16000 series systems as nodes, which can also cluster with 6770/6780 ServerNet clusters. For more information, see the BladeCluster Solution Manual.
ServerNet Fabric Cross-Link Connections
A pair of small form-factor pluggable (SFP) modules with standard LC-Duplex connectors is provided to allow for the ServerNet fabric cross-link connection. Connections are made to ports 9 and 10 (labeled X1 and X2) on the c7000 enclosure ServerNet switch.
82 System Configuration Guidelines NB50000c, NB54000c, and NB56000c

83 Interconnections Between c7000 Enclosures A single c7000 enclosure can contain eight NonStop Server Blades. Two c7000 enclosures are interconnected to create a 16 processor system. These interconnections are provided by two quad optic ports: ports 1 and 2 (labeled GA and GB) located on the c7000 enclosure ServerNet switches in the 5 and 7 interconnect bays. The GA port on the first c7000 enclosure is connected to the GA port on the second c7000 enclosure (same fabric) and then likewise the GB port to the GB port. These connections provide eight ServerNet cross-links between the two sets of eight NonStop processors and the ServerNet routers on the c7000 enclosure ServerNet switch.
I/O Connections (Standard and High I/O ServerNet Switch Configurations)
NOTE: If your BladeSystems participate in a BladeCluster, you must use the BladeCluster ServerNet switch. Refer to the BladeCluster Solution Manual.
There are two types of c7000 enclosure ServerNet switches: Standard I/O and High I/O. Each pair of ServerNet switches in a c7000 enclosure must be identical, either Standard I/O or High I/O. However, you can mix ServerNet switches between enclosures. The main difference between the Standard I/O and High I/O switches is the number and type of quad optics modules that are installed for I/O connectivity. The Standard I/O ServerNet switch has three quad optic modules: ports 3, 4, and 8 (labeled GC, EA, and EE) for a total of 12 ServerNet links, as shown following: Figure 21 ServerNet Switch Standard I/O Supported Connections The High I/O ServerNet switch has six quad optic modules: ports 3, 4, 5, 6, 7, and 8 (labeled GC, EA, EB, EC, ED, and EE) for a total of 24 ServerNet links, as shown following. If both c7000 enclosures in a 16 processor system contain High I/O ServerNet switches, there are a total of 48 ServerNet connections for I/O.
ServerNet Fabric and Supported Connections 83
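The GA-to-GA and GB-to-GB rule above is easy to express as a small cable map. The Python sketch below is illustrative only (the label format is this example's own); it lists the inter-enclosure ServerNet connections for a 16 processor system, pairing like-named ports on the same fabric.

```python
# Illustrative sketch: inter-enclosure ServerNet cabling for a 16 processor
# system (two c7000 enclosures). Per the text above, GA connects to GA and
# GB connects to GB on the same fabric (X switch in slot 5, Y switch in slot 7).

FABRIC_SLOT = {"X": 5, "Y": 7}

def interconnect_cables() -> list:
    cables = []
    for fabric, slot in FABRIC_SLOT.items():
        for port_label in ("GA", "GB"):
            a = f"group 100 {fabric}-fabric switch (slot {slot}) port {port_label}"
            b = f"group 101 {fabric}-fabric switch (slot {slot}) port {port_label}"
            cables.append((a, b))
    return cables

if __name__ == "__main__":
    for end_a, end_b in interconnect_cables():
        print(f"{end_a}  <-->  {end_b}")
```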

84 Figure 22 ServerNet Switch High I/O Supported Connections
Connections to IOAM Enclosures
Refer to Legacy Hardware (IOAM, FCDM, S-Series I/O Enclosure) (page 173).
Connections to G6SE Enclosures
For general information on connecting to G6SE enclosures, refer to the G6SE Ethernet Connectivity Guide for NonStop BladeSystems. For detailed information, have your service provider refer to the G6SE Service Provider Supplement for NonStop BladeSystems.
Connections to CLIMs
NOTE: If NonStop S-series I/O enclosures are present, CLIMs cannot be connected to port 3 of the ServerNet switches in a c7000 enclosure.
The NonStop i BladeSystem supports a maximum of 48 CLIMs per 16 processor system. A CLIM uses either one or two connections per ServerNet fabric. The Storage CLIM typically uses two connections per fabric to achieve high disk bandwidth. A networking CLIM (IP, Telco, or IB CLIM) typically uses one connection per ServerNet fabric. For I/O connections, a breakout cable is used on the back panel of the c7000 enclosure ServerNet switch to convert to standard LC-Duplex style connections.
Connections to NonStop S-series I/O Enclosures
Refer to Legacy Hardware (IOAM, FCDM, S-Series I/O Enclosure) (page 173).
84 System Configuration Guidelines NB50000c, NB54000c, and NB56000c
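Putting the numbers above together (12 I/O links per Standard I/O switch, 24 per High I/O switch, two links per fabric for a typical Storage CLIM and one for a networking CLIM), a rough per-fabric link budget check can be sketched as follows. This Python fragment is illustrative only; the function names and the simplifying assumption that all I/O links are available for CLIMs (that is, no IOAM or S-series connections) are this example's own.

```python
# Illustrative sketch: rough per-fabric ServerNet I/O link budget for CLIMs.
# Assumption (this sketch's own): every I/O link on the switch is available
# to CLIMs, i.e. no IOAM or NonStop S-series I/O enclosures are connected.

LINKS_PER_SWITCH = {"standard": 12, "high": 24}   # I/O links per ServerNet switch

def clim_links_needed(storage_clims: int, networking_clims: int) -> int:
    """ServerNet links needed per fabric: 2 per Storage CLIM, 1 per networking CLIM."""
    if storage_clims + networking_clims > 48:
        raise ValueError("a 16 processor system supports a maximum of 48 CLIMs")
    return 2 * storage_clims + 1 * networking_clims

def fits(storage_clims: int, networking_clims: int, switch_type: str, enclosures: int) -> bool:
    """True if the per-fabric link demand fits the I/O links of the installed switches."""
    available = LINKS_PER_SWITCH[switch_type] * enclosures
    return clim_links_needed(storage_clims, networking_clims) <= available

if __name__ == "__main__":
    # 6 Storage CLIMs and 8 networking CLIMs need 20 links per fabric.
    print(clim_links_needed(6, 8))               # -> 20
    print(fits(6, 8, "standard", enclosures=1))  # 12 links available -> False
    print(fits(6, 8, "high", enclosures=2))      # 48 links available -> True
```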

85 Factory-Default Disk Volume Locations for SAS Disk Devices SAS disk enclosures connect to Storage CLIMs via SAS cables. For details on cable types, refer to Cable Types and Connectors (page 2). NOTE: To determine compatibility of Storage CLIM models and SAS disk enclosure models, refer to SAS Disk Enclosure (page 23). This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate disk enclosures: Factory-Default Disk Volume Locations for SAS Disk Devices 85

86 Part II NonStop i BladeSystems Carrier-Grade (CG) CAUTION: Information provided here is for reference and planning. Only authorized service providers with specialized training can install or service the NonStop i BladeSystem.

87 6 NonStop i BladeSystems Carrier Grade Overview This part of the manual describes NonStop Carrier Grade BladeSystems, which are used in Central Office and telecommunication environments. NonStop Carrier Grade BladeSystems use a seismic rack with DC input power and use different hardware components, which are described in NonStop i BladeSystem NB50000c-cg, NB54000c-cg, and NB56000c-cg Hardware (page 90). Characteristics of NonStop Carrier Grade BladeSystems are described in: Characteristics of an NB50000c-cg (page 87) Characteristics of an NB54000c-cg (page 88) Characteristics of an NB56000c-cg (page 89) NOTE: NonStop i BladeSystems (CG) are also part of solution packages provided by the Hewlett Packard Enterprise OpenCall software (OCS) group. This manual does not describe installing or configuring OCS solutions.
Table 11 Characteristics of an NB50000c-cg
Supported RVU: J06.04 and later
Processor/Processor model: Intel Itanium/NSE-M NOTE: NB50000c-cg, NB54000c-cg, and NB56000c-cg server blades cannot coexist in the same system.
Rack: 36U seismic rack
Main memory: 8 GB to 48 GB per logical processor
Maximum processors: 16
Supported processor configurations: 2, 4, 6, 8, 10, 12, 14, or 16
Maximum CLuster I/O Modules (CLIMs): 48 CLIMs (total of all types) in a 16 processor system. Maximum 20 CLIMs for Telco CLIM CG
Minimum CLIMs for fault-tolerance: 2 Storage CLIMs CG, 2 IP CLIMs CG, 2 Telco CLIMs CG
Disk Storage: CG SAS Disk Enclosures (page 92)
Maximum MSA 2U12 CG SAS disk drives per Storage CLIM pair: 48 (in 4 disk enclosures)
Maximum 24CG SAS disk drives per Storage CLIM pair: 96 (in 4 disk enclosures)
5344-SE DAT units (2 DAT 160 drives each): Supported. Connects to Storage CLIM.
System console (optional): AC-powered consoles only. No rack-mounted consoles.
Fibre Channel disk modules/IOAM enclosures/SWANs: Not supported
NonStop S-series I/O enclosures: Supported for token-ring ServerNet connectivity or Signaling System 7 (SS7) connectivity for up to 4 S-series I/O enclosures.
Enterprise Storage System (ESS): Supported but not intended for CG environments
Connection to ServerNet clusters: Supported but not intended for CG environments
Connection to BladeCluster Solution: Supported
87

88 Table 2 Characteristics of an NB54000c-cg
Supported RVU: J06.12 and later
CLIM DVD (Minimum DVD version required for RVU): Refer to the CLuster I/O Module (CLIM) Software Compatibility Guide. NOTE: This file is preinstalled on new NB54000c/NB54000c-cg systems.
Processor/Processor Model: Intel Itanium/NSE-AB. NOTE: NB50000c-cg and NB54000c-cg server blades cannot coexist in the same system.
Rack: 36U seismic rack
Main memory: 16 GB to 64 GB per logical processor (64 GB supported on J06.13 and later RVUs)
Minimum/maximum processors: 2 to 16
Supported platform configuration: 3 configuration options that support 2, 4, 6, 8, 10, 12, 14, or 16 processors. In particular, the Flex Processor Bay configuration option offers processor numbering either sequentially or in even/odd format. For more information, see NonStop i BladeSystems Platform Configurations (page 26).
Maximum CLuster I/O Modules (CLIMs): 48 CLIMs (total of all types) in a 16-processor system. Maximum 20 CLIMs for Telco CLIM CG
Minimum CLIMs for fault-tolerance: 2 Storage CLIMs CG, 2 IP CLIMs CG, 2 Telco CLIMs CG
Disk Storage: CG SAS Disk Enclosures (page 92)
Maximum SAS disk drives per Storage CLIM pair: 48 (in 4 disk enclosures)
Maximum 24CG SAS disk drives per Storage CLIM pair: 96 (in 4 disk enclosures)
5344-SE DAT units (2 DAT 160 drives each): Supported. Connects to Storage CLIM.
System console (optional): AC-powered consoles only. No rack-mounted consoles.
Fibre Channel disk modules/IOAM enclosures/SWANs: Not supported
NonStop S-series I/O enclosures: Supported for token-ring ServerNet connectivity or Signaling System 7 (SS7) connectivity for up to 4 S-series I/O enclosures.
Enterprise Storage System (ESS): Supported but not intended for CG environments
Connection to ServerNet clusters: Supported but not intended for CG environments
Connection to BladeCluster Solution: Supported
Power Regulator: Supported as of J06.14 or later RVUs. For more information, refer to Power Regulator for NonStop i BladeSystems (page 29).
IMPORTANT: There are license considerations for NonStop BladeSystems NB54000c-cg and NB56000c-cg. Refer to the NonStop Core Licensing Guide. 88 NonStop i BladeSystems Carrier Grade Overview

89 Table 3 Characteristics of an NB56000c-cg
Supported RVU: J06.16 and later
CLIM DVD (Minimum DVD version required for RVU): Refer to the CLuster I/O Module (CLIM) Software Compatibility Guide. NOTE: This file is preinstalled on new NB56000c/NB56000c-cg systems.
Processor/Processor Model: Intel Itanium/NSE-AF. NOTE: NB50000c-cg, NB54000c-cg, and NB56000c-cg server blades cannot coexist in the same system.
Rack: 36U seismic rack
Main memory: 16 GB to 96 GB per logical processor (96 GB supported on J06.16 and later RVUs)
Minimum/maximum processors: 2 to 16
Supported platform configuration: 3 configuration options that support 2, 4, 6, 8, 10, 12, 14, or 16 processors. In particular, the Flex Processor Bay configuration option offers processor numbering either sequentially or in even/odd format. For more information, see NonStop i BladeSystems Platform Configurations (page 26).
Maximum CLuster I/O Modules (CLIMs): 48 CLIMs (total of all types) in a 16-processor system. Maximum 20 CLIMs for Telco CLIM CG
Minimum CLIMs for fault-tolerance: 2 Storage CLIMs CG, 2 IP CLIMs CG, 2 Telco CLIMs CG
Disk Storage: CG SAS Disk Enclosures (page 92)
Maximum MSA 2U2 CG SAS disk drives per Storage CLIM pair: 48 (in 4 disk enclosures)
Maximum 24CG SAS disk drives per Storage CLIM pair: 96 (in 4 disk enclosures)
5344-SE DAT units (2 DAT 160 drives each): Supported. Connects to Storage CLIM.
System console (optional): AC-powered consoles only. No rack-mounted consoles.
Fibre Channel disk modules/IOAM enclosures/SWANs: Not supported
NonStop S-series I/O enclosures: Supported for token-ring ServerNet connectivity or Signaling System 7 (SS7) connectivity for up to 4 S-series I/O enclosures.
Enterprise Storage System (ESS): Supported but not intended for CG environments
Connection to ServerNet clusters: Supported but not intended for CG environments
Connection to BladeCluster Solution: Supported
Power Regulator: Supported as of J06.16 or later RVUs. For more information, refer to Power Regulator for NonStop i BladeSystems (page 29).
89

90 NEBS Required Statements
NonStop BladeSystems NB50000c-cg, NB54000c-cg, and NB56000c-cg are designed for installation into a Central Office or similar telecommunications environment. NonStop BladeSystems NB50000c-cg, NB54000c-cg, and NB56000c-cg are suitable for installation as part of a Common Bonding Network (CBN). To ensure proper electrical contact when grounding a BladeSystem CG rack:
Use a ground cable of the same or larger size than the largest DC input power conductors, using a 40°C correction factor.
Use a cable constructed of copper, 75°C minimum rated.
Use only Listed two-hole copper compression-type lugs for the ground connector. Before making any crimp connections, coat bare wire and base connectors with antioxidant.
Use star washers between the lug and rack ground rail to ensure proper ground contact and anti-rotation.
The Battery Return (BR) Input Terminals are considered to be an Isolated DC Return (DC-I). NonStop BladeSystems NB50000c-cg, NB54000c-cg, and NB56000c-cg are suitable for connection to intrabuilding or non-exposed wiring or cabling only. Unshielded, twisted-pair (UTP) cables may be used for NEBS and non-NEBS installation.
WARNING! The intrabuilding port(s) of the equipment or subassembly is suitable for connection to intrabuilding or unexposed wiring or cabling only. The intrabuilding port(s) of the equipment or subassembly MUST NOT be metallically connected to interfaces that connect to the OSP or its wiring. These interfaces are designed for use as intrabuilding interfaces only (Type 2 or Type 4 ports as described in GR-1089-CORE, Issue 5) and require isolation from the exposed OSP cabling. The addition of Primary Protectors is not sufficient protection to connect these interfaces metallically to OSP wiring.
This symbol indicates Hewlett Packard Enterprise systems and peripherals that contain assemblies and components that are sensitive to electrostatic discharge (ESD). Carefully observe the precautions and recommended procedures in this document to prevent component damage from static electricity.
NonStop i BladeSystem NB50000c-cg, NB54000c-cg, and NB56000c-cg Hardware
Seismic Rack (page 91)
c7000 CG Enclosure (page 91)
IP CLIM CG and Telco CLIM CG (page 92)
Storage CLIM CG (page 92)
CG SAS Disk Enclosures (page 92)
5344-SE DAT Tape Unit (page 92)
HPE NonStop 240A Breaker Panel (page 93)
HPE NonStop 80A Fuse Panel CG (page 95)
90 NonStop i BladeSystems Carrier Grade Overview

91 HPE NonStop System Alarm Panel (page 96)
CG Maintenance Switch (page 97)
System Console (page 97)
NonStop S-series CO I/O Enclosures (Optional) (page 97)
TIP: The site-preparation guidelines for carrier-grade NonStop i BladeSystems are the same as described in Site Preparation Guidelines NB50000c, NB54000c, and NB56000c (page 36).
Seismic Rack
See Seismic Rack Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg (page 100).
c7000 CG Enclosure
The c7000 CG enclosure provides functionality and features similar to those of the AC power c7000 Enclosure (page 7), except that it has a 1U seismic brace above the c7000 CG enclosure, a -48V DC power input module, DC power supplies, and three separate ground connections between the chassis and the rack. For connection and grounding instructions, refer your service provider to the NonStop i BladeSystem Hardware Installation Manual.
NOTE: A carrier-grade BladeSystem in a BladeCluster uses the BladeCluster ServerNet (Cluster High I/O) CG switch and the Advanced Cluster Hub (ACH) CG as described in the BladeCluster Solution Manual.
NonStop i BladeSystem NB50000c-cg, NB54000c-cg, and NB56000c-cg Hardware 91

92 IP CLIM CG and Telco CLIM CG
The IP CLIM CG and Telco CLIM CG provide the same functionality and features as the IP CLIM and Telco CLIM, including the RJ45 Cable Management Panel. Notable differences are that the carrier-grade CLIMs have DC power supplies and a ground connection between the CLIM chassis and the rack. For CLIM physical specifications, refer to System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg (page 98).
Storage CLIM CG
The Storage CLIM CG provides functionality and features similar to those of the Storage CLIM, except that the Storage CLIM CG has DC power supplies and a ground connection between the CLIM chassis and the rack. For the physical specifications for this CLIM, refer to System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg (page 98).
CG SAS Disk Enclosures
The CG SAS disk enclosure provides the storage capacity for the Storage CLIM CG. This enclosure holds SAS drives with redundant power and cooling. Some earlier configurations may use a SAS disk enclosure with 12 drives.
Figure 23 Example CG SAS Disk Enclosure, Front View
Figure 24 Example CG SAS Disk Enclosure, Back View
For the physical specifications for this enclosure, see System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg (page 98).
Enterprise Storage System (ESS)
An Enterprise Storage System (ESS) is a collection of magnetic disks, their controllers, and a disk cache in one or more standalone racks. Like the commercial NonStop i BladeSystem AC system, the NonStop i BladeSystem CG system supports connecting to ESS; however, ESS is not intended for a CG power environment.
5344-SE DAT Tape Unit
The 5344-SE DAT tape unit is 1U high and supports two DAT 160 internal tape drives. The tape drives are neither dual powered nor dual ported: each tape drive is independently powered and has its own SAS input. 92 NonStop i BladeSystems Carrier Grade Overview

93 For the physical specifications for this enclosure, see System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg (page 98). For detailed information about this tape unit, see the SE Tape Drive Installation and User's Guide.
HPE NonStop 240A Breaker Panel
The dual-input, 240A breaker panel always occupies the top 2U of the rack and has 14 outputs (7 outputs per rail). The breaker panel has two sides, A and B, which are electrically independent. Power cabling is accessed from the rear of the rack. You must order the breakers for all products except the c7000 enclosure. NonStop i BladeSystem NB50000c-cg, NB54000c-cg, and NB56000c-cg Hardware 93

94 Breaker Panel Specifications for NonStop i BladeSystem CG
3 outputs (maximum 80A rating) per rail connect to a single c7000 CG enclosure. If the c7000 CG is not connected, outputs 1-3 can be used for components with smaller breakers, such as a CLIM (15A), SAS disk enclosure (30A), or alarm panel (2A).
4 outputs (maximum 50A rating) per power rail connect to other CG components.
CAUTION: Maximum 240A output load. Outputs 4-7 are not rated for 80A breakers; do not install 80A breakers in these outputs.
Nominal input voltage: -48/-60 VDC
Operating input voltage: -36 VDC to -72 VDC
Maximum 240A input rating with two inputs
Appropriate site branch circuit protection for maximum total load rating
94 NonStop i BladeSystems Carrier Grade Overview
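The output ratings above can be checked mechanically when laying out a rail. The following Python sketch is illustrative only; the output-to-limit map simply restates the ratings in this section, and the example component assignments and breaker sizes are planner-supplied assumptions, not an HPE-validated configuration.

```python
# Minimal sanity check of a per-rail breaker plan for the 240A breaker panel:
# outputs 1-3 accept breakers up to 80A, outputs 4-7 up to 50A.
MAX_BREAKER_BY_OUTPUT = {1: 80, 2: 80, 3: 80, 4: 50, 5: 50, 6: 50, 7: 50}

def check_breaker_sizes(plan):
    """plan: dict mapping output number (1-7) -> (component, breaker_amps)."""
    problems = []
    for output, (component, amps) in sorted(plan.items()):
        limit = MAX_BREAKER_BY_OUTPUT.get(output)
        if limit is None:
            problems.append(f"output {output} does not exist on this panel")
        elif amps > limit:
            problems.append(f"{component}: {amps}A breaker exceeds the {limit}A limit of output {output}")
    return problems

# Hypothetical rail: c7000 CG on outputs 1-3, a CLIM and a disk enclosure on 4-5.
rail_a = {1: ("c7000 CG", 80), 2: ("c7000 CG", 80), 3: ("c7000 CG", 80),
          4: ("CLIM CG", 15), 5: ("SAS disk enclosure", 30)}
print(check_breaker_sizes(rail_a) or "breaker sizes fit the documented output ratings")
```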

95 HPE NonStop 80A Fuse Panel CG
NOTE: The descriptions in this manual are provided for reference only. For your safety, only authorized Hewlett Packard Enterprise service providers can work with the fuse panel and other DC power components.
The dual-input, 80A fuse panel provides power protection for the NonStop i BladeSystem components. The fuse panel has two sides, A and B, which are electrically independent, distributing power to four individually fused TPA outputs (rated 50A) per rail and to five individually fused GMT outputs (rated 15A) per rail. The fuse panel is accessed from the front of the rack, and power cabling is accessed from the rear of the rack. Fuses are configured by manufacturing per order specifications.
More information
NonStop i BladeSystem Hardware Installation Manual (service providers only)
Fuse Panel Power Specifications for NonStop i BladeSystems CG (page 95)
DC Power Distribution NB50000c-cg, NB54000c-cg, and NB56000c-cg (page 98)
Fuse Panel Power Specifications for NonStop i BladeSystems CG
4 TPA (maximum 50A rating) outputs per power rail
5 GMT (maximum 15A rating) outputs per power rail
Maximum 80A output load
Nominal input voltage: -48/-60 VDC
Operating input voltage: -36 VDC to -72 VDC
Maximum 80A input rating with two inputs
Two DC power lines, 70W per line, 140W maximum
Heat dissipation, unit heat: one line, 239 x (fuse panel load/160) BTU per hour; two lines, 478 BTU per hour maximum
NOTE: The fuse panel load is the total nominal current on both sides.
NonStop i BladeSystem NB50000c-cg, NB54000c-cg, and NB56000c-cg Hardware 95
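A worked example of the fuse-panel heat formula may help. The sketch below is a minimal illustration, assuming the two-line figure scales with load the same way the one-line figure does (the specification only states the one-line formula and the two-line maximum); the 100 A load is an assumed example value, not a specification.

```python
# Evaluate the fuse-panel heat formula: 239 x (fuse panel load / 160) BTU/hour
# per powered line, where "fuse panel load" is the total nominal current on
# both sides.
BTU_PER_WATT_HOUR = 3.412

def fuse_panel_heat_btu(load_amps, lines_powered=2):
    per_line = 239 * (load_amps / 160)        # BTU/hour for one powered line
    return per_line * lines_powered

load = 100                                     # assumed total nominal amps, both sides
heat = fuse_panel_heat_btu(load)
print(f"{heat:.0f} BTU/hour (~{heat / BTU_PER_WATT_HOUR:.0f} W) at {load} A, both lines powered")
# At the maximum 160 A load this evaluates to 478 BTU/hour, matching the stated cap.
```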

96 HPE NonStop System Alarm Panel
The system alarm panel provides SNMP, visual, and audible alarm indicators and relays for generating four levels of alarms for hardware errors in a NonStop i CG system. Alarm levels are:
Critical: A severe service-affecting condition has occurred. Immediate corrective action is required.
Major: A serious disruption of service, a malfunction, or a FRU failure has occurred. Immediate corrective action is required.
Minor: A non-service-affecting condition has occurred. This alarm does not require immediate corrective action.
Power: A power fault has occurred.
Idle: No alarms are active.
For connections, have your service provider refer to the Technical Document for the system or to the NonStop i BladeSystem Hardware Installation Manual. 96 NonStop i BladeSystems Carrier Grade Overview

97 CG Maintenance Switch
The NonStop i Maintenance Switch CG provides the communication network between system components. NonStop i CG systems use the GarrettCom Magnum 6K25 Fiber Switch with two DC power inputs, 24 Ethernet ports labeled A1 through A8, B1 through B8, and C1 through C8, and a ground connection between the rack and the switch chassis.
More information
NonStop X System Hardware Installation Manual
System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg (page 98)
DC Power Distribution NB50000c-cg, NB54000c-cg, and NB56000c-cg (page 98)
System Console
The NonStop i BladeSystem CG uses an AC power System Console (page 24) that is supported in a separate rack or in the desktop model. An AC power system console cannot be mounted in the seismic rack.
NOTE: The NonStop system console must be configured with some ports open. For more information, see the NonStop System Console Installer Guide.
NonStop S-series CO I/O Enclosures (Optional)
Up to four NonStop S-series CO I/O enclosures (Groups 1-4) can be connected to the ServerNet switches in a c7000 enclosure (Group 100 only). For more information, refer to Legacy Hardware (IOAM, FCDM, S-Series I/O Enclosure) (page 73). NonStop i BladeSystem NB50000c-cg, NB54000c-cg, and NB56000c-cg Hardware 97

98 7 System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg This section provides specifications necessary for system installation planning. DC Power Distribution NB50000c-cg, NB54000c-cg, and NB56000c-cg Figure 25 DC Power Distribution for Sample Single Rack System Figure 26 Sample Power Distribution for Seismic Rack and Carrier Grade rack 98 System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg

99 Enclosure Power Loads NB50000c-cg, NB54000c-cg, and NB56000c-cg Enclosure Type Power Lines per Enclosure Typical Power Consumption (VA) Maximum Power Consumption (VA) 2 NB50000c-cg Server Blades: BL860c server blade NB54000c-cg Server Blades: BL860c i2 server blade 4 (6 GB RAM) BL860c i2 server blade (24 GB RAM) BL860c i2 server blade (32 GB RAM) BL860c i2 server blade (48 GB RAM) BL860c i2 server blade (64 GB RAM) NB56000c-cg Server Blades: BL860c i4 server blade 5 (6 GB RAM) BL860c i4 server blade (32 GB RAM) BL860c i4 server blade (48 GB RAM) BL860c i4 server blade (64 GB RAM) BL860c i4 server blade (96 GB RAM) Common products used by NB50000c-cg, NB54000c-cg, and NB56000c-cg CG c7000 R enclosure CG c7000 R2 enclosure CG c7000 R3 enclosure NonStop S-series CO I/O Enclosure Fuse Panel BladeSystem ServerNet Switch (Standard I/O) BladeSystem ServerNet Switch (High I/O) G5 CLIM CG DC Power Distribution NB50000c-cg, NB54000c-cg, and NB56000c-cg 99

100 Enclosure Type Power Lines per Enclosure Typical Power Consumption (VA) Maximum Power Consumption (VA) 2 G6 CLIM CG Gen8 CLIM CG Gen9 Storage CLIM CG Gen9 Networking CLIM CG, 5 copper ports (IP or 5 copper ports (IP or Telco) Gen9 Networking CLIM CG, 3 copper/2 optical ports (IP or Telco) MSA 2U2 CG SAS Disk Enclosure, empty CG SAS disk enclosure, empty SAS 2.5 inches disk drive 46 GB, 5k rpm SAS 2.5 inches disk drive 300 GB, 5k rpm SAS 3.5 in disk drive Breaker panel CG Maintenance switch System Alarm panel SE DAT Unit 2 Typical = measured at 22C ambient temp 2 Maximum = measured at 50C ambient temp All BL860c server blades measured with.6ghz dual-core processor, 6 GB RAM, and ServerNet Mezzanine Card 4 All BL860c i2 server blades measured with.73ghz quad-core processor and ServerNet mezzanine card. 5 All BL860c i4 server blades measured with.73ghz quad-core processor and ServerNet mezzanine card. 6 The CG c7000 R3 enclosure is compatible with NonStop i BladeSystem NB54000c-cg and NB56000c-cg. Dimensions and Weights NB50000c-cg, NB54000c-cg, and NB56000c-cg Seismic Rack Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg All information for connecting, grounding and installing the NonStop i BladeSystem NB50000c-cg, NB54000c-cg, or NB56000c-cg is available to your authorized service provider. Specification U Height Width Depth Cable Entry Value 36U 26.7 in (67.8 cm) 39.4 in (00.0 cm) Top 00 System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg

101 Specification Max Load Rack Weight Packaging and Pallet Weight Including side panels. Value 200 lbs (544.3 kg) 500 lbs (226.8 kg) Original seismic rack is 500 lbs (226.8 kg) Seismic rack R2 is 485 lbs (220 kg ) 200 lbs (90.7 kg) Floor Space Requirements Minimum Recommended Measurement in cm in cm Front (appearance side) clearance Rear (service side) clearance Space between racks <.0 < 2.5 rack pitch Unpacking area: When moving the rack off the shipping pallet, you need approximately 9 feet (2.74 meters) on one side of the rack to allow you to slide the pallet out from under the rack after the rack has been raised on the casters. Unit Sizes NB50000c-cg, NB54000c-cg, and NB56000c-cg Enclosure Type NonStop S-series CO I/O Enclosure Fuse Panel c7000 CG enclosure CLIMs CG CLIM CG Patch Panel MSA 2U2 CG SAS disk enclosure 24CG and M CG SAS disk enclosure Breaker panel CG Maintenance switch System Alarm panel 5344-SE DAT Unit Height (U) Contact your service provider 2 (includes seismic brace) Dimensions and Weights NB50000c-cg, NB54000c-cg, and NB56000c-cg 0

102 Enclosure Dimensions NB50000c-cg, NB54000c-cg, and NB56000c-cg Enclosure Type Height in cm Width in cm Depth in cm NonStop S-series CO I/O Enclosure (including safety cover) Fuse Panel (without safety cover) c7000 CG enclosure CLIM CG, all models CLIM CG RJ45 Patch Panel MSA 2U2 CG SAS disk enclosure CG SAS disk enclosure Breaker panel (including safety cover) 26 (without safety cover) CG Maintenance switch System Alarm panel SE DAT Unit Rack and Enclosure Weights Worksheet NB50000c-cg, NB54000c-cg, and NB56000c-cg The total weight of each rack is the sum the weight of the rack plus each enclosure installed in it. All weights are approximate. Use this worksheet in Rack Weight Worksheet (page 02) to calculate the weight. Table 4 Rack Weight Worksheet Weight Worksheet for Rack Number Enclosure Type Number of Enclosures Weight lb kg Total lb kg Breaker panel Fuse panel System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg

103 Table 4 Rack Weight Worksheet (continued) Weight Worksheet for Rack Number Enclosure Type Number of Enclosures Weight lb kg Total lb kg Alarm panel CG c7000 R enclosure CG c7000 R2 enclosure and R3 enclosure G5 CLIM CG G6 CLIM CG Gen8 CLIM CG Gen9 CLIM CG CLIM CG Patch Panel MSA 2U2 CG SAS disk enclosure (empty) CG SAS disk enclosure (empty) SE DAT unit CG Maintenance switch NonStop S-series CO I/O enclosure assembly (including CIA/SAP, TICs, and air baffle) Total Payload Seismic rack, 36U Total Weight -- Maximum payload weight for the 36U rack: 200 lbs (544.3 kg). 2 Weight of R2 seismic rack -- Environmental Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg To calculate the heat dissipation for each seismic rack, use the worksheet for your system type. Either: Heat Dissipation Worksheet for Seismic Rack NB50000c-cg (page 04) Heat Dissipation Worksheet for Seismic Rack NB54000c-cg (page 05) Heat Dissipation Worksheet for Seismic Rack NB56000c-cg (page 06). -- Environmental Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg 03
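Both the rack weight worksheet above and the heat dissipation worksheets that follow reduce to simple per-enclosure sums. The following Python sketch shows that arithmetic only; apart from the 500 lb empty-rack weight quoted in the seismic rack specifications, every number in it is a placeholder to be replaced with the values from the printed worksheet tables for your hardware models.

```python
# Minimal totaling sketch for the rack worksheets (weight and heat).
RACK_EMPTY_WEIGHT_LB = 500            # 36U seismic rack, per the rack specifications

def rack_totals(enclosures):
    """enclosures: list of (count, weight_lb, max_btu_per_hour) per enclosure type."""
    weight = RACK_EMPTY_WEIGHT_LB + sum(n * w for n, w, _ in enclosures)
    heat = sum(n * btu for n, _, btu in enclosures)
    return weight, heat

# Placeholder entries: (count, weight lb, maximum BTU/hour) -- not HPE figures.
sample = [
    (1, 300, 12000),   # c7000 CG enclosure (placeholder values)
    (4, 60, 1200),     # CLIM CG (placeholder values)
    (2, 70, 900),      # CG SAS disk enclosure (placeholder values)
]
weight_lb, heat_btu = rack_totals(sample)
print(f"rack weight ~{weight_lb} lb, heat load ~{heat_btu} BTU/hour")
```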

104 Heat Dissipation Specifications and Worksheets Carrier Grade Table 5 Heat Dissipation Worksheet for Seismic Rack NB50000c-cg Heat Dissipation Worksheet for Rack Number Enclosure Type Number Installed Typical Unit Heat (BTU/hour) Maximum Unit Heat (BTU/hour) Total (BTU/hour) Alarm Panel 7 34 CG c7000 R enclosure CG c7000 R2 enclosure BL860c server blade (for NB50000c-cg) BladeSystem ServerNet Switch (Standard I/O) BladeSystem ServerNet Switch (High I/O) G5 CLIM CG G6 CLIM CG MSA 2U2 CG SAS disk enclosure (empty) CG SAS disk enclosure (empty) SAS 3.5 in disk drive SAS 2.5 in disk drive 46 GB, 5k rpm 4 28 SAS 2.5 in disk drive 300 GB, 5k rpm SE DAT unit 5 02 CG Maintenance switch 9 36 NonStop S-series CO I/O enclosure Enclosure Type Number Installed Unit Heat (Btu/hour with one line powered) Unit Heat (Btu/hour with both lines powered) Breaker Panel 77 x (breaker panel load /480) Fuse panel x (fuse panel load /60) Total Heat (Btu/hour) System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg

105 Assumes full load per side with breaker panel dissipating 20W per side 2 Breaker panel load is the total nominal current on both sides 3 Maximum value. Assumes full load per side of 70W. 4 Fuse panel load is the total nominal current on both sides Table 6 Heat Dissipation Worksheet for Seismic Rack NB54000c-cg Heat Dissipation Worksheet for Rack Number Enclosure Type Number Installed Typical Unit Heat (BTU/hour) Maximum Unit Heat (BTU/hour) Total (BTU/hour) Alarm Panel 7 34 CG c7000 R enclosure CG c7000 R2 enclosure BL860c i2 server blade (for NB54000c-cg) BladeSystem ServerNet Switch (Standard I/O) BladeSystem ServerNet Switch (High I/O) G5 CLIM CG G6 CLIM CG MSA 2U2 CG SAS disk enclosure (empty) CG SAS disk enclosure (empty) SAS 3.5 in disk drive SAS 2.5 in disk drive 46 GB, 5k rpm 4 28 SAS 2.5 in disk drive 300 GB, 5k rpm SE DAT unit 5 02 CG Maintenance switch 9 36 NonStop S-series CO I/O enclosure Enclosure Type Number Installed Unit Heat (Btu/hour with one line powered) Unit Heat (Btu/hour with both lines powered) Breaker Panel 77 x (breaker panel load /480) Environmental Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg 05

106 Table 6 Heat Dissipation Worksheet for Seismic Rack NB54000c-cg (continued) Heat Dissipation Worksheet for Rack Number Enclosure Type Number Installed Typical Unit Heat (BTU/hour) Maximum Unit Heat (BTU/hour) Total (BTU/hour) Fuse panel x (fuse panel load /60) Total Heat (Btu/hour) Assumes full load per side with breaker panel dissipating 20W per side 2 Breaker panel load is the total nominal current on both sides 3 Maximum value. Assumes full load per side of 70W. 4 Fuse panel load is the total nominal current on both sides Table 7 Heat Dissipation Worksheet for Seismic Rack NB56000c-cg Heat Dissipation Worksheet for Rack Number Enclosure Type Number Installed Typical Unit Heat (BTU/hour) Maximum Unit Heat (BTU/hour) Total (BTU/hour) Alarm Panel 7 34 CG c7000 R enclosure CG c7000 R2 enclosure CG c7000 R3 enclosure BL860c i4 server blade (for NB56000c-cg) BladeSystem ServerNet Switch (Standard I/O) BladeSystem ServerNet Switch (High I/O) G5 CLIM CG G6 CLIM CG Gen8 CLIM CG Gen9 Storage CLIM CG Gen9 Networking CLIM CG, 5 copper ports (IP or 5 copper ports (IP or Telco) Gen9 Networking CLIM CG, 3 copper/2 optical ports (IP or Telco) MSA 2U2 CG SAS disk enclosure, empty System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg

107 Table 7 Heat Dissipation Worksheet for Seismic Rack NB56000c-cg (continued) Heat Dissipation Worksheet for Rack Number Enclosure Type Number Installed Typical Unit Heat (BTU/hour) Maximum Unit Heat (BTU/hour) Total (BTU/hour) 24CG SAS disk enclosure, empty SAS 3.5 in disk drive SAS 2.5 in disk drive 46 GB, 5k rpm 4 28 SAS 2.5 in disk drive 300 GB, 5k rpm SE DAT unit 5 02 CG Maintenance switch 9 36 NonStop S-series CO I/O enclosure Enclosure Type Number Installed Unit Heat (Btu/hour with one line powered) Unit Heat (Btu/hour with both lines powered) Breaker Panel 77 x (breaker panel load /480) Fuse panel x (fuse panel load /60) Total Heat (Btu/hour) Assumes full load per side with breaker panel dissipating 20W per side 2 Breaker panel load is the total nominal current on both sides 3 Maximum value. Assumes full load per side of 70W. 4 Fuse panel load is the total nominal current on both sides Operating Temperature, Humidity, and Altitude Specification Value Temperature range Operating Non operating -5 to 50 ºC ambient temperature -40 to 70 ºC Relative humidity (non-condensing) 2 Operating Non operating 5 to 90% relative humidity 5 to 93% relative humidity Altitude Operating 40 ºC from sea level to 6,000 ft. 30 ºC from 6,000 ft. to 3,000 ft. Seismic resistance Earthquake Zone 4 Temperature ratings are shown for sea level. No direct sunlight allowed. 2 Storage maximum humidity of 93% is based on a maximum temperature of 40 C. Environmental Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg 07

108 Power Load Worksheet NB50000c-cg, NB54000c-cg, and NB56000c-cg Enclosures in Seismic Racks Power and current specifications for seismic racks are the sum of the power and current specifications for each enclosure that are installed in the rack. Use the worksheet in Power Load Worksheet for Seismic Rack (page 08) to calculate the power load for each rack and to indicate the breaker panel and fuse panel configurations. Table 8 Power Load Worksheet for Seismic Rack Power Load Worksheet for Rack Number Enclosure Type Number of Enclosures Maximum Watts per Enclosure Subtotal Maximum Amps per Enclosure at -36V Breaker Panel Output Breaker Size Fuse Panel 2 Fuse Size CG c7000 R enclosure Output, 2, and 3 80A CG c7000 R2 enclosure Output, 2, and 3 80A CG c7000 R3 enclosure Output, 2, and 3 80A G5 CLIM CG A G6 CLIM CG A Gen8 CLIM CG A Gen9 CLIM CG A MSA 2U2 CG SAS disk enclosure A 24CG SAS disk enclosure A DAT Tape A System Alarm Panel A Maintenance Switch A NonStop S-series CO I/O Enclosure N/A N/A N/A N/A Total Breaker panel output options are through 7 2 Fuse panels have a maximum of 4 TPA fuses and 5 GMT fuses 3 Current per feed with 3x power supplies running (one side powered) 4 Current per feed with 3x power supplies running (one side powered) 5 Current per feed with 3x power supplies running (one side powered) 6 Choose breaker panel with breaker size 5A or fuse panel with 5A GMT or TPA fuse 08 System Installation Specifications NB50000c-cg, NB54000c-cg, and NB56000c-cg

109 7 Choose breaker panel with breaker size 5A or fuse panel with 5A GMT or TPA fuse 8 This enclosure is using 30A breaker or 30A TPA fuse 9 These enclosures use 30A breaker or 30A TPA fuse 0 Choose breaker panel with breaker size 2A or fuse panel with GMT fuse Sample Configuration NB50000c-cg, NB54000c-cg, and NB56000c-cg This subsection contains completed planning information for a sample carrier grade BladeSystem that consists of: Eight processors (one c7000 CG enclosure) Two IP CLIM CG enclosures Two Storage CLIM CG enclosures Two Telco CLIM CG enclosures Two MSA 2U2 CG SAS disk enclosure One 5344-SE DAT unit One alarm panel for each seismic rack One CG maintenance switch Table 9 Completed Weight Worksheet for Sample System Rack Number Weight Worksheet for Rack Number Enclosure Type Number of Enclosures Weight lb kg Total lb kg Breaker panel Fuse panel Alarm panel c7000 CG CLIM CG Ethernet Patch Panel G5 CLIM CG MSA 2U2 CG SAS disk enclosure SE DAT unit CG Maintenance switch NonStop S-series CO I/O enclosure assembly (including CIA/SAP, TICs, and air baffle) Total Payload Seismic rack, 36U Total Weight Maximum payload weight for the 36U rack: 200 lbs (544.3 kg). 2 Weight of Original Seismic rack Sample Configuration NB50000c-cg, NB54000c-cg, and NB56000c-cg 09
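The power load worksheet above sums each enclosure's maximum watts and derives worst-case input current at the -36 VDC low end of the operating range (I = P / V). The Python sketch below illustrates only that arithmetic; the wattages are placeholders, not HPE figures, and how the load is split between the A and B feeds should be confirmed with your service provider, since each side is commonly sized to carry the full load for redundancy.

```python
# Minimal sketch of the power-load arithmetic behind the worksheet.
LOW_INPUT_VOLTAGE = 36.0   # volts, worst case of the -36 VDC to -72 VDC range

def rack_power_load(enclosures):
    """enclosures: list of (count, max_watts) tuples for one rack."""
    total_watts = sum(n * w for n, w in enclosures)
    return total_watts, total_watts / LOW_INPUT_VOLTAGE

sample = [(1, 5000), (4, 400), (2, 500)]   # placeholder c7000 / CLIMs / disk enclosures
watts, amps = rack_power_load(sample)
print(f"~{watts:.0f} W maximum, ~{amps:.1f} A at -36 VDC for the rack")
```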

110 8 Support and other resources Accessing Hewlett Packard Enterprise Support For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website: To access documentation and support services, go to the HP Support Center Hewlett Packard Enterprise website: Information to collect Technical support registration number (if applicable) Product name, model or version, and serial number Operating system name and version Firmware version Error messages Product-specific reports and logs Add-on products or components Third-party products or components Accessing updates Some software products provide a mechanism for accessing software updates through the product interface. Review your product documentation to identify the recommended software update method. To download product updates, go to either of the following: HP Support Center Hewlett Packard Enterprise Get connected with updates from HP page: Software Depot website: To view and update your entitlements, and to link your contracts, Care Packs, and warranties with your profile, go to the HP Support Center Hewlett Packard Enterprise More Information on Access to HP Support Materials page: IMPORTANT: Access to some updates might require product entitlement when accessed through the HP Support Center Hewlett Packard Enterprise. You must have a Hewlett Packard Enterprise Passport set up with relevant entitlements. 0 Support and other resources

111 Websites Website Hewlett Packard Enterprise Information Library HP Support Center Hewlett Packard Enterprise Contact Hewlett Packard Enterprise Worldwide Subscription Service/Support Alerts Software Depot Customer Self Repair Insight Remote Support Serviceguard Solutions for HP-UX Single Point of Connectivity Knowledge (SPOCK) Storage compatibility matrix Storage white papers and analyst reports Link Customer self repair Hewlett Packard Enterprise customer self repair (CSR) programs allow you to repair your product. If a CSR part needs to be replaced, it will be shipped directly to you so that you can install it at your convenience. Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized service provider will determine whether a repair can be accomplished by CSR. For more information about CSR, contact your local service provider or go to the CSR website: Remote support Remote support is available with supported devices as part of your warranty, Care Pack Service, or contractual support agreement. It provides intelligent event diagnosis, and automatic, secure submission of hardware event notifications to Hewlett Packard Enterprise, which will initiate a fast and accurate resolution based on your product s service level. Hewlett Packard Enterprise strongly recommends that you register your device for remote support. For more information and device support details, go to the following website: Documentation feedback Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hpe.com). When submitting your feedback, include the document title, part number, edition, and publication date located on the front cover of the document. For online help content, include the product name, product version, help edition, and publication date located on the legal notices page. Websites

112 A Cables for BladeSystems Cable Types and Connectors Although a considerable cable length can exist between the modular enclosures in the system, Hewlett Packard Enterprise recommends that cable length between each of the enclosures be as short as possible. NOTE: For G6SE cable types and cable specifications, refer to the G6SE Ethernet Connectivity Guide for NonStop BladeSystems. The following table lists the available cables and their lengths: Connection From... c7000 enclosure to c7000 enclosure (interconnection) Cable Type MMF Connectors MTP-MTP IB port on G6 IB CLIM to customer-supplied IB switch ETH port on Storage CLIM with encryption to customer-supplied switch (only supported on Storage CLIMs with encryption.) c7000 ServerNet switch to c7000 ServerNet switch (cross-link connection) c7000 enclosure to CLIM or IOAM Copper or Fiber CAT 6 UTP MMF MMF QSFP RJ-45 to RJ-45 LC-LC MTP-4LC c7000 ServerNet switch to S-series I/O enclosure MMF MTP-SC Maintenance LAN interconnect CAT 5e UTP RJ-45 to RJ-45 2 Cables for BladeSystems

113 Connection From... Maintenance LAN interconnect Cable Type CAT 6 UTP Connectors RJ-45 to RJ-45 G2 and G5 CLIMs only SAS disk enclosure to SAS disk enclosure (daisy-chain) G2 and G5 CLIMs only CG SAS disk enclosure to MSA 2U2 CG SAS disk enclosure (daisy-chain) G5 Storage CLIM to MSA70 SAS disk enclosure Copper Copper Copper SFF-8088 to SFF-8088 SFF-8470 to SFF-8470 SFF-8470 to SFF-8470 G5 Storage CLIM CG to MSA 2U2 CG SAS disk enclosure G5 Storage CLIM CG to CG SAS tape G6 Storage CLIM CG to CG SAS tape G6 Storage CLIM to D2700 SAS disk enclosure Copper Copper Copper Copper SFF-8470 to SFF-8470 SFF-8470 to SFF-8470 SFF-8470 to SFF-8470 SFF-8088 to SFF-8088 CG G6 Storage CLIM to P2000 SAS disk enclosure Copper SFF-8088 to SFF-8088 CG G6 Storage CG to MSA 2U2 CG SAS disk enclosure Copper SFF-8470 to SFF-8088 G6 Storage CLIM to CG SAS tape Copper SFF-8470 to SFF-8088 Gen8 Storage CLIM to D2700 SAS disk enclosure Gen9 Storage CLIM to D3700 SAS disk enclosure CG Gen8 Storage CLIM to P2000 SAS disk enclosure CG Gen9 Storage CLIM to P2000 G4 SAS disk enclosure Copper Copper Copper Copper SFF-8088 to SFF-8088 SFF-8644 (HD) to SFF-8644 (HD) SFF-8088 to SFF-8088 SFF-8644 (HD) to SFF-8088 Cable Types and Connectors 3

114 Connection From... Cable Type Connectors CG Gen8 Storage CLIM to CG SAS tape Copper SFF-8470 to SFF-8088 Storage CLIM FC HBA to: ESS FC switch FC tape MMF LC-LC FCSA in IOAM Enclosure to: ESS FC switch MMF LC-LC 4 Cables for BladeSystems
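For quick planning checks, a few of the unambiguous rows of the cable table above can be captured in a small lookup, as in the Python sketch below. The sketch is illustrative and deliberately incomplete; extend it from the full table for your configuration rather than treating it as an authoritative list.

```python
# Minimal cable/connector lookup built from selected rows of the table above.
CABLE_TABLE = {
    "c7000 enclosure to c7000 enclosure (interconnection)": ("MMF", "MTP-MTP"),
    "c7000 ServerNet switch to c7000 ServerNet switch (cross-link)": ("MMF", "LC-LC"),
    "Maintenance LAN interconnect": ("CAT 5e UTP", "RJ-45 to RJ-45"),
    "Storage CLIM FC HBA to ESS, FC switch, or FC tape": ("MMF", "LC-LC"),
}

def cable_for(connection):
    cable, connectors = CABLE_TABLE[connection]
    return f"{connection}: {cable} cable, {connectors} connectors"

print(cable_for("Maintenance LAN interconnect"))
```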

115 B Default Startup Characteristics Each NonStop i BladeSystem ships with these default startup characteristics: NOTE: The configurations documented here are typical for most sites. Your system load paths might be different, depending upon how your system is configured. To determine the system disk configuration to use in the OSM Low-Level Link, refer to the system attributes in the OSM Service Connection. You can select a system disk configuration to use for the system load from the Configuration drop-down menu in the System Load dialog box in the OSM Low-Level Link. $SYSTEM disks residing in either SAS disk enclosures or FCDM enclosures: SAS Disk Enclosures Systems with only two to three Storage CLIMs and two SAS disk enclosures with the disks in these locations: CLIM X Location SAS Disk Enclosure Path Group Module Slot Port Fiber Enclosure Bay Primary Backup Mirror Mirror-Backup Systems with at least four Storage CLIMs and two SAS disk enclosures with the disks in these locations: CLIM X Location SAS Disk Enclosure Path Group Module Slot Port Fiber Enclosure Bay Primary Backup Mirror Mirror-Backup FCDM Enclosures Systems with one IOAM enclosure, two FCDMs, and two FCSAs with the disks in these locations: IOAM FCSA Fibre Channel Disk Module Path Group Module Slot SAC Shelf Bay Primary 0 2 Backup 0 3 Mirror Mirror-Backup

116 Systems with two IOAM enclosures, two FCDMs, and two FCSAs with the disks in these locations:
IOAM FCSA Fibre Channel Disk Module Path Group Module Slot SAC Shelf Bay Primary 0 2 Backup 3 Mirror 3 2 Mirror-Backup
Systems with one IOAM enclosure, two FCDMs, and four FCSAs with the disks in these locations:
IOAM FCSA Fibre Channel Disk Module Path Group Module Slot SAC Shelf Bay Primary 0 2 Backup 0 3 Mirror Mirror-Backup
Systems with two IOAM enclosures, two FCDMs, and four FCSAs with the disks in these locations:
IOAM FCSA Fibre Channel Disk Module Path Group Module Slot SAC Shelf Bay Primary 0 2 Backup 2 Mirror 3 2 Mirror-Backup
Configured system load paths
Enabled command interpreter input (CIIN) function
If the automatic system load is not successful, additional paths for loading are available in the boot task: if the load fails on one load path, the system load task attempts to use another path and keeps trying until all possible paths have been used or the system load is successful. These 16 paths are available for loading and are listed in the order of their use by the system load task:
Load Path Description Source Disk Destination Processor ServerNet Fabric
1 Primary $SYSTEM-P 0 X
2 Primary $SYSTEM-P 0 Y
3 Backup $SYSTEM-P 0 X
116 Default Startup Characteristics

117 Load Path Description Source Disk Destination Processor ServerNet Fabric (continued)
4 Backup $SYSTEM-P 0 Y
5 Mirror $SYSTEM-M 0 X
6 Mirror $SYSTEM-M 0 Y
7 Mirror-Backup $SYSTEM-M 0 X
8 Mirror-Backup $SYSTEM-M 0 Y
9 Primary $SYSTEM-P 1 X
10 Primary $SYSTEM-P 1 Y
11 Backup $SYSTEM-P 1 X
12 Backup $SYSTEM-P 1 Y
13 Mirror $SYSTEM-M 1 X
14 Mirror $SYSTEM-M 1 Y
15 Mirror-Backup $SYSTEM-M 1 X
16 Mirror-Backup $SYSTEM-M 1 Y
The command interpreter input file (CIIN) is automatically invoked after the first processor is loaded. The CIIN file shipped with new systems contains the TACL RELOAD * command, which loads the remaining processors.
For default configurations of the Fibre Channel ports, Fibre Channel disk modules, and load disks, refer to Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module (page 80). 117
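The ordering of the 16 default load paths follows a regular pattern (each path description is tried on both fabrics for processor 0, then the sequence repeats for processor 1). The Python sketch below simply regenerates the table above for cross-checking a planning spreadsheet; it is not how OSM itself represents or selects load paths.

```python
# Regenerate the 16 default load paths in the documented order.
from itertools import product

PATH_KINDS = [("Primary", "$SYSTEM-P"), ("Backup", "$SYSTEM-P"),
              ("Mirror", "$SYSTEM-M"), ("Mirror-Backup", "$SYSTEM-M")]

def default_load_paths():
    paths = []
    for processor, (description, source_disk) in product((0, 1), PATH_KINDS):
        for fabric in ("X", "Y"):
            paths.append((len(paths) + 1, description, source_disk, processor, fabric))
    return paths

for path in default_load_paths():
    print("{:>2}  {:<13} {:<9} processor {}  fabric {}".format(*path))
```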

118 C Site Power Cables Carrier Grade
Site cables and lugs must be provided by the customer or the installation provider for the customer. Hewlett Packard Enterprise does not provide these items to the customer.
IMPORTANT: The power cables should be assembled by a certified electrician only.
Required Documentation
The information in this appendix assumes the customer or the customer's installation provider is already familiar with and has access to the National Fire Protection Association's (NFPA's) published National Electrical Code Handbook 2005 (NFPA 70), specifically either of these tables from the handbook, depending on the cable rating that is selected at the site (60°C-90°C vs. 150°C-250°C jacketing):
Allowable Ampacities of Insulated Conductors Rated 0 Through 2000 Volts, 60°C Through 90°C, Not More Than Three Current-Carrying Conductors in Raceway, Cable, or Earth, Based on Ambient Temperature of 30°C (Table 310.16)
Allowable Ampacities of Insulated Conductors Rated 0 Through 2000 Volts, 150°C Through 250°C (302°F Through 482°F), Not More Than Three Current-Carrying Conductors in Raceway, Cable, or Earth, Based on Ambient Temperature of 40°C (104°F) (Table 310.18)
Requirements for Site Power or Ground Cables
The customer must ensure the site power cables meet the ampacity requirements for a NEBS 50°C environment.
CAUTION: Use Table 310.16 or Table 310.18 along with Table 310.15(B)(2)(a). See Required Documentation (page 118) to verify that the site power input cables meet the ampacity requirements for a NEBS 50°C environment.
To ensure the site power input cables you plan to use meet the ampacity requirements for a NEBS 50°C environment:
Depending on the site's cable rating requirements, use Table 310.16 or Table 310.18 (60°C-90°C or 150°C-250°C jacketing, respectively). See Required Documentation (page 118) to determine the correct cable size, based on the type and grade of cable you plan to use.
To reach the NEBS 50°C environment, apply the correction factor within each table for the 50°C ambient temperature range (Table 310.16 is 46°C-50°C; Table 310.18 is 41°C-50°C) to the ampacity values in the table to obtain the corrected maximum ampacity that is acceptable in the NEBS 50°C environment.
If more than three current-carrying conductors will be run vertically through the sides of the rack, or lie in a site's raceway for longer than 24 inches, apply the correction factor from Table 310.15(B)(2)(a) to the ampacity values from Table 310.16 or Table 310.18 at 50°C.
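The correction procedure above is multiplicative, as the Python sketch below illustrates. The numeric inputs are placeholders chosen only to show the arithmetic; the actual base ampacity and correction factors must be taken from the NEC tables for the specific conductor type, insulation rating, and bundling at your site.

```python
# Minimal sketch of the ampacity correction: base table ampacity, times the
# 50C ambient correction factor, times (if applicable) the adjustment factor
# for more than three current-carrying conductors in a raceway.
def corrected_ampacity(base_ampacity, ambient_correction, bundling_adjustment=1.0):
    return base_ampacity * ambient_correction * bundling_adjustment

base = 85.0          # placeholder base ampacity from the selected NEC table
ambient = 0.82       # placeholder 46C-50C correction factor for the chosen insulation
bundling = 0.8       # placeholder adjustment for 4-6 current-carrying conductors
print(f"corrected ampacity: {corrected_ampacity(base, ambient, bundling):.1f} A")
```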

119 D Power Configurations for NonStop BladeSystems in G2 Racks NonStop i BladeSystem Power Distribution G2 Rack Your NonStop i BladeSystem in a G2 rack uses one of these power distribution types: NonStop i BladeSystem Single-Phase Power Distribution G2 Rack (page 9) NonStop i BladeSystem Three-Phase Power Distribution in a G2 Rack (page 43) NonStop i BladeSystem Single-Phase Power Distribution G2 Rack CAUTION: There is an upgrade limitation with single-phase PDUs. Customers using a single-phase power distribution might have limited upgrade capabilities. The present single-phase rack configuration supports a fully-loaded c7000 enclosure NonStop i BladeSystem. However, future systems might be limited in a single-phase configuration. For further information, check with your Hewlett Packard Enterprise service provider. These power configurations require careful attention to phase load balancing. For more information, refer to Phase Load Balancing (page 65). North America/Japan (NA/JPN) and International (INTL) are the supported regions for single-phase NonStop BladeSystems. For these regions, there are three different versions of the rack level PDU, depending on whether you are using modular or monitored PDUs. For c7000 single-phase power setup details, refer to the instructions for your PDU: Single-Phase Power Setup, Monitored PDUs G2 Rack (page 20) Single-Phase Power Setup in a G2 Rack, Modular PDU (page 33) The NonStop BladeSystem's single-phase, c7000 enclosure contains an AC Input Module that provides N + N redundant power distribution. This single-phase power module has six C9 power supply (PS) receptacles that provide direct AC power feeds to the c7000 enclosure: One group of three c7000 power feeds connects to a rack PDU that is powered from the main power grid. Another group of three power feeds connects to a rack PDU that is powered from either the backup power grid or from one of the single-phase UPS's installed in the rack. These PDUs and the rack UPS are dedicated to the c7000. For the other components in the rack, two other PDUs are available (main and backup) and a second, optional rack UPS is available. NonStop i BladeSystem Power Distribution G2 Rack 9

120 Single-Phase Power Setup, Monitored PDUs G2 Rack Power set up depends on your region and whether your configuration includes a UPS. For North America/Japan (NA/JPN): NA/JPN: Monitored Single-Phase Power Setup in a G2 Rack (With Rack-Mounted R5000 UPS) (page 2) NA/JPN: Monitored Single-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS) (page 20) For International (INTL): INTL: Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS (page 27) INTL: Monitored Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS (page 28) NA/JPN: Monitored Single-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS) To setup the single-phase power feed connections as shown in Figure 27:. Connect four single-phase 30A power feeds to the four AF94A PDU NEMA L6-30P (30A, 3 wire) input connectors. 2. Connect the c7000 power supply (PS) C9 cables starting on the right-side of the c7000 input module: a. Connect the C9 cable from PS to the top right PDU, L3-3 receptables. b. Connect the C9 cable from PS2 to the bottom right PDU, L-3 receptables. c. Connect the C9 cable from PS3 to the bottom right PDU, L2-3 receptables. 3. Connect the c7000 power supply cables from the left-side of the c7000 input module: a. Connect the C9 cable from PS4 to the left PDU, L3-3 receptables. b. Connect the C9 cable from PS5 to the bottom left PDU, L-3 receptables. c. Connect the C9 cable from PS6 to the bottom left PDU, L2-3 receptables. 20 Power Configurations for NonStop BladeSystems in G2 Racks

121 Figure 27 North America/Japan Monitored Single-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS) NA/JPN: Monitored Single-Phase Power Setup in a G2 Rack (With Rack-Mounted R5000 UPS) To setup the power feed connections as shown in Figure 28:. Connect two single-phase 30A power feeds to the rack-mounted UPS L6-30P input connector on each R5000 UPS. 2. Connect two single-phase 30A power feeds to the AF94A PDU NEMA L6-30P (30A, 3 wire) input connectors. 3. Connect the c7000 power supply (PS) C9 cables starting on the right-side of the c7000 input module: a. Connect the C9 cable from PS to the top right PDU, L3-3 receptables. b. Connect the C9 cable from PS2 to the bottom right PDU, L-3 receptables. c. Connect the C9 cable from PS3 to the bottom right PDU, L2-3 receptables. NonStop i BladeSystem Power Distribution G2 Rack 2

122 4. Connect the c7000 power supply cables from the left-side of the c7000 input module: a. Connect the C9 cable from PS4 to the left PDU, L3-3 receptables. b. Connect the C9 cable from PS5 to the bottom left PDU, L-3 receptables. c. Connect the C9 cable from PS6 to the bottom left PDU, L2-3 receptables. Figure 28 North America/Japan Monitored Single-Phase Power Setup in a G2 Rack (With Rack-Mounted R5000 UPS) NA/JPN: Monitored Single-Phase in a G2 Rack PDU Description Four half-height, single-phase power distribution units (PDUs) are installed to provide redundant power outlets for the c7000 enclosure and the components mounted in the rack. The PDUs are on opposite sides facing each other and are on swivel brackets that can be rotated to allow for servicing of components. Each PDU is inches long and has 27 AC receptacles, three circuit breakers, and an AC power cord. The NA/JPN single-phase PDUs are oriented with the AC power cords exiting the rack at either the top or bottom rear corners of the rack, depending on the site's power feed needs. 22 Power Configurations for NonStop BladeSystems in G2 Racks

123 For information about specific PDU input and output characteristics for these PDUs, which are factory-installed in racks, refer to NA/JPN: Input and Output Power Characteristics in a G2 Rack, Single-Phase Monitored PDUs and c7000s (page 26). Each single-phase PDU in a rack has: 24 AC receptacles per PDU (2 per segment) - IEC 320 C3 0A receptacle type 3 AC receptacles per PDU ( per segment) - IEC 320 C9 6A receptacle type 3 circuit-breakers Each PDU distributes site single-phase power to single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the rack. NA/JPN: AC Power Feeds in a G2 Rack, Monitored Single-Phase PDUs The AC power feed cables for the PDUs are mounted to exit the rack at either the top or bottom rear corners of the rack depending on what is ordered for the site's power feed. Figure 29 shows the power feed cables on the single-phase NA/JPN PDUs with AC feed at the bottom of the rack and the AC power outlets along the PDU. The single-phase power outlets face each other on opposite sides in the rack and are on swivel brackets that can be rotated to allow for servicing of components. NOTE: For visual clarity, the single-phase AC power feed illustrations show the PDUs facing outward as if their swivel brackets have been rotated. Typically, these PDUs would be on opposite sides facing each other. NonStop i BladeSystem Power Distribution G2 Rack 23

124 Figure 29 Bottom AC Power Feed, Single-Phase NA/JPN Monitored PDUs Figure 30 shows the power feed cables on the single-phase PDUs with AC feed at the top of the rack. NOTE: For visual clarity, the single-phase AC power feed illustrations show the PDUs facing outward as if their swivel brackets have been rotated. Typically, these PDUs would be on opposite sides facing each other. 24 Power Configurations for NonStop BladeSystems in G2 Racks

125 Figure 30 Top AC Power Feed in a G2 Rack NA/JPN Single-Phase Monitored PDUs NonStop i BladeSystem Power Distribution G2 Rack 25

126 NA/JPN: Input and Output Power Characteristics in a G2 Rack, Single-Phase Monitored PDUs and c7000s The rack includes four half-height NA/JPN PDUs with these power characteristics: PDU input characteristics 200V to 240V AC, single-phase, 30A RMS, 3-wire 6.5 feet (2 m) attached power cord 50/60Hz NEMA L6-30P input plug as shown below PDU output characteristics 3 circuit-breaker-protected 20A load segments 24 AC receptacles per PDU (2 per segment) - IEC 320 C3 0A receptacle type 3 AC receptacles per PDU ( per segment) - IEC 320 C9 6A receptacle type NA/JPN: Branch Circuits and Circuit Breakers for a G2 Rack (With Single-Phase Monitored PDUs) racks for NonStop BladeSystems that use a single-phase power configuration contain four half-height PDUs. In racks without the optional rack-mounted UPS, each of the four NA/JPN PDUs requires a separate branch circuit of these ratings: Region North America/Japan (NA/JPN) Volts (Phase-to-Phase) 200 to 240 Amps (see following CAUTION ) 30 CAUTION: Be sure the hardware configuration and resultant power loads of each rack within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted R5000 UPS. 26 Power Configurations for NonStop BladeSystems in G2 Racks

127 NA/JPN: Circuit Breaker Ratings in a G2 Rack for Single-Phase R5000 UPS These ratings apply to systems with the optional rack-mounted R5000 UPS that is used for a single-phase NA/JPN power configuration: Version Operating Voltage Settings Power Out (VA/Watts) Input Plug UPS Input Rating North America/Japan (NA/JPN) 200/208 2, 220, 230, /4500 L6-30P Dedicated 30 Amp The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS. 2 Factory-default setting For further information and specifications on the R5000 UPS, refer to the HPE R5000 UPS User Guide. For further information on the UPS Network Module supported for the R5000 UPS, refer to the HPE UPS Network Module User Guide. To locate manuals, go to: International (INTL) Monitored Single-Phase Power Configuration in a G2 Rack Power set up depends on your single-phase International power configuration type: INTL: Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS (page 27) INTL: Monitored Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS (page 28) INTL: Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS To setup the power feed connections as shown in Figure 3:. Connect two single-phase 32A power feeds to the rack-mounted UPS IEC P6 (32A, 3 wire/2 pole) input connector on each R5000 UPS. 2. Connect two single-phase 32A power feeds to the AF95A PDU IEC P6 (32A, 3 wire/2 pole). 3. Connect the c7000 power supply (PS) C9 cables starting on the right-side of the c7000 input module: a. Connect the C9 cable from PS to the top right PDU, L3-3 receptables. b. Connect the C9 cable from PS2 to the bottom right PDU, L-3 receptables. c. Connect the C9 cable from PS3 to the bottom right PDU, L2-3 receptables. 4. Connect the c7000 power supply cables from the left-side of the c7000 input module: a. Connect the C9 cable from PS4 to the left PDU, L3-3 receptables. b. Connect the C9 cable from PS5 to the bottom left PDU, L-3 receptables. c. Connect the C9 cable from PS6 to the bottom left PDU, L2-3 receptables. NonStop i BladeSystem Power Distribution G2 Rack 27

128 Figure 3 International Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS INTL: Monitored Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS To setup the single-phase power feed connections as shown in Figure 32:. Connect four single-phase 32A power feeds to the four AF95A PDU IEC P6 (32A, 3 wire/2 pole) input connectors. 2. Connect the c7000 power supply (PS) C9 cables starting on the right-side of the c7000 input module: a. Connect the C9 cable from PS to the top right PDU, L3-3 receptables. b. Connect the C9 cable from PS2 to the bottom right PDU, L-3 receptables. c. Connect the C9 cable from PS3 to the bottom right PDU, L2-3 receptables. 3. Connect the c7000 power supply cables from the left-side of the c7000 input module: a. Connect the C9 cable from PS4 to the left PDU, L3-3 receptables. b. Connect the C9 cable from PS5 to the bottom left PDU, L-3 receptables. c. Connect the C9 cable from PS6 to the bottom left PDU, L2-3 receptables. 28 Power Configurations for NonStop BladeSystems in G2 Racks

129 Figure 32 International Monitored Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS INTL: Single-Phase Monitored in a G2 Rack PDU Description Four half-height, single-phase power distribution units (PDUs) are installed to provide redundant power outlets for the c7000 enclosure and the components mounted in the rack. The PDUs are on opposite sides facing each other and are on swivel brackets that can be rotated to allow for servicing of components. Each PDU is inches long and has 27 AC receptacles, three circuit breakers, and an AC power cord. The INTL single-phase PDUs are oriented with the AC power cords exiting the rack at either the top or bottom rear corners of the rack, depending on the site's power feed needs. For information about specific PDU input and output characteristics for these PDUs, which are factory-installed in racks, refer to INTL: Input and Output Power Characteristics in a G2 Rack Single-Phase Monitored PDUs (page 30). NonStop i BladeSystem Power Distribution G2 Rack 29

130 Each single-phase PDU in a rack has: 24 AC receptacles per PDU (2 per segment) - IEC 320 C3 0A receptacle type 3 AC receptacles per PDU ( per segment) - IEC 320 C9 6A receptacle type 3 circuit-breakers Each PDU distributes site single-phase power to single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the rack. INTL: Input and Output Power Characteristics in a G2 Rack Single-Phase Monitored PDUs The International PDU power characteristics are: PDU input characteristics 200V to 240V AC, single-phase, 32A RMS, 3-wire 6.5 feet (2 m) attached harmonized power cord 50/60Hz IEC P6 3-pin, 32A input plug as shown below: PDU output characteristics 3 circuit-breaker-protected 20A load segments 24 AC receptacles per PDU (2 per segment) - IEC 320 C3 0A receptacle type 3 AC receptacles per PDU ( per segment) - IEC 320 C9 6A receptacle type INTL: AC Power Feeds in a G2 Rack, Single-Phase Monitored PDUs The AC power feed cables for the International monitored PDUs are mounted to exit the rack at either the top or bottom rear corners of the rack depending on what is ordered for the site's power feed. Figure 33 shows the power feed cables on these PDUs with AC feed at the bottom of the rack and the AC power outlets along the PDU. For clarity the PDUs are shown facing outward via their swivel brackets. The single-phase power outlets face each other on opposite sides in the rack and are on swivel brackets that can be rotated to allow for servicing of components. NOTE: For visual clarity, the single-phase AC power feed illustrations show the PDUs facing outward as if their swivel brackets have been rotated. Typically, these PDUs would be on opposite sides facing each other. 30 Power Configurations for NonStop BladeSystems in G2 Racks

131 Figure 33 Bottom AC Power Feed in a G2 Rack, Single-Phase International Monitored PDUs Figure 34 shows the power feed cables on the single-phase International PDUs with AC feed at the top of the rack. NOTE: For visual clarity, the single-phase AC power feed illustrations show the PDUs facing outward as if their swivel brackets have been rotated. Typically, these PDUs would be on opposite sides facing each other. NonStop i BladeSystem Power Distribution G2 Rack 3

132 Figure 34 Top AC Power Feed in a G2 Rack, Single-Phase International Monitored PDUs INTL: Branch Circuits and Circuit Breakers for a G2 Rack Single-Phase Monitored PDUs Racks for NonStop BladeSystems that use a single-phase power configuration contain four half-height PDUs. In racks without the optional rack-mounted UPS, each of the four INTL PDUs requires a separate branch circuit of these ratings: Region Volts (Phase-to-Phase) Amps (see following CAUTION ) International (INTL) Category D circuit breaker is required. 200 to CAUTION: Be sure the hardware configuration and resultant power loads of each rack within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted R5000 UPS. 32 Power Configurations for NonStop BladeSystems in G2 Racks

INTL: Circuit Breaker Ratings for Single-Phase R5000 UPS in a G2 Rack

These ratings apply to systems with the optional rack-mounted R5000 Integrated UPS that is used for an INTL single-phase power configuration:
Version: International (INTL) | Operating Voltage Settings: 200, 230 2, 240 | Power Out (VA/Watts): 5000/4500 if set at 200/208 | Input Plug: IEC 309 32 Amp | UPS Input Rating: Dedicated 32 Amp 1
1 The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
2 Factory-default setting
For further information and specifications on the R5000 UPS and the UPS Network Module supported for the UPS, refer to the UPS Manuals.

Single-Phase Power Setup in a G2 Rack, Modular PDU

Power setup is only supported for the NA/JPN region with or without a UPS:
NA/JPN: Modular Single-Phase Power Setup in a G2 Rack With R5000 Rack-Mounted UPS (page 133)
NA/JPN: Single-Phase Modular Power Setup in a G2 Rack Without Rack-Mounted UPS (page 136)

NA/JPN: Modular Single-Phase Power Setup in a G2 Rack With R5000 Rack-Mounted UPS

To set up the power feed connections as shown in Figure 35:
1. Connect two single-phase 30A power feeds to the rack-mounted UPS L6-30P input connector on each R5000 UPS.
2. Connect two single-phase 30A power feeds to the modular PDU L6-30P (30A, 3 wire) input connectors.
3. Connect the modular PDUs to the extension bar PDUs, starting with the modular PDUs located in U1 and U2 (bottom-exit power) or U41 and U42 (if your system uses top-exit power feeds). All the output cables for the modular PDU are C19 cables.
NOTE: PDU 3 is located behind PDU 1. PDU 4 is located behind PDU 2. PDU 3 and PDU 4 are accessed from the front of the rack.
a. Facing the rear of the rack and PDU 1 located in U1 or U41:
Connect the cable from PDU 1, receptacle 1 to the top right extension bar, L1 input receptacle.
Connect the cable from PDU 1, receptacle 2 to the top right extension bar, L2 input receptacle.
Connect the cable from PDU 1, receptacle 4 to the top right extension bar, L4 input receptacle.

b. Facing the rear of the rack and PDU 2 located in U2 or U42:
Connect the cable from PDU 2, receptacle 3 to the lower right extension bar, L3 input receptacle.
c. Facing the front of the rack and PDU 3 (which is behind PDU 1) in U1 or U41:
Connect the cable from PDU 3, receptacle 1 to the top left extension bar, L1 input receptacle.
Connect the cable from PDU 3, receptacle 2 to the top left extension bar, L2 input receptacle.
Connect the cable from PDU 3, receptacle 4 to the top left extension bar, L4 input receptacle.
d. Facing the front of the rack and PDU 4 (which is behind PDU 2) in U2 or U42:
Connect the cable from PDU 4, receptacle 3 to the lower left extension bar, L3 input receptacle.
4. All the c7000 power supply (PS) cables are C19 cables. Connect these cables to the applicable receptacle on the modular PDU:
a. Connect the c7000 power supply cables from the right side of the c7000 input module to the modular PDU:
Connect the PS1 cable to PDU 2, receptacle 1 in U2 or U42.
Connect the PS2 cable to PDU 2, receptacle 2 in U2 or U42.
Connect the PS3 cable to PDU 1, receptacle 3 in U1 or U41.
b. Connect the c7000 power supply cables from the left side of the c7000 input module to the modular PDU:
Connect the PS4 cable to PDU 4, receptacle 1 in U2 or U42.
Connect the PS5 cable to PDU 4, receptacle 2 in U2 or U42.
Connect the PS6 cable to PDU 3, receptacle 3 in U1 or U41.
5. Connect the BladeSystem components as shown in the diagram, following the best practice of distributing connections between PDUs on either side, especially when there are two power cords per component.
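Because the receptacle-to-extension-bar mapping above is easy to miscable, it can help to hold it as data and print a checklist. The sketch below simply encodes the connections listed in steps 3 and 4 (PDU receptacles to extension-bar inputs, and c7000 power supplies PS1-PS6 to PDU receptacles); it adds nothing beyond what the procedure states.

```python
# Minimal sketch: the extension-bar cabling map from the steps above expressed
# as data, so a planned installation can be printed or cross-checked. Only the
# connections listed in the procedure are encoded; nothing else is implied.

EXTENSION_BAR_MAP = [
    # (PDU, receptacle, extension bar, input receptacle)
    (1, 1, "top right",   "L1"),
    (1, 2, "top right",   "L2"),
    (1, 4, "top right",   "L4"),
    (2, 3, "lower right", "L3"),
    (3, 1, "top left",    "L1"),
    (3, 2, "top left",    "L2"),
    (3, 4, "top left",    "L4"),
    (4, 3, "lower left",  "L3"),
]

C7000_PS_MAP = {  # c7000 power supply -> (PDU, receptacle)
    "PS1": (2, 1), "PS2": (2, 2), "PS3": (1, 3),
    "PS4": (4, 1), "PS5": (4, 2), "PS6": (3, 3),
}

for pdu, rcpt, bar, inp in EXTENSION_BAR_MAP:
    print(f"PDU {pdu} receptacle {rcpt} -> {bar} extension bar, {inp} input")
for ps, (pdu, rcpt) in C7000_PS_MAP.items():
    print(f"c7000 {ps} -> PDU {pdu} receptacle {rcpt}")
```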

135 Figure 35 North America/Japan Modular Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS NonStop i BladeSystem Power Distribution G2 Rack 35

NA/JPN: Single-Phase Modular Power Setup in a G2 Rack Without Rack-Mounted UPS

To set up the single-phase power feed connections as shown in Figure 36 (page 137):
1. Connect four single-phase 30A power feeds to the four modular PDU L6-30P (30A, 3 wire) input connectors.
2. Connect the modular PDUs to the extension bar PDUs, starting with the modular PDUs located in U1 and U2 (bottom-exit power) or U41 and U42 (if your system uses top-exit power feeds). All the output cables for the modular PDU are C19 cables.
NOTE: PDU 3 is located behind PDU 1. PDU 4 is located behind PDU 2. PDU 3 and PDU 4 are accessed from the front of the rack.
a. Facing the rear of the rack and PDU 1 located in U1 or U41:
Connect the cable from PDU 1, receptacle 1 to the top right extension bar, L1 input receptacle.
Connect the cable from PDU 1, receptacle 2 to the top right extension bar, L2 input receptacle.
Connect the cable from PDU 1, receptacle 4 to the top right extension bar, L4 input receptacle.
b. Facing the rear of the rack and PDU 2 located in U2 or U42:
Connect the cable from PDU 2, receptacle 3 to the lower right extension bar, L3 input receptacle.
c. Facing the front of the rack and PDU 3 (which is behind PDU 1) in U1 or U41:
Connect the cable from PDU 3, receptacle 1 to the top left extension bar, L1 input receptacle.
Connect the cable from PDU 3, receptacle 2 to the top left extension bar, L2 input receptacle.
Connect the cable from PDU 3, receptacle 4 to the top left extension bar, L4 input receptacle.
d. Facing the front of the rack and PDU 4 (which is behind PDU 2) in U2 or U42:
Connect the cable from PDU 4, receptacle 3 to the lower left extension bar, L3 input receptacle.
3. All the c7000 power supply (PS) cables are C19 cables. Connect these cables to the applicable receptacle on the modular PDU:
a. Connect the c7000 power supply cables from the right side of the c7000 input module to the modular PDU:
Connect the PS1 cable to PDU 2, receptacle 1 in U2 or U42.
Connect the PS2 cable to PDU 2, receptacle 2 in U2 or U42.
Connect the PS3 cable to PDU 1, receptacle 3 in U1 or U41.
b. Connect the c7000 power supply cables from the left side of the c7000 input module to the modular PDU:
Connect the PS4 cable to PDU 4, receptacle 1 in U2 or U42.
Connect the PS5 cable to PDU 4, receptacle 2 in U2 or U42.
Connect the PS6 cable to PDU 3, receptacle 3 in U1 or U41.
4. Connect the BladeSystem components as shown in the diagram, following the best practice of distributing connections between PDUs on either side, especially when there are two power cords per component.

137 Figure 36 North America/Japan Modular Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS NonStop i BladeSystem Power Distribution G2 Rack 37

NA/JPN: Single-Phase Modular in a G2 Rack PDU Description

Four modular power distribution units (PDUs) are installed to provide redundant power outlets for the c7000 enclosure and the components mounted in the rack. Each 1U rack-mounted modular PDU comes with four modular PDU extension bars. Two modular PDUs share the same U space. For example, PDU 1 and PDU 3 are in the same U location of the rack (U01), and PDU 2 and PDU 4 share the U02 rack location. The NA/JPN single-phase modular PDUs are oriented with the AC power cords exiting the rack at either the top or bottom rear corners of the rack, depending on the site's power feed needs. For information about specific PDU input and output characteristics for these PDUs, which are factory-installed in racks, refer to NA/JPN: Input and Output Power Characteristics in a G2 Rack, Single-Phase Modular PDU and Extension Bars (page 141).
Each single-phase PDU in a rack has:
28 AC receptacles (7 per extension bar) - IEC 320 C13 10A receptacle type
4 AC receptacles per modular PDU - IEC 320 C19 16A receptacle type
4 circuit-breakers
Each PDU distributes site single-phase power to single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the rack.

NA/JPN: AC Power Feeds, Single-Phase Modular PDUs

The AC power feed cables for the PDUs are mounted to exit the rack at either the top or bottom rear corners of the rack, depending on what is ordered for the site's power feed. Figure 37 shows the power feed cables on the single-phase NA/JPN modular PDUs with AC feed at the bottom of the rack and the AC power outlets along the PDU.

139 Figure 37 Bottom AC Power Feed in a G2 Rack Single-Phase NA/JPN Modular PDUs Figure 38 shows the power feed cables on the modular NA/JPN single-phase PDUs with AC feed at the top of the rack. NonStop i BladeSystem Power Distribution G2 Rack 39

140 Figure 38 Top AC Power Feed in a G2 Rack NA/JPN Single-Phase Modular PDUs 40 Power Configurations for NonStop BladeSystems in G2 Racks

NA/JPN: Input and Output Power Characteristics in a G2 Rack, Single-Phase Modular PDU and Extension Bars

The rack includes four power distribution units (PDUs) with these power characteristics:
PDU input characteristics
200V to 240V AC, single-phase, 30A RMS, 3-wire
12 feet (3.6 m) attached power cord
50/60Hz
NEMA L6-30P input plug as shown below
PDU output characteristics
4 IEC 320 C19 receptacles per PDU with 15A circuit-breaker labels
Extension bar input characteristics
200V to 240V AC, single-phase, 24A RMS, 3-wire
50/60Hz
IEC 320 C20 input plug
6.5 feet (2.0 m) attached power cord
Extension bar output characteristics
7 IEC 320 C13 receptacles per PDU with 10A maximum per outlet
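The extension-bar figures above (seven C13 outlets, 10A maximum per outlet, 24A bar input) can be checked the same way as the PDU segments. The sketch below screens a planned outlet assignment against those two limits; the example outlet currents are hypothetical.

```python
# Minimal sketch: validate planned loads on one modular-PDU extension bar
# against the characteristics above (7 x C13 outlets, 10 A max per outlet,
# 24 A RMS bar input). The example outlet loads are hypothetical.

OUTLETS = 7
MAX_PER_OUTLET_A = 10
BAR_INPUT_A = 24

planned = {1: 1.8, 2: 2.2, 3: 0.0, 4: 3.5, 5: 0.9, 6: 0.0, 7: 1.2}  # amps per outlet

assert set(planned) <= set(range(1, OUTLETS + 1)), "unknown outlet number"
for outlet, amps in planned.items():
    if amps > MAX_PER_OUTLET_A:
        print(f"outlet {outlet}: {amps} A exceeds the {MAX_PER_OUTLET_A} A outlet limit")

total = sum(planned.values())
print(f"bar total {total:.1f} A of {BAR_INPUT_A} A input rating")
```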

NA/JPN: Branch Circuits and Circuit Breakers in a G2 Rack, Single-Phase Modular PDUs

Racks for NonStop BladeSystems that use a single-phase modular power configuration contain four modular PDUs. In racks without the optional rack-mounted UPS, each of the four NA/JPN modular PDUs requires a separate branch circuit of these ratings:
Region: North America/Japan (NA/JPN) | Volts (Phase-to-Phase): 200 to 240 | Amps (see following CAUTION): 30
CAUTION: Be sure the hardware configuration and resultant power loads of each rack within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted R5000 UPS.

NA/JPN: Circuit Breaker Ratings for Single-Phase R5000 UPS in a G2 Rack

These ratings apply to systems with the optional rack-mounted R5000 that is used for a single-phase modular NA/JPN power configuration:
Version: North America/Japan (NA/JPN) | Operating Voltage Settings: 200/208 2, 220, 230, 240 | Power Out (VA/Watts): 5000/4500 | Input Plug: L6-30P | UPS Input Rating: Dedicated 30 Amp 1
1 The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
2 Factory-default setting
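When the optional R5000 UPS is present, the protected load also has to fit within the UPS output rating in the table above (5000 VA / 4500 W). The sketch below performs that comparison; the component wattages and the 0.9 aggregate power factor are illustrative assumptions, not measured values.

```python
# Minimal sketch: compare a planned protected load against the R5000 ratings
# in the table above (5000 VA / 4500 W). The load figures and the 0.9
# aggregate power factor are assumptions for illustration only.

UPS_VA, UPS_W = 5000, 4500

loads_w = {"c7000 enclosure": 2600, "CLIM pair": 700, "disk enclosure": 350}
power_factor = 0.9

total_w = sum(loads_w.values())
total_va = total_w / power_factor
print(f"load: {total_w} W / {total_va:.0f} VA")
print("within UPS rating" if total_w <= UPS_W and total_va <= UPS_VA
      else "exceeds UPS rating -- reduce the protected load")
```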

NonStop i BladeSystem Three-Phase Power Distribution in a G2 Rack

North America/Japan (NA/JPN) and International (INTL) are the supported regions for three-phase NonStop BladeSystems. For these regions, there are four different versions of the rack-level PDU, depending on whether you are using modular or monitored PDUs. For c7000 three-phase power setup details, refer to the instructions for your PDU:
Three-Phase Power Setup in a G2 Rack, Monitored PDUs (page 143)
Three-Phase Power Setup in a G2 Rack, Modular PDUs (page 152)
Three-phase power configurations require 200V to 240V distribution and careful attention to phase load balancing. For more information, refer to Phase Load Balancing (page 165).
The NonStop BladeSystem's three-phase c7000 enclosure contains an AC Input Module that provides N + N redundant power distribution for the power configurations. This power module comes with a pair of power cords that provide direct AC power feeds to the c7000 enclosure. One c7000 power feed connects to a rack PDU that is powered from the main power grid. The other c7000 power feed connects to a rack PDU that is powered from either the backup power grid or from the R12000/3 UPS installed in the rack. These PDUs and the rack UPS are dedicated to the c7000. For the other components in the rack, two other PDUs are available (main and backup) and a second, optional rack UPS is available.

Three-Phase Power Setup in a G2 Rack, Monitored PDUs

Power setup is based on your region and whether your configuration includes a UPS:
NA/JPN: Monitored PDU in a G2 Rack: Three-Phase Power Setup With Rack-Mounted UPS (page 143)
NA/JPN: Monitored PDU in a G2 Rack: Three-Phase Power Setup Without Rack-Mounted UPS (page 144)
INTL: Monitored PDU: Three-Phase Power Setup in a G2 Rack With Rack-Mounted UPS (page 149)
INTL: Monitored PDU: Three-Phase Power Setup Without Rack-Mounted UPS (page 150)

NA/JPN: Monitored PDU in a G2 Rack: Three-Phase Power Setup With Rack-Mounted UPS

To set up the power feed connections as shown in Figure 39:
1. Connect one 3-phase 60A power feed to the rack-mounted UPS IEC 309 (60A, 5 wire) input connector.
2. Connect one 3-phase 30A power feed to the AF504A PDU NEMA L15-30P (30A, 4 wire) input connector.

3. Connect one 3-phase 30A power feed to the c7000 enclosure's NEMA L15-30P (30A, 4 wire/3 pole) input connector.
Figure 39 North America/Japan Monitored 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS

NA/JPN: Monitored PDU in a G2 Rack: Three-Phase Power Setup Without Rack-Mounted UPS

To set up the power feed connections as shown in Figure 40:
1. Connect two 3-phase 30A power feeds to the two AF504A PDU NEMA L15-30P (30A, 4 wire/3 pole) input connectors.
2. Connect two 3-phase 30A power feeds to the two NEMA L15-30P (30A, 4 wire/3 pole) input connectors within the c7000 enclosure.

Figure 40 North America/Japan Monitored 3-Phase Power Setup Without Rack-Mounted UPS

NA/JPN: Monitored Three-Phase in a G2 Rack PDU Description

Two monitored three-phase NA/JPN power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the rack. The PDUs are oriented inward, facing the components within the rack. Each PDU is 60 inches long and has 39 AC receptacles, three circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the rack at either the top or bottom rear corners of the rack, depending on the site's power feed needs. For information about specific PDU input and output characteristics for these PDUs, which are factory-installed in racks, refer to NA/JPN: Input and Output Power Characteristics in a G2 Rack, Three-Phase Monitored PDUs and c7000s (page 147).
Each three-phase monitored PDU in a rack has:
36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type
3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type
3 circuit-breakers
These 200V to 240V AC, three-phase delta for NA/JPN PDUs receive power from the site AC power source. Each PDU distributes site three-phase power to 39 single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the rack.

146 NA/JPN: AC Power Feeds in a G2 Rack, Three-Phase Monitored PDUs The AC power feed cables for the PDUs are mounted to exit the rack at either the top or bottom rear corners of the rack depending on what is ordered for the site's power feed. Bottom AC Power Feed in a G2 Rack, Three-Phase (Monitored NA/JPN PDUs) (page 46) shows the power feed cables on monitored three-phase PDUs with AC feed at the bottom of the rack and the AC power outlets along the PDU. The monitored three-phase power outlets face in toward the components in the rack. Figure 4 Bottom AC Power Feed in a G2 Rack, Three-Phase (Monitored NA/JPN PDUs) Figure 42 shows the power feed cables on the three-phase monitored PDUs with AC feed at the top of the rack. 46 Power Configurations for NonStop BladeSystems in G2 Racks

Figure 42 Top AC Power Feed in a G2 Rack, Three-Phase (Monitored NA/JPN PDUs)

NA/JPN: Input and Output Power Characteristics in a G2 Rack, Three-Phase Monitored PDUs and c7000s

The rack includes two power distribution units (PDUs) with these power characteristics:
PDU input characteristics
200V to 240V AC, 3-phase delta, 30A RMS, 4-wire
6.5 feet (2 m) attached power cord
50/60Hz
NEMA L15-30P input plug as shown below:
PDU output characteristics
3 circuit-breaker-protected 13.86A load segments
36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type
3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type

The rack includes a c7000 input with these power characteristics:
c7000 input characteristics
200V to 208V AC, 3-phase delta, 30A RMS, 4-wire
10 feet (3.5 m) attached power cord
50/60Hz
NEMA L15-30P input plug as shown below:

NA/JPN: Branch Circuits and Circuit Breakers in a G2 Rack, Monitored Three-Phase

Racks for the NonStop i BladeSystem that use a three-phase power configuration with modular or monitored three-phase PDUs contain two PDUs. In racks without the optional rack-mounted UPS, each of the two PDUs requires a separate branch circuit of these ratings:
Region: North America/Japan (NA/JPN) PDU | Volts (Phase-to-Phase): 200 to 240 | Amps (see following CAUTION): 30
Region: North America/Japan (NA/JPN) c7000 | Volts (Phase-to-Phase): 200 to 208 | Amps (see following CAUTION): 30
CAUTION: Be sure the hardware configuration and resultant power loads of each rack within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted R12000/3 Integrated UPS.

NA/JPN: Circuit Breaker Ratings, Monitored Three-Phase UPS in a G2 Rack

These ratings apply to systems with the optional rack-mounted R12000/3 Integrated UPS that is used for a three-phase power configuration:
Version: North America/Japan (NA/JPN) | Power Out (VA/Watts): 12000/12000 | Input Plug: IEC 309 5-pin, 60 Amp | UPS Input Rating: Dedicated 60 Amp 1
1 The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
2 Factory-default setting

For further information and specifications on the R12000/3 UPS (12kVA model), refer to the HPE 3 Phase UPS User Guide for the 12kVA model. This guide is available at:
For information about the UPS management module supported for the R12000/3 UPS, refer to the HPE UPS Management Module User Guide located at:

INTL: Monitored PDU: Three-Phase Power Setup in a G2 Rack With Rack-Mounted UPS

To set up the power feed connections as shown in International Monitored 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS (page 149):
1. Connect one 3-phase 32A power feed to the rack-mounted UPS IEC 309 (32A, 5 wire/4 pole) input connector.
2. Connect one 3-phase 16A power feed to the AF508A PDU IEC309 516P6 (16A, 5 wire/4 pole) input connector.
3. Connect one 3-phase 16A power feed to the c7000 enclosure's IEC309 516P6 (16A, 5 wire/4 pole) input connector.
Figure 43 International Monitored 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS

INTL: Monitored PDU: Three-Phase Power Setup Without Rack-Mounted UPS

To set up the power feed connections as shown in Figure 44:
1. Connect two 3-phase 16A power feeds to the two AF508A PDU IEC309 516P6 (16A, 5 wire/4 pole) input connectors.
2. Connect two 3-phase 16A power feeds to the two IEC309 516P6 (16A, 5 wire/4 pole) input connectors within the c7000 enclosure.
Figure 44 International Monitored 3-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS

INTL: Three-Phase Monitored in a G2 Rack PDU Description

Two monitored three-phase INTL power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the rack. The PDUs are oriented inward, facing the components within the rack. Each PDU is 60 inches long and has 39 AC receptacles, three circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the rack at either the top or bottom rear corners of the rack, depending on the site's power feed needs. For information about specific PDU input and output characteristics for PDUs factory-installed in racks, refer to INTL: Input and Output Power Characteristics, Three-Phase Monitored PDU and c7000 (page 151).

Each three-phase monitored PDU in a rack has:
36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type
3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type
3 circuit-breakers
These 380V to 415V AC, three-phase wye International PDUs receive power from the site AC power source. Each PDU distributes site three-phase power to 39 single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the rack.

INTL: Input and Output Power Characteristics, Three-Phase Monitored PDU and c7000

The power characteristics of the monitored three-phase INTL PDU are:
PDU input characteristics
380 to 415 V AC, 3-phase Wye, 16A RMS, 5-wire
6.5 feet (2 m) attached harmonized power cord
50/60Hz
IEC309 516P6 5-pin, 16A input plug as shown below:
PDU output characteristics
3 circuit-breaker-protected 16A load segments
36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type
3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type
The rack includes a c7000 input with these power characteristics:
c7000 input characteristics
346 to 415 VAC line to line; 200 to 240 V AC line to neutral; 3-phase Wye, 16A RMS, 5-wire
10 feet (3.5 m) attached harmonized power cord
50/60Hz
IEC309 516P6 5-pin, 16A input plug as shown below:

INTL: Branch Circuits and Circuit Breakers in a G2 Rack, Three-Phase

Racks for the NonStop i BladeSystem that use a three-phase power configuration with monitored three-phase PDUs contain two PDUs.

In racks without the optional rack-mounted UPS, each of the two PDUs requires a separate branch circuit of these ratings:
Region: International PDU 1 | Volts (Phase-to-Phase): 380 to 415 | Amps (see following CAUTION): 16
Region: International c7000 2 | Volts (Phase-to-Phase): 346 to 415 | Amps (see following CAUTION): 16
1 Category D circuit breaker is required.
2 Category D circuit breaker is required.
CAUTION: Be sure the hardware configuration and resultant power loads of each rack within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted R12000/3 Integrated UPS.

INTL: Circuit Breaker Ratings for Three-Phase UPS in a G2 Rack

These ratings apply to systems with the optional rack-mounted R12000/3 Integrated UPS that is used for an INTL three-phase power configuration:
Version: International (380-415) | Power Out (VA/Watts): 12000/12000 | Input Plug: IEC 309 32 Amp | UPS Input Rating: Dedicated 32 Amp 1
1 The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
2 Factory-default setting
For further information and specifications on the R12000/3 UPS (12kVA model), refer to the HPE 3 Phase UPS User Guide for the 12kVA model. This guide is available at:
For information about the UPS management module supported for the R12000/3 UPS, refer to the HPE UPS Management Module User Guide located at:

Three-Phase Power Setup in a G2 Rack, Modular PDUs

Power setup is based on your region and whether your configuration includes a UPS:
NA/JPN: Modular PDU in a G2 Rack: Three-Phase Power Setup With Rack-Mounted UPS (page 153)
NA/JPN: Modular PDU: Three-Phase Power Setup Without Rack-Mounted UPS (page 153)
INTL: Modular PDU: Three-Phase Power Setup in a G2 Rack (With Rack-Mounted UPS) (page 158)
INTL: Modular PDU: Three-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS) (page 159)

NA/JPN: Modular PDU in a G2 Rack: Three-Phase Power Setup With Rack-Mounted UPS

To set up the power feed connections as shown in Figure 45:
1. Connect one 3-phase 60A power feed to the rack-mounted UPS IEC 309 (60A, 5 wire) input connector.
2. Connect one 3-phase 30A power feed to the AF52A PDU NEMA L15-30P (30A, 4 wire) input connector.
3. Connect one 3-phase 30A power feed to the c7000 enclosure's NEMA L15-30P (30A, 4 wire) input connector.
Figure 45 North America/Japan Modular 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS

NA/JPN: Modular PDU: Three-Phase Power Setup Without Rack-Mounted UPS

To set up the power feed connections as shown in Figure 46:
1. Connect two 3-phase 30A power feeds to the two AF52A PDU NEMA L15-30P (30A, 4 wire/3 pole) input connectors.
2. Connect two 3-phase 30A power feeds to the two NEMA L15-30P (30A, 4 wire/3 pole) input connectors within the c7000 enclosure.

Figure 46 North America/Japan Modular 3-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS

NA/JPN: Description of Modular Three-Phase PDU in G2 Rack

Two three-phase modular power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the rack. Each 1U rack-mounted modular PDU comes with four modular PDU extension bars. The PDUs are oriented facing each other within the rack. Each PDU has 28 AC receptacles, six circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the rack at either the top or bottom rear corners of the rack, depending on the site's power feed needs. For information about specific PDU input and output characteristics for PDUs factory-installed in racks, refer to NA/JPN: Input and Output Power Characteristics, Three-Phase Modular PDU, c7000, and Extension Bars in a G2 Rack (page 156).
Each three-phase modular PDU in a rack has:
28 AC receptacles per PDU (7 per extension bar) - IEC 320 C13 10A receptacle type
6 circuit-breakers
The 208V AC, three-phase delta for North America/Japan (NA/JPN) PDU receives power from the site AC power source. Each PDU distributes site three-phase power to 34 single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the rack.

NA/JPN: Modular PDU in G2 Rack: AC Power Feeds

The AC power feed cables for the NA/JPN modular PDUs are mounted to exit the rack at either the top or bottom rear corners of the rack, depending on what is ordered for the site's power feed. Bottom AC Power Feed in G2 Rack, Three Phase (NA/JPN Modular PDUs) (page 155) shows the power feed cables on modular three-phase PDUs with AC feed at the bottom of the rack and the output connections for the three-phase modular PDU.

155 Figure 47 Bottom AC Power Feed in G2 Rack, Three Phase (NA/JPN Modular PDUs) Top AC Power Feed in G2 Rack, Three-Phase (NA/JPN Modular PDU) (page 56) shows the three-phase modular PDUs with AC feed at the top of the rack. NonStop i BladeSystem Power Distribution G2 Rack 55

Figure 48 Top AC Power Feed in G2 Rack, Three-Phase (NA/JPN Modular PDU)

NA/JPN: Input and Output Power Characteristics, Three-Phase Modular PDU, c7000, and Extension Bars in a G2 Rack

The rack includes two power distribution units (PDUs) with these power characteristics:
PDU input characteristics
208V AC, 3-phase delta, 14.52A RMS, 4-wire
12 feet (3.6 m) attached power cord
50/60Hz
NEMA L15-30P input plug as shown below:
PDU output characteristics
6 IEC 320 C19 receptacles per PDU with 20A circuit-breaker labels (L1, L2, L3, L4, L5, and L6)

Extension bar input characteristics
100V to 240V AC, 3-phase delta, 16A RMS, 4-wire
50/60Hz
IEC 320 C20 input plug
6.5 feet (2.0 m) attached power cord
Extension bar output characteristics
7 IEC 320 C13 receptacles per PDU with 10A maximum per outlet
The rack includes a c7000 input with these power characteristics:
c7000 input characteristics
200V to 208V AC, 3-phase delta, 30A RMS, 4-wire
10 feet (3.5 m) attached power cord
50/60Hz
NEMA L15-30P input plug as shown below:

NA/JPN: Branch Circuits and Circuit Breakers, Modular Three-Phase in G2 Rack

Racks for the NonStop i BladeSystem that use a three-phase power configuration with modular three-phase PDUs contain two PDUs. In racks without the optional rack-mounted UPS, each of the two PDUs requires a separate branch circuit of these ratings:
Region: North America/Japan (NA/JPN) PDU | Volts (Phase-to-Phase): 200 to 240 | Amps (see following CAUTION): 30
Region: North America/Japan (NA/JPN) c7000 | Volts (Phase-to-Phase): 200 to 208 | Amps (see following CAUTION): 30
CAUTION: Be sure the hardware configuration and resultant power loads of each rack within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted R12000/3 Integrated UPS.

NA/JPN: Circuit Breaker Ratings, Modular Three-Phase UPS in G2 Rack

These ratings apply to systems with the optional rack-mounted R12000/3 Integrated UPS that is used for a three-phase power configuration:
Version: North America/Japan (NA/JPN) | Power Out (VA/Watts): 12000/12000 | Input Plug: IEC 309 5-pin, 60 Amp | UPS Input Rating: Dedicated 60 Amp 1
1 The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
2 Factory-default setting
For further information and specifications on the R12000/3 UPS (12kVA model), refer to the HPE 3 Phase UPS User Guide for the 12kVA model. This guide is available at:
For information about the UPS management module supported for the R12000/3 UPS, refer to the HPE UPS Management Module User Guide located at:

INTL: Modular PDU: Three-Phase Power Setup in a G2 Rack (With Rack-Mounted UPS)

To set up the power feed connections as shown in Figure 49:
1. Connect one 3-phase 32A power feed to the rack-mounted UPS IEC 309 (32A, 5 wire/4 pole) input connector.
2. Connect one 3-phase 16A power feed to the AF53A PDU IEC309 516P6 (16A, 5 wire/4 pole) input connector.
3. Connect one 3-phase 16A power feed to the c7000 enclosure's IEC309 516P6 (16A, 5 wire/4 pole) input connector.

Figure 49 International Modular 3-Phase Power Setup in a G2 Rack (With Rack-Mounted UPS)

INTL: Modular PDU: Three-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS)

To set up the power feed connections as shown in Figure 50:
1. Connect two 3-phase 16A power feeds to the two AF53A PDU IEC309 516P6 (16A, 5 wire/4 pole) input connectors.
2. Connect two 3-phase 16A power feeds to the two IEC309 516P6 (16A, 5 wire/4 pole) input connectors within the c7000 enclosure.

Figure 50 International Modular 3-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS)

Description of INTL: Three-Phase Modular PDU in a G2 Rack

Two three-phase INTL modular power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the rack. Each 1U rack-mounted modular PDU comes with four modular PDU extension bars. The PDUs are oriented facing each other within the rack. Each PDU has 28 AC receptacles, six circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the rack at either the top or bottom rear corners of the rack, depending on the site's power feed needs. For information about specific PDU input and output characteristics for PDUs factory-installed in racks, refer to Enclosure AC Input G2 Rack (page 164).
Each three-phase modular PDU in a rack has:
28 AC receptacles per PDU (7 per extension bar) - IEC 320 C13 10A receptacle type
6 circuit-breakers
The 380V to 415V AC, three-phase wye for International (INTL) PDU receives power from the site AC power source. Each PDU distributes site three-phase power to 34 single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the rack.

INTL: Modular PDU in a G2 Rack: AC Power Feeds, Three-Phase

The AC power feed cables for the PDUs are mounted to exit the rack at either the top or bottom rear corners of the rack, depending on what is ordered for the site's power feed. Figure 51 (page 161) shows the power feed cables on modular three-phase PDUs with AC feed at the bottom of the rack and the output connections for the three-phase modular PDU.

Figure 51 Bottom AC Power Feed in a G2 Rack, Three Phase (INTL Modular PDUs)
Figure 52 (page 162) shows the three-phase modular PDUs with AC feed at the top of the rack.

Figure 52 Top AC Power Feed in a G2 Rack, Three-Phase (INTL Modular PDU)

INTL: Input and Output Power Characteristics, Three-Phase Modular PDU, c7000, and Extension Bars in a G2 Rack

The G2 rack includes two power distribution units (PDUs) with these power characteristics:
PDU input characteristics
380 to 415 V AC, 3-phase Wye, 16A RMS, 5-wire
12 feet (3.6 m) attached power cord
50/60Hz
IEC309 5-pin, 4-pole, 16A input plug as shown below:
PDU output characteristics
6 AC IEC 320 C19 receptacles per PDU with 20A circuit-breaker labels (L1, L2, L3, L4, L5, and L6)
Extension bar input characteristics
200V to 240V AC, 3-phase delta, 16A RMS, 4-wire
50/60Hz
IEC 320 C20 input plug
6.5 feet (2.0 m) attached power cord
Extension bar output characteristics
7 AC IEC 320 C13 receptacles per PDU with 10A maximum per outlet
The G2 rack includes a c7000 input with these power characteristics:
c7000 input characteristics
346 to 415 VAC line to line; 200 to 240 V AC line to neutral; 3-phase Wye, 16A RMS, 5-wire
200 to 240 VAC loads wired phase to neutral
10 feet (3.5 m) attached harmonized power cord
50/60Hz
IEC309 516P6 5-pin, 16A input plug as shown below:

163 Extension bar input characteristics 200V to 240V AC, 3-phase delta, 6A RMS, 4-wire 50/60Hz IEC 320 C20 input plug 6.5 feet (2.0 m) attached power cord Extension bar output characteristics 7 AC IEC 320 C3 receptacles per PDU with 0A maximum per outlet The G2 rack includes a c7000 input with these power characteristics: c7000 input characteristics 346 to 45 VAC line to line; 200 to 240 V AC line to neutral; 3-phase Wye, 6A RMS, 5-wire 200 to 240 VAC loads wired phase to neutral 0 feet (3.5 m) attached harmonized power cord 50/60Hz IEC309 56P6 5-pin, 6A input plug as shown below: INTL: Modular PDU: Branch Circuits and Circuit Breakers, Three-Phase in a G2 Rack G2 racks use a three-phase power configuration for the modular three-phase PDUs and contain two PDUs. G2 racks without the optional rack-mounted UPS, require a separate branch circuit of these ratings for each of the two PDUs: Region International PDU Volts (Phase-to-Phase) 380 to 45 Amps (see following CAUTION ) 6 International c7000 Category D circuit breaker is required. 2 Category D circuit breaker is required. 346 to CAUTION: Be sure the hardware configuration and resultant power loads of each rack within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted R2000/3 Integrated UPS. NonStop i BladeSystem Power Distribution G2 Rack 63

Circuit Breaker Ratings for Three-Phase UPS, INTL Modular PDUs in a G2 Rack

These ratings apply to systems with the optional rack-mounted R12000/3 Integrated UPS that is used for an INTL three-phase power configuration:
Version: International (380-415) | Power Out (VA/Watts): 12000/12000 | Input Plug: IEC 309 32 Amp | UPS Input Rating: Dedicated 32 Amp 1
1 The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
2 Factory-default setting
For further information and specifications on the R12000/3 UPS (12kVA model), refer to the HPE 3 Phase UPS User Guide for the 12kVA model. This guide is available at:
For information about the UPS management module supported for the R12000/3 UPS, refer to the HPE UPS Management Module User Guide located at:

Enclosure AC Input G2 Rack

NOTE: For instructions on grounding the G2 rack by using the Rack Grounding Kit (AF074A), ask your service provider to refer to the instructions in the HPE 10000 G2 Series Rack Options Installation Guide.
Enclosures (IP CLIM, IOAM enclosure, and so forth) require:
Nominal input voltage: 200/208/220/230/240 V AC RMS
Voltage range: - V AC
Nominal line frequency: 50 or 60 Hz
Frequency ranges: - Hz or - Hz
Number of phases: 1
c7000 enclosures require:
3-phase (NA/JPN) -- Voltage range: 200 to 208 VAC line to line, 3-phase Delta; Nominal line frequency: 50 or 60 Hz; Number of phases: 3
3-phase (INTL) -- Voltage range: 346 to 415 VAC line to line, 200 to 240 VAC line to neutral; Nominal line frequency: 50 or 60 Hz; Number of phases: 3
1-phase -- Voltage range: 200 to 240 VAC; Nominal line frequency: 50 or 60 Hz; Number of phases: 1
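A site survey can screen candidate feeds against these requirements before installation. The sketch below checks a single-phase enclosure feed against the 200-240 V AC nominal band and 50/60 Hz nominal frequency listed above; it is a rough screen only, and the full voltage and frequency tolerance ranges in the table remain authoritative.

```python
# Minimal sketch: screen a site power feed against the enclosure AC input
# requirements summarized above. Only the 200-240 V nominal band, 50/60 Hz,
# and single-phase criteria are encoded; the table's exact tolerance ranges
# take precedence over this rough check.

def feed_ok(volts: float, hz: float, phases: int) -> bool:
    """Rudimentary screen for a single-phase enclosure feed."""
    return 200 <= volts <= 240 and hz in (50, 60) and phases == 1

for volts, hz, phases in [(208, 60, 1), (230, 50, 1), (120, 60, 1)]:
    verdict = "acceptable" if feed_ok(volts, hz, phases) else "out of range"
    print(f"{volts} V / {hz} Hz / {phases}-phase: {verdict}")
```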

Phase Load Balancing

For three-phase racks, each PDU is wired so that there are three load segments, with groups of outlets alternating between load segments going up and down the PDU. Factory-installed enclosures, other than the c7000, are connected to the PDUs on alternating load segments to facilitate phase load balancing. The c7000 has its own three-phase input, with each phase (International) or pair of phases (North America/Japan) associated with one of the c7000 power supplies. When the c7000 is operating in Dynamic Power Saving Mode, the minimum number of power supplies is enabled to redundantly power the enclosure. This mode increases power supply efficiency, but leaves the phases or phase pairs associated with the disabled power supplies unloaded. For multiple-rack installations, in order to balance phase loads when Dynamic Power Saving Mode is enabled, Hewlett Packard Enterprise recommends rotating the phases from one rack to the next. For example, if the first rack is wired A-B-C, the next rack should be wired B-C-A, and the next C-A-B, and so on.
For single-phase racks, follow the guidelines for phase load balancing by alternating load segment assignments of module power cords to prevent overloading the PDU load segments and circuit breakers.
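The rack-to-rack rotation recommended above is mechanical enough to generate programmatically. The sketch below produces the suggested wiring order (A-B-C, B-C-A, C-A-B, repeating) for a given number of racks; the rack numbering is the only assumption.

```python
# Minimal sketch of the rack-to-rack phase rotation recommended above
# (A-B-C, then B-C-A, then C-A-B, and so on). Rack labels are placeholders.

def phase_rotation(rack_count):
    """Yield the input-phase wiring order for each successive rack."""
    phases = ["A", "B", "C"]
    for i in range(rack_count):
        yield phases[i % 3:] + phases[:i % 3]

for rack, order in enumerate(phase_rotation(4), start=1):
    print(f"rack {rack}: wired {'-'.join(order)}")
# rack 1: A-B-C, rack 2: B-C-A, rack 3: C-A-B, rack 4: A-B-C
```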

E Earlier CLIM Models (G2, G5, and G6 CLIMs)

The G2, G5, and G6 CLIMs function as Ethernet or I/O adapters and are managed by the CIP subsystem. A CLIM is identified by the number on the rear label; this same number is also listed as the part number in OSM. Gen8 and Gen9 CLIM models are described in CLuster I/O Modules (CLIMs) (page 9). These topics describe the G2, G5, and G6 CLIM models:
Earlier Networking CLIMs (IP, Telco, and IB)
Earlier Storage CLIMs and SAS disk enclosures

Earlier Networking CLIMs (IP, Telco, IB)

This illustration shows the front views of the G2, G5, and G6 CLIMs.

G2 or G5 IP CLIM Option 1 Five Ethernet Copper Ports

Slot 3 contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections.
Slot 1 contains a PCIe NIC that provides four Gigabit Ethernet copper ports for customer interfaces.
Eth1 port provides one embedded Gigabit NIC for customer data.

G2 or G5 IP CLIM Option 2 Three Ethernet Copper and Two Ethernet Optical Ports

Slot 1 contains a NIC that provides two Gigabit Ethernet copper ports for customer interfaces.
Eth1 port provides one embedded Gigabit NIC for customer data.
Slot 2 contains a NIC that provides one Gigabit Ethernet optical port for customer interfaces.
Slot 3 contains a ServerNet interface PCIe card, which provides the ServerNet fabric connections.
Slot 4 contains a NIC that provides one Gigabit Ethernet optical port for customer interfaces.

G6 IP CLIM Option 1 Five Ethernet Copper Ports

Slot 1 contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections.
Slot 2 contains a 2-port GbE copper NIC for customer interfaces.
Eth1, eth2, and eth3 ports provide three embedded Gigabit Ethernet copper ports for customer data.

G6 IP CLIM Option 2 Three Ethernet Copper and Two Ethernet Optical Ports

Slot 1 contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections.
Slots 2 and 3 each contain a 1-port GbE optical NIC for customer interfaces.
Eth1, eth2, and eth3 ports provide three embedded Gigabit Ethernet copper ports for customer data.

G2 or G5 Telco CLIM Five Ethernet Copper Ports

Slot 3 contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections.
Slot 1 contains a PCIe NIC that provides four Gigabit Ethernet copper ports for customer interfaces.
Eth1 port provides one embedded Gigabit NIC for customer data.

169 G6 Telco CLIM Five Ethernet Copper Ports Slot contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections. Slot 2 contains a 2-port GbE copper NIC for customer interfaces. Eth, eth2, and eth3 ports provide three embedded Gb Ethernet copper ports for customer data. G6 IB CLIM The G6 IB CLIM is used in some NonStop i BladeSystem configurations to provide InfiniBand connectivity via dual-ported Host Channel Adapter (HCA) InfiniBand interfaces. The HCA IB interface on the IB CLIM connects to a customer-supplied IB switch using a customer-supplied cable as part of the Low Latency Solution. The Low Latency Solution also requires Subnet Manager software either installed on the IB switch or running on another server. NOTE: IB CLIMs are only used as a Low Latency Solution. They do not provide general purpose InfiniBand connectivity for NonStop Systems. The Low Latency Solution architecture provides a high speed and low latency messaging system for stock exchange trading from the incoming trade server to the NonStop operating system. The solution utilizes third-party software from Informatica for messaging and order sequencing which must be installed separately. For information about obtaining Informatica software, contact your service provider. For the list of supported third-party software, refer to the CLuster I/O Module (CLIM) Software Compatibility Reference. The following illustration shows the IB and Ethernet interfaces and ServerNet fabric connections on an IB CLIM. 69

G6 IB CLIM Port and Slot Description
Eth1, Eth2, Eth3 ports: Each Eth port provides one Gigabit Ethernet copper interface via an embedded Gigabit NIC.
Slot 1: ServerNet fabric connections via a PCIe 4x adapter.
Slot 2 and Slot 3: Unused.
Slot 4: Two InfiniBand interfaces (ib0 and ib1 ports) via the IB HCA card. Only one IB interface port is utilized by the Informatica software. Hewlett Packard Enterprise recommends connecting to the ib0 interface for ease of manageability.
Slot 5 and Slot 6: Unused.
NOTE: All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual.

Earlier Storage CLIMs and SAS Disk Enclosures

G2 or G5 Storage CLIM

The G2 and G5 Storage CLIMs contain 5 PCIe HBA slots with these characteristics:
Storage CLIM HBA Slot | Configuration | Provides
1 | Optional customer order | SAS or Fibre Channel. NOTE: Not part of base configuration. Optional customer order.
2 | Optional customer order | SAS or Fibre Channel. NOTE: Not part of base configuration. Optional customer order.
3 | Part of base configuration | ServerNet fabric connections via a PCIe 4x adapter.
4 | Part of base configuration | One SAS external connector with four SAS links per connector and 3 Gbps per link, provided by the PCIe 8x slot.
5 | Part of base configuration | One SAS external and internal connector with four SAS links per connector and 3 Gbps per link, provided by the PCIe 8x slot.

The G2 and G5 CLIMs use the StorageWorks 70 Modular Smart Array (MSA70) disk enclosure, which holds 25 SAS HDDs.

G6 Storage CLIM

NOTE: The G6 Storage CLIM uses the D2700 SAS disk enclosure described in SAS Disk Enclosure (page 23).
The G6 Storage CLIM contains 4 PCIe HBA slots with these characteristics:
Storage CLIM HBA Slot | Configuration | Provides
1 | Part of base configuration | ServerNet fabric connections via a PCIe 4x adapter.
2 | Part of base configuration | Two SAS external connectors with four SAS links per connector and 6 Gbps per link, provided by the PCIe 8x slot.
3 | Optional customer order | SAS or Fibre Channel. NOTE: Not part of base configuration. Optional customer order.
4 | Optional customer order | SAS or Fibre Channel. NOTE: Not part of base configuration. Optional customer order.
5, 6 | Not used | -

172 NOTE: The Storage CLIM uses the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. 72 Earlier CLIM Models (G2, G5, and G6 CLIMs)

F Legacy Hardware (IOAM, FCDM, S-Series I/O Enclosure)

NonStop S-series I/O Enclosures (Optional)

NOTE: For NonStop BladeSystems NB54000c and NB56000c, only the Platform Configuration 9 option supports connections to NonStop S-series I/O enclosures. For more information, refer to NonStop i BladeSystems Platform Configurations (page 26).
Up to four NonStop S-series I/O enclosures (Groups 1-4) can be connected to the ServerNet switches in a c7000 enclosure (Group 100 only). The supported S-series I/O connections are for the Token-Ring ServerNet Adapter (TRSA) or 6763 Common Communications ServerNet Adapter 2 (CCSA 2). These adapters can coexist in the same system configuration. No other S-series ServerNet adapters are supported.
NonStop S-series I/O enclosures in NonStop BladeSystems have these characteristics:
Connections are made using an MTP-SC cable between the IOMF 2 FRUs (slots 50 and 55) in the S-series I/O enclosure and port 3 of the ServerNet switches in the c7000 enclosure. Each IOMF 2 connection supports up to 2 TRSAs or 2 CCSA 2s. Only ServerNet switch port 3 supports S-series I/O connections, and all S-series connections are made to this one port.
The group numbers for the four S-series I/O enclosures are 1, 2, 3, and 4. These enclosures can be connected to the Group 100 c7000 enclosure only.
New NonStop S-series I/O enclosures can be installed in G2 racks. If you have existing S-series I/O enclosures in other types of racks, you can also connect these I/O enclosures to a NonStop BladeSystem.
The NonStop S-series I/O enclosure assembly requires a total of 20 U of contiguous space. A rack can only contain up to two NonStop S-series I/O enclosure assemblies.
If NonStop S-series I/O enclosures are present, CLIMs cannot be connected to port 3 of the ServerNet switches in the c7000 enclosure.
For more information about these connections, ask your Hewlett Packard Enterprise service provider to refer to the NonStop i BladeSystem Hardware Installation Manual.

NonStop S-series CO I/O Enclosures (Optional)

Up to four NonStop S-series CO I/O enclosures (Groups 1-4) can be connected to the ServerNet switches in a c7000 enclosure (Group 100 only). For more information, refer to Legacy Hardware (IOAM, FCDM, S-Series I/O Enclosure) (page 173). The supported S-series CO I/O connections are for the Token-Ring ServerNet Adapter (TRSA) or 6763 Common Communications ServerNet Adapter 2 (CCSA 2), which provide SS7 connectivity. These adapters can coexist in the same system configuration. No other S-series ServerNet adapters are supported. These connections are made using an MTP-SC cable between the IOMF 2 FRUs (slots 50 and 55) in the S-series CO I/O enclosure and port 3 of the ServerNet switches in the c7000 enclosure. Each IOMF 2 connection supports up to 2 TRSAs or 2 CCSA 2s. Only ServerNet switch port 3 supports S-series CO I/O connections, and all S-series connections are made to this one port.
NOTE: If CLIMs are already connected to port 3, you must move them to another port on the ServerNet switches and reconfigure them.
NonStop S-series CO I/O enclosures in NonStop Carrier Grade BladeSystems have these characteristics:
New NonStop S-series CO I/O enclosures can be installed in seismic racks. A customer that has existing S-series I/O enclosures in other types of racks can connect those I/O enclosures to a NonStop Carrier Grade BladeSystem, but any alarm panels from older systems cannot be used. NonStop Carrier Grade BladeSystems support the alarm panel described in HPE NonStop System Alarm Panel (page 96) only.

No 46xx disk drives are supported. Only the TRSA or CCSA 2 (with up to 4 SS7TE3 plug-in cards (PICs)) is supported. No other configurations of the CCSA 2 are supported, and no other ServerNet adapters are supported.
The CO I/O enclosure assembly requires a total of 20 U of contiguous space. The seismic rack can contain only one NonStop S-series CO I/O enclosure assembly.
CO I/O enclosures receive power directly from site power rail A and site power rail B. They do not connect to the fuse panel or to the breaker panel.
NOTE: If NonStop S-series CO I/O enclosures are present, CLIMs cannot be connected to port 3 of the ServerNet switches in the c7000 CG enclosure.

Fibre Channel Devices

This subsection describes Fibre Channel devices. The rack-mounted Fibre Channel disk module (FCDM) can only be used with NonStop i BladeSystems that have IOAM enclosures. An FCDM and its disk drives are controlled through the Fibre Channel ServerNet adapter (FCSA). For more information on the FCSA, refer to the Fibre-Channel ServerNet Adapter Installation and Support Guide. For more information on the Fibre Channel disk module (FCDM), refer to Fibre Channel Disk Module (FCDM) (page 24). For examples of cable connections between FCSAs and FCDMs, refer to Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module (page 180).
This illustration shows an FCSA with indicators and ports:
This illustration shows the locations of the hardware in the Fibre Channel disk module as well as the Fibre Channel port connectors at the back of the enclosure:

175 Fibre Channel disk modules connect to Fibre Channel ServerNet adapters (FCSAs) via Fiber Channel arbitrated loop (FC-AL) cables. This drawing shows the two Fibre Channel arbitrated loops implemented within the Fibre Channel disk module: Fibre Channel Disk Module Group-Module-Slot Numbering This table shows the default numbering for the Fibre Channel disk module: IOAM Enclosure FCDM Group Module Slot FCSA F-SACs Shelf Slot Item X fabric; 3 - Y fabric - 5, 2-4 if daisy-chained; if single disk enclosure Fibre Channel disk module Disk drive bays Transceiver A 90 Transceiver A2 9 Transceiver B 92 Transceiver B2 93 Left FC-AL board Fibre Channel Devices 75

176 IOAM Enclosure FCDM Group Module Slot FCSA F-SACs Shelf Slot Item 94 Right FC-AL board 95 Left power supply 96 Right power supply 97 Left blower 98 Right blower 99 EMU The form of the GMS numbering for a disk in a Fibre Channel disk module is: This example shows the disk in bay 03 of the Fibre Channel disk module that connects to the FCSA in the IOAM group, module 2, slot, FSAC : 76 Legacy Hardware (IOAM, FCDM, S-Series I/O Enclosure)
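Because the worked example above is not fully legible in this copy, the sketch below shows how a group-module-slot style identifier for an FCDM disk can be composed from the fields in the preceding table (IOAM group, module, FCSA slot, F-SAC, shelf, and bay). The sample values and the dotted rendering are hypothetical; use the actual GMS form reported by OSM or SCF on your system.

```python
# Minimal sketch: compose a group-module-slot style identifier for a disk in
# an FCDM from the fields in the table above. The sample values are
# hypothetical, and the dotted format is only one plausible rendering.

from dataclasses import dataclass

@dataclass
class FcdmDiskLocation:
    group: int    # IOAM group
    module: int   # 2 = X fabric, 3 = Y fabric
    slot: int     # FCSA slot in the IOAM enclosure
    fsac: int     # F-SAC on the FCSA
    shelf: int    # FCDM shelf (1 if a single disk enclosure)
    bay: int      # disk drive bay

    def gms(self) -> str:
        return f"{self.group}.{self.module}.{self.slot}.{self.fsac}.{self.shelf}.{self.bay}"

# Hypothetical example: a disk in bay 3 behind an FCSA on the X fabric.
print(FcdmDiskLocation(group=110, module=2, slot=1, fsac=1, shelf=1, bay=3).gms())
```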

177 IOAM Enclosure Group-Module-Slot Numbering A NonStop i BladeSystem supports IOAM enclosures, identified as group 0 through 5: IOAM Group Module Slot Port Fiber (EA) (EA) (EC) (EC) (EB) (EB) (ED) (ED) (EA) (EA) (EC) (EC) - 4 IOAM Group X ServerNet Module Y ServerNet Module Slot Item Port 0-5 (See preceding table.) 2 3 to 5 ServerNet adapters - n: where n is number of ports on adapter 4 ServerNet switch logic board - 4 5, 8 Power supplies - 6, 7 Fans - Fibre Channel Devices 77

This illustration shows the slot locations for the IOAM enclosure:

Factory-Default Disk Volume Locations for FCDMs

This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate Fibre Channel disk modules:
FCSA location and cable connections vary according to the various controller and Fibre Channel disk module combinations.

Configurations for Fibre Channel Devices

Storage subsystems in NonStop S-series systems used a fixed hardware layout. Each enclosure can have up to four controllers for storage devices and up to 16 internal disk drives. The controllers and disk drives always have a fixed logical location with standardized location IDs of group-module-slot. Only the group number changes, as determined by the enclosure position in the ServerNet topology. However, the NonStop BladeSystems have no fixed boundaries for the Fibre Channel hardware layout. Up to 60 FCSAs (or 120 ServerNet addressable controllers) and 240 Fibre Channel disk

enclosures can be configured, with identification depending on the ServerNet connection of the IOAM and the slot housing the FCSAs.

Configuration Restrictions for Fibre Channel Devices

These configuration restrictions apply and are enforced by the Subsystem Control Facility (SCF); a small sketch of these checks follows the recommendations list below:
Primary and mirror disk drives cannot connect to the same Fibre Channel loop. Loss of the Fibre Channel loop makes both the primary volume and the mirrored volume inaccessible. This configuration inhibits fault tolerance. Disk drives in different Fibre Channel disk modules on a daisy chain connect to the same Fibre Channel loop.
The primary path and backup Fibre Channel communication links to a disk drive should not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated system, loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel communications path. This configuration is allowed, but only if you override an SCF warning message.
The mirror path and mirror backup Fibre Channel communication links to a disk drive should not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated system, loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel communications path. This configuration is allowed, but only if you override an SCF warning message.

Recommendations for Fibre Channel Device Configuration

These recommendations apply to FCSA and Fibre Channel disk module configurations:
Primary Fibre Channel disk module connects to the FCSA F-SAC 1. Mirror Fibre Channel disk module connects to the FCSA F-SAC 2.
FC-AL port A1 is the incoming port from an FCSA or from another Fibre Channel disk module. FC-AL port A2 is the outbound port to another Fibre Channel disk module.
FC-AL port B2 is the incoming port from an FCSA or from a Fibre Channel disk module. FC-AL port B1 is the outbound port to another Fibre Channel disk module.
In a daisy-chain configuration, the ID expander harness determines the enclosure number. Enclosure 1 is always at the bottom of the chain.
FCSAs can be installed in slots 1 through 5 in an IOAM. G4SAs can be installed in slots 1 through 5 in an IOAM.
In systems with two or more racks, primary and mirror Fibre Channel disk modules reside in separate racks to prevent application or system outage if a power outage affects one rack. With primary and mirror Fibre Channel disk modules in the same rack, the primary Fibre Channel disk module resides in a lower U than the mirror Fibre Channel disk module.
Fibre Channel disk drives are configured with dual paths.
Where possible, FCSAs and Fibre Channel disk modules are configured with four FCSAs and four Fibre Channel disk modules for maximum fault tolerance. If FCSAs are not in groups of four, the remaining FCSAs and Fibre Channel disk modules can be configured in other fault-tolerant configurations, such as two FCSAs and two Fibre Channel disk modules or four FCSAs and three Fibre Channel disk modules.
In systems with one IOAM enclosure:
With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in module 2 of the IOAM enclosure, and the backup FCSA resides in module 3. (See the example configuration in Two FCSAs, Two FCDMs, One IOAM Enclosure (page 181).)
With four FCSAs and four Fibre Channel disk modules, FCSA 1 and FCSA 2 reside in module 2 of the IOAM enclosure, and FCSA 3 and FCSA 4 reside in module 3. (See

the example configuration in Four FCSAs, Four FCDMs, One IOAM Enclosure (page 181).)
In systems with two or more IOAM enclosures:
With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in IOAM enclosure 1, and the backup FCSA resides in IOAM enclosure 2. (See the example configuration in Two FCSAs, Two FCDMs, Two IOAM Enclosures (page 182).)
With four FCSAs and four Fibre Channel disk modules, FCSA 1 and FCSA 2 reside in IOAM enclosure 1, and FCSA 3 and FCSA 4 reside in IOAM enclosure 2. (See the example configuration in Four FCSAs, Four FCDMs, Two IOAM Enclosures (page 183).)
Daisy-chain configurations follow the same configuration restrictions and rules that apply to configurations that are not daisy-chained. (See Daisy-Chain Configurations (FCDMs) (page 184).)
Fibre Channel disk modules containing mirrored volumes must be installed in separate daisy chains.
Daisy-chained configurations require that all Fibre Channel disk modules reside in the same rack and be physically grouped together.
Daisy-chain configurations require an ID expander harness with terminators for proper Fibre Channel disk module and disk drive identification.
If only three Fibre Channel disk modules are available for four FCSAs, connect the three Fibre Channel disk modules to the four FCSAs as shown in the example configuration in Four FCSAs, Three FCDMs, One IOAM Enclosure (page 186).

Gigabit Ethernet 4-Port ServerNet Adapter (G4SA) Ethernet Ports

For information on the Ethernet ports on a G4SA installed in an IOAM enclosure, refer to the Gigabit Ethernet 4-Port Adapter (G4SA) Installation and Support Guide.

Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module

These subsections show various example configurations of FCSA controllers and Fibre Channel disk modules with IOAM enclosures.
NOTE: Although it is not a requirement for fault tolerance to house the primary and mirror disk drives in separate FCDMs, the example configurations show FCDMs housing only primary or mirror drives, mainly for simplicity in keeping track of the physical locations of the drives.
Two FCSAs, Two FCDMs, One IOAM Enclosure (page 181)
Four FCSAs, Four FCDMs, One IOAM Enclosure (page 181)
Two FCSAs, Two FCDMs, Two IOAM Enclosures (page 182)
Four FCSAs, Four FCDMs, Two IOAM Enclosures (page 183)
Daisy-Chain Configurations (FCDMs) (page 184)
Four FCSAs, Three FCDMs, One IOAM Enclosure (page 186)
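The restrictions and recommendations above reduce to a few checks that can be expressed directly. The sketch below screens one mirrored volume for the two SCF-flagged conditions described earlier: a shared Fibre Channel loop between primary and mirror, and primary/backup paths that land on FCSAs in the same IOAM module. The loop names and module numbers are hypothetical placeholders.

```python
# Minimal sketch of the SCF-style checks described above: a mirrored pair must
# not share a Fibre Channel loop, and its primary/backup paths should not land
# on FCSAs in the same IOAM module. Identifiers here are hypothetical.

def check_mirrored_volume(primary_loop, mirror_loop, path_modules):
    """Return a list of warnings for one mirrored disk volume.

    primary_loop / mirror_loop identify the FC-AL loop of each half of the
    mirror; path_modules maps path name -> IOAM module of the owning FCSA.
    """
    warnings = []
    if primary_loop == mirror_loop:
        warnings.append("primary and mirror share one FC loop (not fault tolerant)")
    if path_modules.get("primary") == path_modules.get("backup"):
        warnings.append("primary and backup paths use FCSAs in the same IOAM module")
    return warnings

print(check_mirrored_volume("loop-A", "loop-A", {"primary": 2, "backup": 2}))
print(check_mirrored_volume("loop-A", "loop-B", {"primary": 2, "backup": 3}))
```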

Two FCSAs, Two FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the two FCSAs and the primary and mirror Fibre Channel disk modules.

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations ($SYSTEM, $DSMSCM, $AUDIT, and $OSS, primary and mirror) in the configuration of two FCSAs, two Fibre Channel disk modules, and one IOAM enclosure. For an illustration of the factory-default slot locations for a Fibre Channel disk module, refer to Factory-Default Disk Volume Locations for FCDMs (page 178).

Four FCSAs, Four FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the four FCSAs and the two sets of primary and mirror Fibre Channel disk modules.

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations ($SYSTEM, $DSMSCM, $AUDIT, and $OSS, primary and mirror) in the configuration of four FCSAs, four Fibre Channel disk modules, and one IOAM enclosure. For an illustration of the factory-default slot locations for a Fibre Channel disk module, refer to Factory-Default Disk Volume Locations for FCDMs (page 178).

Two FCSAs, Two FCDMs, Two IOAM Enclosures

This illustration shows example cable connections between the two FCSAs split between two IOAM enclosures and one set of primary and mirror Fibre Channel disk modules.

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations ($SYSTEM, $DSMSCM, $AUDIT, and $OSS, primary and mirror) in the configuration of two FCSAs, two Fibre Channel disk modules, and two IOAM enclosures. For an illustration of the factory-default slot locations for a Fibre Channel disk module, refer to Factory-Default Disk Volume Locations for FCDMs (page 178).

Four FCSAs, Four FCDMs, Two IOAM Enclosures

This illustration shows example cable connections between the four FCSAs split between two IOAM enclosures and two sets of primary and mirror Fibre Channel disk modules.

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations ($SYSTEM, $DSMSCM, $AUDIT, and $OSS, primary and mirror) in the configuration of four FCSAs, four Fibre Channel disk modules, and two IOAM enclosures. For an illustration of the factory-default slot locations for a Fibre Channel disk module, refer to Factory-Default Disk Volume Locations for FCDMs (page 178).

Daisy-Chain Configurations (FCDMs)

When planning for possible use of daisy-chained disks, consider:

Daisy-Chained Disks Recommended:
- Cost-sensitive storage and applications using low-bandwidth disk I/O.
- Low-cost, high-capacity data storage is important.

Daisy-Chained Disks Not Recommended:
- Many volumes in a large Fibre Channel loop. The more volumes that exist in a larger loop, the higher the potential for negative impact from a failure that takes down a Fibre Channel loop.
- Applications with a highly mixed workload, such as transaction databases or applications with high disk I/O.

Requirements for Daisy-Chain:
- All daisy-chained Fibre Channel disk modules reside in the same rack and are physically grouped together.
- ID expander harness with terminators is installed for proper Fibre Channel disk module and drive identification.
- FCSA for each Fibre Channel loop is installed in a different IOAM module for fault tolerance.
- Two Fibre Channel disk modules minimum, with four Fibre Channel disk modules maximum per daisy chain.

See Fibre Channel Devices (page 174). (A planning sketch of these requirements appears at the end of this subsection.)

This illustration shows an example of cable connections between the two FCSAs and four Fibre Channel disk modules in a single daisy-chain configuration. A second equivalent configuration, including an IOAM enclosure, two FCSAs, and four Fibre Channel disk modules with an ID expander, is required for fault-tolerant mirrored disk storage. Installing each mirrored disk in the same corresponding FCDM and bay number as its primary disk is not required, but it is recommended to simplify the physical management and identification of the disks.

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations ($SYSTEM, $DSMSCM, $AUDIT, and $OSS) in a daisy-chained configuration. For an illustration of the factory-default slot locations for a Fibre Channel disk module, refer to Factory-Default Disk Volume Locations for FCDMs (page 178).
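To make the daisy-chain planning points concrete, here is a small, hypothetical Python sketch of the requirements listed above (two to four FCDMs per chain, one rack, an ID expander harness, the loop's FCSAs in different IOAM modules) plus the rule that mirrored volumes go in a separate daisy chain. The DaisyChain structure and its field names are assumptions for illustration only, not part of any HPE tool.

```python
from dataclasses import dataclass

@dataclass
class DaisyChain:
    """A proposed daisy chain of Fibre Channel disk modules (illustrative model)."""
    fcdm_racks: list          # rack name for each FCDM in the chain, bottom to top
    has_id_expander: bool     # ID expander harness with terminators installed
    fcsa_modules: tuple       # IOAM modules housing the two FCSAs for this loop, e.g. (2, 3)
    holds_primary: bool = False
    holds_mirror: bool = False

def check_daisy_chain(chain: DaisyChain) -> list:
    """Return planning findings based on the daisy-chain requirements above."""
    findings = []
    if not 2 <= len(chain.fcdm_racks) <= 4:
        findings.append("A daisy chain needs a minimum of two and a maximum of four FCDMs.")
    if len(set(chain.fcdm_racks)) > 1:
        findings.append("All daisy-chained FCDMs must reside in the same rack, grouped together.")
    if not chain.has_id_expander:
        findings.append("An ID expander harness with terminators is required for FCDM and drive identification.")
    if chain.fcsa_modules[0] == chain.fcsa_modules[1]:
        findings.append("The FCSAs for the loop should be installed in different IOAM modules.")
    if chain.holds_primary and chain.holds_mirror:
        findings.append("Primary and mirror volumes cannot share a daisy chain (same FC loop).")
    return findings

# Example: four FCDMs in one rack, properly equipped, holding primary volumes only.
chain = DaisyChain(fcdm_racks=["rack-1"] * 4, has_id_expander=True,
                   fcsa_modules=(2, 3), holds_primary=True)
print(check_daisy_chain(chain) or "No findings")
```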

Four FCSAs, Three FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the four FCSAs and three Fibre Channel disk modules with the primary and mirror drives split within each Fibre Channel disk module.

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default disk volumes ($SYSTEM, $DSMSCM, $AUDIT, and $OSS, primary and mirror) for the configuration of four FCSAs, three Fibre Channel disk modules, and one IOAM enclosure.

This illustration shows the factory-default locations for the configurations of four FCSAs and three Fibre Channel disk modules where the primary system file disk volumes are in Fibre Channel disk module 1.

This illustration shows the factory-default locations for the configurations of four FCSAs with three Fibre Channel disk modules where the mirror system file disk volumes are in Fibre Channel disk module 3.

G UPS and Data Center Power Configurations

This appendix provides examples of UPS and data center power configurations, and:

- Specifies the UPS configurations supported on the NonStop i BladeSystem, including the recommended UPS configuration for when the disk drive write cache is enabled.
- Identifies the non-supported UPS configurations that should not be used with the NonStop i BladeSystem when the disk drive write cache is enabled.
- Explains why some configurations are not supported.
- Informs you of what you must do to prevent data loss.

NOTE: All example UPS configuration illustrations in this appendix show NonStop i BladeSystem hardware, but these configurations are supported on all NonStop platforms and can be used with single-phase and three-phase UPS.

IMPORTANT: You must change the ride-through time for a Hewlett Packard Enterprise-supported UPS from the manufacturing default setting to an appropriate value for your system. During installation of the NonStop i BladeSystem or UPS, your service provider can refer to the "Setting the Ride-Through Time and Configuring for Maximized Runtime" procedure in the NonStop i BladeSystem Hardware Manual for these instructions.

Supported UPS Configurations

These are the supported UPS configurations for a NonStop BladeSystem:

- NonStop i BladeSystem With a Fault-Tolerant Data Center (page 189)
- NonStop i BladeSystem With a Rack-Mounted UPS (page 190)
- SAS Disk Enclosures With a Rack-Mounted UPS (page 191)

NonStop i BladeSystem With a Fault-Tolerant Data Center

In this supported configuration, the NonStop i BladeSystem is installed in a Tier-IV data center. The data center tier classification is defined by the Uptime Institute™ Tier Classifications Define Site Infrastructure White Paper.

IMPORTANT: With this configuration, you can guarantee the data center never loses power.

Figure 53 shows an example of a NonStop i BladeSystem in a fault-tolerant data center which has two simultaneously-active power distribution paths with multiple backup UPS and engine-generator systems.

Figure 53 NonStop i BladeSystem With a Fault-Tolerant Data Center

NonStop i BladeSystem With a Rack-Mounted UPS

Figure 54 shows an example of a supported configuration in a NonStop i BladeSystem with the left PDUs connected to one or more rack-mounted UPS, and the right PDUs connected directly to the utility power. The rack-mounted UPS is connected to the utility power.

Figure 54 NonStop i BladeSystem With a Rack-Mounted UPS

When OSM detects that one power rail is running on UPS and the other power rail has lost power, OSM logs an event indicating the beginning of the configured ride-through time period. OSM monitors whether AC power is restored before the ride-through period ends. If AC power is restored before the ride-through period ends, the ride-through countdown terminates and OSM does not take further steps to prepare for an outage. If AC power is not restored before the ride-through period ends, OSM broadcasts a PFAIL_SHOUT command to all processors (the processor running OSM being the last one in the queue) to shut down the system ServerNet routers and processors. The PFAIL_SHOUT command enables disk writes for data that is in transit through controllers and disks to complete.
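The ride-through handling described above is essentially a timed decision: start a countdown when one rail is on UPS and the other rail has lost power, cancel it if AC power returns, and otherwise begin the orderly shutdown. The sketch below illustrates that sequence only; the function and callback names are assumptions for this example and do not represent OSM's implementation.

```python
import time

def ride_through(ride_through_seconds, ac_power_restored, broadcast_pfail_shout, log_event,
                 poll_interval=1.0):
    """Sketch of the ride-through sequence described above (illustrative only).

    ac_power_restored: callable returning True once utility AC is back on the failed rail.
    broadcast_pfail_shout: callable that asks every processor to shut down ServerNet
        routers and processors, letting in-transit disk writes complete.
    """
    log_event("One power rail on UPS, other rail lost power: ride-through countdown started.")
    deadline = time.monotonic() + ride_through_seconds
    while time.monotonic() < deadline:
        if ac_power_restored():
            log_event("AC power restored before ride-through expired: countdown cancelled.")
            return "no-action"
        time.sleep(poll_interval)
    log_event("Ride-through expired: broadcasting PFAIL_SHOUT to all processors "
              "(the processor running the monitor goes last).")
    broadcast_pfail_shout()
    return "shutdown"

# Example with stub callbacks: power never returns, so the shutdown path is taken.
result = ride_through(
    ride_through_seconds=3,
    ac_power_restored=lambda: False,
    broadcast_pfail_shout=lambda: None,
    log_event=print,
)
print(result)
```

In practice, the ride-through value itself is set by your service provider using the Hardware Manual procedure referenced in the IMPORTANT note earlier in this appendix.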

SAS Disk Enclosures With a Rack-Mounted UPS

In this supported configuration, only SAS Disk Enclosures are protected by the rack-mounted UPS. The rack(s) with SAS Disk Enclosures and/or Storage CLIMs are supported by one or more rack-mounted UPS.

Figure 55 shows an example of this supported configuration in a NonStop i BladeSystem with the left PDUs connected to the rack-mounted UPS, and the right PDUs connected to the utility power. The rack-mounted UPS is connected to the utility power.

Figure 55 SAS Disk Enclosures With a Rack-Mounted UPS

When the utility power fails, the NonStop i BladeSystem powers off without an OSM-initiated controlled shutdown of the I/O operations and processors. Only the products in the rack with the rack-mounted UPS remain powered on. Data for all completed disk write transactions is written to the disk drive media or to the disk drive write cache, and the rack-mounted UPS provides extended time for the disk drives to transfer the data from their write cache to the media, preventing loss of data.

Non-Supported UPS Configurations

This section identifies non-supported UPS configurations and explains why these configurations are not supported. It also explains what you must do to prevent data loss. A summary sketch of the supported and non-supported configurations follows the list below.

CAUTION: If disk drive write caching is enabled in the NonStop BladeSystem, do not use any of these non-supported UPS configurations. They might result in data loss.

- NonStop i BladeSystem With a Data Center UPS, Single Power Rail (page 193)
- NonStop i BladeSystem With Data Center UPS, Both Power Rails (page 194)
- NonStop i BladeSystem With Rack-Mounted UPS and Data Center UPS in Parallel (page 195)
- NonStop i BladeSystem With Two Rack-Mounted UPS in Parallel (page 196)
- NonStop i BladeSystem with Cascading Rack-Mounted UPS and Data Center UPS (page 197)
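Taken together, the supported and non-supported configurations in this appendix form a small decision table. The Python sketch below is an illustrative summary only; the field names and category strings are assumptions, and it does not model the Tier-IV fault-tolerant data center configuration described earlier, which is handled separately.

```python
def classify_ups_configuration(left_rail, right_rail, cascaded=False, write_cache_enabled=True):
    """Classify a planned power configuration per this appendix (illustrative sketch).

    left_rail / right_rail: one of "utility", "rack_ups", "dc_ups", describing what
    feeds each set of PDUs. cascaded=True means a rack-mounted UPS is fed from a
    data center UPS.
    """
    rails = {left_rail, right_rail}
    if cascaded:
        return "not supported: cascading rack-mounted UPS and data center UPS"
    if rails == {"rack_ups", "utility"}:
        return "supported: rack-mounted UPS on one rail, utility power on the other"
    if rails == {"dc_ups", "utility"} or rails == {"dc_ups"}:
        advice = " (disable WCE to prevent data loss)" if write_cache_enabled else ""
        return "not supported: data center UPS without OSM monitoring" + advice
    if rails == {"rack_ups", "dc_ups"}:
        return "not supported: rack-mounted UPS and data center UPS in parallel"
    if rails == {"rack_ups"}:
        return "not supported: two rack-mounted UPS in parallel"
    return "review required: configuration not covered by this appendix"

print(classify_ups_configuration("rack_ups", "utility"))   # supported
print(classify_ups_configuration("dc_ups", "dc_ups"))      # not supported, WCE advice
print(classify_ups_configuration("rack_ups", "rack_ups"))  # not supported
```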

NonStop i BladeSystem With a Data Center UPS, Single Power Rail

Figure 56 shows an example of a non-supported configuration in a NonStop i BladeSystem with the left PDUs directly connected to the utility power, and the right PDUs connected to the data center UPS. In this configuration, OSM does not manage or monitor the data center UPS.

Figure 56 NonStop i BladeSystem With a Data Center UPS, Single Power Rail

When the utility power fails, OSM does not detect that the data center UPS is running on battery and that the UPS has entered its battery runtime. OSM does not initiate the controlled shutdown of the I/O operations and processors. If the utility power is not restored before the data center UPS shuts down, any data in the NonStop i BladeSystem disk drive write cache that has not been transferred to the disk drive media is lost.

To prevent data loss during a utility power failure, you must manually disable the Write Cache Enable (WCE) option on all the disk drive volumes. For information on the WRITECACHE disk attribute and how to disable WCE, refer to the SCF Reference Manual for the Storage Subsystem (G06.28+, H06.05+, J06.03+). When the utility power is restored, you can enable WCE again. During the utility power failure, the system can continue to run until the data center UPS runs out of power or until it shuts down.

NonStop i BladeSystem With Data Center UPS, Both Power Rails

In this non-supported configuration, a NonStop i BladeSystem is installed in a Tier-I, Tier-II, or Tier-III data center. The data center tier classification is defined by the Uptime Institute™ Tier Classifications Define Site Infrastructure White Paper.

IMPORTANT: If you use this configuration, there is no guarantee your data center will never lose power.

Figure 57 shows an example of this non-supported configuration in a NonStop i BladeSystem with the left and right PDUs connected to the data center UPS. In this configuration, OSM does not manage or monitor the data center UPS.

Figure 57 NonStop i BladeSystem With Data Center UPS, Both Power Rails

When the utility power fails, OSM does not detect that the data center UPS is running on battery and that the UPS has entered its battery runtime. OSM does not initiate the controlled shutdown of the I/O operations and processors. If the utility power is not restored before the data center UPS shuts down, any data in the NonStop i BladeSystem disk drive write cache that has not been transferred to the disk drive media is lost.

To prevent data loss during a utility power failure, you must manually disable the Write Cache Enable (WCE) option on all the disk drive volumes. For information on the WRITECACHE disk attribute and how to disable WCE, see the SCF Reference Manual for the Storage Subsystem (G06.28+, H06.05+, J06.03+). When the utility power is restored, you can enable WCE again. During a utility power failure, the system can continue to run until the data center UPS runs out of power or until it shuts down.

NonStop i BladeSystem With Rack-Mounted UPS and Data Center UPS in Parallel

Figure 58 shows an example of a non-supported configuration in a NonStop i BladeSystem with the left PDUs connected to the rack-mounted UPS, and the right PDUs connected to the data center UPS. In this configuration, OSM manages and monitors the rack-mounted UPS. However, OSM does not manage or monitor the data center UPS.

Figure 58 NonStop i BladeSystem With Rack-Mounted UPS and Data Center UPS in Parallel

When the utility power fails, OSM detects a UPS AC Input not Present event from the rack-mounted UPS. OSM does not recognize the data center UPS, so it does not detect that the data center UPS is running on battery and that the UPS has entered its battery runtime. OSM does not initiate the controlled shutdown of the I/O operations and processors. The rack-mounted UPS shuts down before the data center UPS. If the utility power is not restored before the data center UPS shuts down, any data in the NonStop i BladeSystem disk drive write cache that has not been transferred to the disk drive media is lost.

To prevent data loss during a utility power failure, you must manually disable the Write Cache Enable (WCE) option on all the disk drive volumes. For information on the WRITECACHE disk attribute and how to disable WCE, see the SCF Reference Manual for the Storage Subsystem (G06.28+, H06.05+, J06.03+). When the utility power is restored, you can enable WCE again. During the utility power failure, the system can continue to run until the data center UPS runs out of power or until it shuts down.

NonStop i BladeSystem With Two Rack-Mounted UPS in Parallel

Figure 59 shows an example of a non-supported configuration in a NonStop i BladeSystem with the left PDUs connected to a rack-mounted UPS, and the right PDUs connected to a different rack-mounted UPS. In this configuration, OSM manages and monitors the rack-mounted UPSs.

Figure 59 NonStop i BladeSystem With Two Rack-Mounted UPS in Parallel

OSM has the capability to monitor both UPSs, but it does not have the logic to initiate the controlled shutdown of the I/O operations and processors when utility power fails in this configuration. If the utility power is not restored before both rack-mounted UPSs shut down, any data in the NonStop i BladeSystem disk drive write cache that has not been transferred to the disk drive media is lost.

If you want to extend the UPS battery runtime, Hewlett Packard Enterprise recommends adding Extended Runtime Modules (ERMs) to the UPS.

NonStop i BladeSystem with Cascading Rack-Mounted UPS and Data Center UPS

Figure 60 shows an example of a non-supported configuration in a NonStop i BladeSystem with the left PDUs connected to the rack-mounted UPS, and the right PDUs connected to a data center UPS. To create a cascading UPS configuration, the rack-mounted UPS is connected to the data center UPS.

Figure 60 NonStop i BladeSystem With Cascading UPS

A cascading UPS configuration presents potential problems. Problems attaining stability between the pair of cascaded UPSs can cause unexpected and undesirable behavior, because the control loops of the two UPSs can interfere with each other. A typical scenario where this behavior occurs is the failure of the smaller downstream UPS to recognize a stable input from its upstream source. In the event of an upstream UPS failure or output disturbance, the downstream UPS switches the load to battery. Once the upstream UPS regains full function, the downstream UPS should recognize a stable input and switch to pass-through mode, but this does not happen in all cases. In a failing case, the downstream UPS does not switch back to pass-through mode and instead runs from the battery until the battery set is drained. Once the battery is drained, the downstream UPS must attempt to switch back to pass-through mode. At a minimum, this leaves the downstream UPS with depleted batteries.
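The cascading failure sequence just described can be traced as a short list of downstream-UPS states. The following is only a simplified illustration of the narrative above, with an assumed flag for whether the downstream UPS recognizes a stable input; it does not model any particular UPS firmware.

```python
def downstream_ups_sequence(upstream_recovers, recognizes_stable_input):
    """Trace the downstream (rack-mounted) UPS states described above (illustrative only).

    upstream_recovers: True if the upstream (data center) UPS regains full function.
    recognizes_stable_input: True in the well-behaved case where the downstream UPS
        accepts the recovered input and returns to pass-through mode.
    """
    states = ["pass-through"]
    # Upstream failure or output disturbance: downstream switches the load to battery.
    states.append("on-battery")
    if upstream_recovers and recognizes_stable_input:
        states.append("pass-through")           # the expected, well-behaved outcome
    else:
        states.append("on-battery (draining)")  # failing case: control loops interfere
        states.append("batteries depleted")
        states.append("attempt return to pass-through")
    return states

print(downstream_ups_sequence(upstream_recovers=True, recognizes_stable_input=True))
print(downstream_ups_sequence(upstream_recovers=True, recognizes_stable_input=False))
```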
