HP Integrity NonStop BladeSystem Planning Guide


HP Integrity NonStop BladeSystem Planning Guide
HP Part Number:
Published: November 2012
Edition: J06.13 and subsequent J-series RVUs

Copyright 2012 Hewlett-Packard Development Company, L.P.

Legal Notice

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Export of the information contained in this publication may require authorization from the U.S. Department of Commerce.

Microsoft, Windows, and Windows NT are U.S. registered trademarks of Microsoft Corporation. Intel, Pentium, and Celeron are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Java is a registered trademark of Oracle and/or its affiliates. Motif, OSF/1, UNIX, X/Open, and the "X" device are registered trademarks, and IT DialTone and The Open Group are trademarks of The Open Group in the U.S. and other countries.

Open Software Foundation, OSF, the OSF logo, OSF/1, OSF/Motif, and Motif are trademarks of the Open Software Foundation, Inc. OSF MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THE OSF MATERIAL PROVIDED HEREIN, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. OSF shall not be liable for errors contained herein or for incidental consequential damages in connection with the furnishing, performance, or use of this material.

1990, 1991, 1992, 1993 Open Software Foundation, Inc. The OSF documentation and the OSF software to which it relates are derived in part from materials supplied by the following: 1987, 1988, 1989 Carnegie-Mellon University. 1989, 1990, 1991 Digital Equipment Corporation. 1985, 1988, 1989, 1990 Encore Computer Corporation. 1988 Free Software Foundation, Inc. 1987, 1988, 1989, 1990, 1991 Hewlett-Packard Company. 1985, 1987, 1988, 1989, 1990, 1991, 1992 International Business Machines Corporation. 1988, 1989 Massachusetts Institute of Technology. 1988, 1989, 1990 Mentat Inc. 1988 Microsoft Corporation. 1987, 1988, 1989, 1990, 1991, 1992 SecureWare, Inc. 1990, 1991 Siemens Nixdorf Informationssysteme AG. 1986, 1989, 1996, 1997 Sun Microsystems, Inc. 1989, 1990, 1991 Transarc Corporation.

OSF software and documentation are based in part on the Fourth Berkeley Software Distribution under license from The Regents of the University of California. OSF acknowledges the following individuals and institutions for their role in its development: Kenneth C.R.C. Arnold, Gregory S. Couch, Conrad C. Huang, Ed James, Symmetric Computer Systems, Robert Elz. 1980, 1981, 1982, 1983, 1985, 1986, 1987, 1988, 1989 Regents of the University of California.

3 Contents About This Document...2 Supported Release Version Updates (RVUs)...2 Intended Audience...2 New and Changed Information for New and Changed Information for New and Changed Information for New and Changed Information for New and Changed Information for New and Changed Information for Document Organization...6 Notation Conventions...7 General Syntax Notation...7 Publishing History...9 HP Encourages Your Comments...9 I NonStop BladeSystem Series Overview...20 II NonStop BladeSystems NB50000c, NB54000c...2 NonStop BladeSystems Overview...22 Migrating Commercial and Carrier Grade BladeSystems...24 NonStop BladeSystem Platform Configurations NB50000c and NB54000c...25 NonStop BladeSystem Hardware NB50000c and NB54000c...27 c7000 Enclosure...29 NonStop Server Blades...3 CLuster I/O Modules (CLIMs)...3 IP CLuster I/O Module (CLIM)...33 Telco CLuster I/O Module (CLIM)...35 IB CLuster I/O Module (CLIM)...36 Storage CLuster I/O Module (CLIM)...38 SAS Disk Enclosures...40 CLIM Cable Management Ethernet Patch Panel...4 IOAM Enclosure...4 Fibre Channel Disk Module (FCDM)...42 Maintenance Switch...42 BladeSystem Connections to Maintenance Switch...42 CLIM Connections to Maintenance Switch...42 IOAM Enclosure Connections to Maintenance Switch...42 System Console...42 UPS and ERM (Optional)...43 UPS and ERM for a Single-Phase Power Configuration (Optional)...43 UPS and ERM for a Three-Phase Power Configuration (Optional)...44 Enterprise Storage System (Optional)...45 Tape Drive and Interface Hardware (Optional)...46 NonStop S-Series I/O Enclosures (Optional)...46 Preparation for Other Server Hardware...47 Management Tools for NonStop BladeSystems...47 OSM Package...47 Onboard Administrator (OA)...47 Integrated Lights Out (ilo)...47 Cluster I/O Protocols (CIP) Subsystem...48 Subsystem Control Facility (SCF) Subsystem...48 Component Location and Identification...48 Contents 3

4 Terminology...48 Rack and Offset Physical Location...49 ServerNet Switch Group-Module-Slot Numbering...49 NonStop Server Blade Group-Module-Slot Numbering...50 CLIM Enclosure Group-Module-Slot-Port-Fiber Numbering...5 IOAM Enclosure Group-Module-Slot Numbering...5 Fibre Channel Disk Module Group-Module-Slot Numbering...53 System Installation Document Packet...54 Technical Document for the Factory-Installed Hardware Configuration...54 Configuration Forms for CLIMs and the ServerNet Adapters Core Licensing NB54000c and NB54000c-cg...55 Core Licensing Introduction...55 Core Licensing Feature...55 Migration and Expansion Verifying the Core License File on Pre-J06.3 RVUs...55 When to Order a New Core License File...55 Ordering the License File for NonStop BladeSystems NB54000c or NB54000c-cg...56 OSM Install Core License File guided procedure...56 Core License Validation Tool (CLICVAL)...56 Core License File Requirements and Attributes...57 Displaying Processor Core and Licensing Information...58 What to do if the Core License File is Missing or Invalid Core License File is Missing or Invalid at Coldload Core License File is Missing or Invalid at Coldload...59 What to do if Processors Fail During Coldload with %70 Halt Code...59 Task Checklist for Core License Scenarios Site Preparation Guidelines NB50000c and NB54000c...62 Modular Cabinet Power and I/O Cable Entry...62 Emergency Power-Off Switches...62 EPO Requirement for NonStop BladeSystems...62 EPO Requirement for HP R5000 and R5500 XR UPS...62 EPO Requirement for HP R2000/3 UPS...63 Electrical Power and Grounding Quality...63 Power Quality...63 Grounding Systems...63 Power Consumption...64 Uninterruptible Power Supply (UPS)...64 Cooling and Humidity Control...65 Weight...66 Flooring...66 Dust and Pollution Control...66 Zinc Particulates...66 Space for Receiving and Unpacking the System...66 Operational Space System Installation Specifications NB50000c and NB54000c...68 Modular Cabinets...68 Power Distribution for NonStop BladeSystems in Intelligent Racks...68 Power Distribution Unit (PDU) Types for an Intelligent Rack...69 Four Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)...7 Four Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL)...72 Four Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL)...73 Two Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)...74 Two Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL) Contents

5 Two Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL)...76 Four Modular PDUs Without UPS (NA and JPN, Single-Phase and Three-Phase)...77 Four Modular PDUs With Single-Phase UPS (NA/JPN and INTL)...78 Four Modular PDUs With Three-Phase UPS (NA/JPN and INTL)...79 Two Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)...80 Two Modular PDU Connections With Single-Phase UPS (NA/JPN and INTL)...8 Two Modular PDU Connections With Three-Phase UPS (NA/JPN and INTL)...82 AC Power Feeds in an Intelligent Rack...83 AC Input Power for Modular Cabinets Intelligent Rack...90 Enclosure AC Input...93 Enclosure Power Loads...94 Dimensions and Weights...95 Plan View of the 42U Modular Cabinet G2 and Intelligent Racks...96 Service Clearances for Modular Cabinets...96 Unit Sizes U G2 Rack Modular Cabinet Physical Specifications U Intelligent Rack Modular Cabinet Physical Specifications...97 Enclosure Dimensions...98 Modular Cabinet and Enclosure Weights With Worksheet...99 Modular Cabinet Stability...0 Environmental Specifications...0 Heat Dissipation Specifications and Worksheet NB50000c...02 Heat Dissipation Specifications and Worksheet NB54000c...02 Operating Temperature, Humidity, and Altitude...03 Nonoperating Temperature, Humidity, and Altitude...03 Cooling Airflow Direction...04 Blanking Panels...04 Typical Acoustic Noise Emissions...04 Tested Electrostatic Immunity...04 Calculating Specifications for Enclosure Combinations NB50000c...04 Calculating Specifications for Enclosure Combinations NB54000c System Configuration Guidelines NB50000c and NB54000c...08 Internal ServerNet Interconnect Cabling...08 Dedicated Service LAN Cables...08 Length Restrictions for Optional Cables Connecting Outside the Modular Cabinet...08 Cable Product IDs...08 ServerNet Fabric and Supported Connections...08 ServerNet Cluster Connections...08 BladeCluster Solution Connections...09 ServerNet Fabric Cross-Link Connections...09 Interconnections Between c7000 Enclosures...09 I/O Connections (Standard and High I/O ServerNet Switch Configurations)...09 Connections to IOAM Enclosures...0 Connections to CLIMs... Connections to NonStop S-Series I/O Enclosures... NonStop BladeSystem Port Connections... Fibre Channel Ports to Fibre Channel Disk Modules... Fibre Channel Ports to Fibre Tape Devices... SAS Ports to SAS Disk Enclosures... SAS Ports to SAS Tape Devices... Storage CLIM Devices...2 Factory-Default Disk Volume Locations for SAS Disk Devices...3 Configuration Restrictions for Storage CLIMs...4 Configurations for Storage CLIM and SAS Disk Enclosures...4 Contents 5

6 DL385 G2 or G5 Storage CLIM and SAS Disk Enclosure Configurations...4 DL380 G6 Storage CLIM and SAS Disk Enclosure Configurations...6 Fibre Channel Devices...20 Factory-Default Disk Volume Locations for FCDMs...22 Configurations for Fibre Channel Devices...22 Configuration Restrictions for Fibre Channel Devices...23 Recommendations for Fibre Channel Device Configuration...23 Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module...24 Two FCSAs, Two FCDMs, One IOAM Enclosure...24 Four FCSAs, Four FCDMs, One IOAM Enclosure...25 Two FCSAs, Two FCDMs, Two IOAM Enclosures...26 Four FCSAs, Four FCDMs, Two IOAM Enclosures...27 Daisy-Chain Configurations (FCDMs)...28 Four FCSAs, Three FCDMs, One IOAM Enclosure...29 InfiniBand to Networks...3 Ethernet to Networks...3 IP CLIM Ethernet Interfaces...32 Telco CLIM Ethernet Interfaces...34 Gigabit Ethernet 4-Port ServerNet Adapter (G4SA) Ethernet Ports...34 Managing NonStop BladeSystem Resources...35 HP Power Regulator for NonStop BladeSystems NB54000c and NB54000c-cg...35 Changing Customer Passwords...35 Change the Onboard Administrator (OA) Password...36 Change the CLIM ilo Password...36 Change the Maintenance Interface (Eth0) Password...37 Change the Interconnect Ethernet Switch Password...37 Change the NonStop ServerBlade MP (ilo) Password NB50000c and NB54000c...39 Change the Remote Desktop Password...39 Default Naming Conventions...40 Possible Values of Disk and Tape LUNs Hardware Configuration in Modular Cabinets...42 Maximum Number of Modular Components NB50000c and NB54000c...42 Enclosure Locations in Cabinets NB50000c and NB54000c...42 III NonStop BladeSystem NB50000c-cg and NB54000c-cg (Carrier Grade) NonStop BladeSystems Carrier Grade Overview...44 NEBS Required Statements...46 NonStop BladeSystem NB50000c-cg and NB54000c-cg Hardware...46 Seismic Rack...47 c7000 CG Enclosure...47 DC Input Module Numbering...48 Power Supply Bay Numbering...49 Power Supply LEDs...49 Cable Connections...50 IP CLIM CG...50 Telco CLIM CG...50 CLIM CG Cable Management Ethernet Patch Panel...5 Storage CLIM CG and Storage Products...5 Storage CLIM CG...5 M8390-2CG SAS Disk Enclosure...52 M839-24CG SAS Disk Enclosure SE DAT Tape Unit...53 Enterprise Storage System (ESS)...53 Breaker Panel Contents

7 Fuse Panel...54 System Alarm Panel...55 System Alarm Panel Connections...57 CG Maintenance Switch...57 Cable Connections...58 System Console...58 NonStop S-Series CO I/O Enclosures (Optional) Site Preparation Guidelines NB50000c-cg and NB54000c-cg...60 Modular Cabinet Power and I/O Cable Entry...60 Emergency Power-Off Connections...60 Electrical Power and Grounding Quality...60 Power Quality...60 Power Consumption...6 Cooling and Humidity Control...6 Cooling Airflow Direction...6 Weight...6 Flooring...6 Dust and Pollution Control...62 Zinc Particulates...62 Space for Receiving and Unpacking the System...62 Operational Space...62 Communications Interference...63 Electrostatic Discharge...63 Preventing electrostatic discharge...63 Grounding Methods to Prevent Electrostatic Discharge System Installation Specifications NB50000c-cg and NB54000c-cg...65 DC Power Distribution NB50000c-cg and NB54000c-cg...65 Enclosure Power Loads NB50000c-cg and NB54000c-cg...66 Breaker Panel and Fuse Panel Power Specifications NB50000c-cg and NB54000c-cg...67 Dimensions and Weights NB50000c-cg and NB54000c-cg...68 Seismic Rack Specifications NB50000c-cg and NB54000c-cg...68 Floor Space Requirements...68 Unit Sizes NB50000c-cg and NB54000c-cg...68 Enclosure Dimensions NB50000c-cg and NB54000c-cg...69 Cabinet and Enclosure Weights Worksheet NB50000c-cg and NB54000c-cg...70 Environmental Specifications NB50000c-cg and NB54000c-cg...70 Heat Dissipation Specifications and Worksheets Carrier Grade...7 Operating Temperature, Humidity, and Altitude...73 Power Load Worksheet NB50000c-cg and NB54000c-cg...73 Enclosures in Seismic Racks...73 Sample Configuration Carrier Grade System Configuration Guidelines NB50000c-cg and NB54000c-cg...76 A Maintenance and Support Connectivity...77 Dedicated Service LAN...77 Fault-Tolerant LAN Configuration...77 DHCP, TFTP, and DNS Services...79 IP Addresses...79 Ethernet Cables...84 SWAN Concentrator Restrictions...84 Dedicated Service LAN Links Using G4SAs...84 Dedicated Service LAN Links Using IP CLIMs...84 Dedicated Service LAN Links Using Telco CLIMs...85 Initial Configuration for a Dedicated Service LAN...85 Contents 7

8 System Consoles...85 System Console Configurations...86 Primary and Backup System Consoles Managing One System...86 Primary and Backup System Consoles Managing Multiple Systems...87 B Cables for BladeSystems...88 Cable Types, Connectors, Length Restrictions, and Product IDs...88 Maximum Supported Cable Length for All Connections...9 C Operations and Management Using OSM Applications...93 System-Down OSM Low-Level Link...93 AC Power Monitoring...94 How OSM Power Failure Support Works...94 Considerations for Ride-Through Time Configuration...95 Considerations for Site UPS Configurations...95 AC Power-Fail States...96 D Default Startup Characteristics NB50000c and NB54000c...97 E Site Power Cables Carrier Grade Required Documentation Requirements for Site Power or Ground Cables F Power Configurations for NonStop BladeSystems in G2 Racks...20 NonStop BladeSystem Power Distribution G2 Rack...20 NonStop BladeSystem Single-Phase Power Distribution G2 Rack...20 Single-Phase Power Setup, Monitored PDUs G2 Rack International (INTL) Monitored Single-Phase Power Configuration in a G2 Rack...2 Single-Phase Power Setup in a G2 Rack, Modular PDU...29 NonStop BladeSystem Three-Phase Power Distribution in a G2 Rack...23 Three-Phase Power Setup in a G2 Rack, Monitored PDUs...23 Three-Phase Power Setup in a G2 Rack, Modular PDUs Enclosure AC Input G2 Rack Phase Load Balancing Safety and Compliance Index Contents

9 Figures Example of NonStop BladeSystems (Front Views) Example of NonStop BladeSystem NB54000c with 6 Processors (Front View) Flexible Processor Bay Configuration Example of NonStop BladeSystem NB54000c with 6 Processors (Front View) Sequential Numbering c7000 Enclosure Features Ethernet Patch Panel Connections Between Storage CLIMs and ESS Four ipdus Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) Four ipdus With Single-Phase UPS (NA/JPN and INTL) Four ipdus With Three-Phase UPS (NA/JPN and INTL) Two Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)...74 Two Intelligent PDUs With Single-Phase (NA/JPN and INTL) Two Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL) Four Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) Four Modular PDUs With Single-Phase UPS (NA/JPN and INTL) Four Modular PDUs With Three-Phase UPS (NA/JPN and INTL) Two Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) Two Modular PDUs With a Single-Phase UPS (NA/JPN and INTL) Two Modular PDUs With a Three-Phase UPS (NA/JPN and INTL) Example of Bottom AC Power Feed in an Intelligent Rack (Without UPS) Example of Top AC Power Feed in an Intelligent Rack (Without UPS) Example of Top AC Power Feed in an Intelligent Rack (With Single-Phase UPS) Example of Bottom AC Power Feed in an Intelligent Rack (With Single-Phase UPS) Example of Top AC Power Feed in an Intelligent Rack (With Three-Phase UPS) Example of Bottom AC Power Feed in an Intelligent Rack (With Three-Phase UPS) ServerNet Switch Standard I/O Supported Connections ServerNet Switch High I/O Supported Connections Two DL385 G2 or G5 Storage CLIMs, Two MSA70 SAS Disk Enclosure Configuration Four DL385 G2 or G5 Storage CLIMs, Four MSA70 SAS Disk Enclosure Configuration Two DL380 G6 Storage CLIMs, Two D2700 SAS Disk Enclosure Configuration Two DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosure Configuration Four DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosure Configuration Four DL380 G6 Storage CLIMs, Eight D2700 SAS Disk Enclosure Configuration Ethernet Patch Panel for CLIM CG M8390-2CG SAS Disk Enclosure, Front M8390-2CG SAS Disk Enclosure, Back M CG SAS Disk Enclosure, Front M CG SAS Disk Enclosure, Back SE DAT Tape Unit Breaker Panel Front View Breaker Panel Rear View Fuse Panel System Alarm Panel Front System Alarm Panel Rear CG Maintenance Switch, Front CG Maintenance Switch, Back DC Power Distribution for Sample Single Cabinet System Sample Power Distribution for Seismic Rack and Carrier Grade Cabinet Example of a Fault-Tolerant LAN Configuration With Two Maintenance Switches North America/Japan Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5500 XR UPS...203

10 50 North America/Japan Monitored Single-Phase Power Setup in a G2 Rack (With Rack-Mounted R5000 UPS) North America/Japan Monitored Single-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS) Bottom AC Power Feed, Single-Phase NA/JPN Monitored PDUs Top AC Power Feed in a G2 Rack NA/JPN Single-Phase Monitored PDUs International Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS International Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5500 XR UPS International Monitored Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS Bottom AC Power Feed in a G2 Rack, Single-Phase International Monitored PDUs Top AC Power Feed in a G2 Rack, Single-Phase International Monitored PDUs North America/Japan Modular Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS North America/Japan Modular Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5500 XR UPS North America/Japan Modular Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS Bottom AC Power Feed in a G2 Rack Single-Phase NA/JPN Modular PDUs Top AC Power Feed in a G2 Rack NA/JPN Single-Phase Modular PDUs North America/Japan Monitored 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS North America/Japan Monitored 3-Phase Power Setup Without Rack-Mounted UPS Bottom AC Power Feed in a G2 Rack, Three-Phase (Monitored NA/JPN PDUs) Top AC Power Feed in a G2 Rack, Three-Phase (Monitored NA/JPN PDUs) International Monitored 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS International Monitored 3-Phase Power Setup Without in a G2 Rack Rack-Mounted UPS North America/Japan Modular 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS North America/Japan Modular 3-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS Bottom AC Power Feed in G2 Rack, Three Phase (NA/JPN Modular PDUs) Top AC Power Feed in G2 Rack, Three-Phase (NA/JPN Modular PDU) International Modular 3-Phase Power Setup in a G2 Rack (With Rack-Mounted UPS) International Modular 3-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS) Bottom AC Power Feed in a G2 Rack, Three Phase (INTL Modular PDUs) Top AC Power Feed in a G2 Rack, Three-Phase (INTL Modular PDU) Tables Characteristics of a NonStop BladeSystem NB50000c Characteristics of a NonStop BladeSystem NB54000c Task Checklist for Core License Scenarios North America/Japan Single-Phase Power Specifications North America/Japan Three-Phase Power Specifications International Single-Phase Power Specifications International Three-Phase Power Specifications Example of Cabinet Load Calculations NB50000c Example of Cabinet Load Calculations NB54000c Default User Names and Passwords...36 Characteristics of a NonStop Carrier Grade BladeSystem NB50000c-cg Characteristics of a NonStop Carrier Grade BladeSystem NB54000c-cg Breaker Panel Power Specifications...67

11 4 Fuse Panel Power Specifications Rack Weight Worksheet Heat Dissipation Worksheet for Seismic Rack NB50000c-cg Heat Dissipation Worksheet for Seismic Rack NB54000c-cg Power Load Worksheet for Seismic Rack Completed Weight Worksheet for Sample System Rack Number Cable Length Restrictions...9

About This Document

This guide describes the HP Integrity NonStop BladeSystem and provides examples of system configurations to assist you in planning for installation of a new HP Integrity NonStop NB50000c, NB50000c-cg, or NB54000c BladeSystem.

Supported Release Version Updates (RVUs)

This publication supports J06.13 and all subsequent J-series RVUs until otherwise indicated in a replacement publication.

Intended Audience

This guide is written for those responsible for planning the installation, configuration, and maintenance of a NonStop BladeSystem and the software environment at a particular site. Appropriate personnel must have completed HP training courses on system support for NonStop BladeSystems.

New and Changed Information for

Additional revisions have been made to these topics to support the new Intelligent rack, including its new power configurations and PDUs:
UPS and ERM for a Single-Phase Power Configuration (Optional) (page 43)
System Installation Specifications NB50000c and NB54000c (page 68) (added several new topics throughout this chapter)
IP Addresses (page 79) for the Intelligent PDU (ipdu) new IP addresses
Power Configurations for NonStop BladeSystems in G2 Racks (page 20)

New and Changed Information for

Added or updated these topics to support the new Intelligent rack, including its new power configurations and PDUs:
UPS and ERM for a Single-Phase Power Configuration (Optional) (page 43)
System Installation Specifications NB50000c and NB54000c (page 68) (added several new topics throughout this chapter)
IP Addresses (page 79) for the Intelligent PDU (ipdu) new IP addresses
Power Configurations for NonStop BladeSystems in G2 Racks (page 20)

New and Changed Information for

The following updates have been made to this version:

Added information to describe the new power regulator modes for NonStop BladeSystems NB54000c and NB54000c-cg:
Characteristics of a NonStop BladeSystem NB54000c (page 23)
HP Power Regulator for NonStop BladeSystems NB54000c and NB54000c-cg (page 35)
Characteristics of a NonStop Carrier Grade BladeSystem NB54000c-cg (page 45)

Revised the CLIM DVD version information to now point to the CLuster I/O Module (CLIM) Software Compatibility Guide for CLIM version information.

Added a caution statement recommending using a UPS in conjunction with the WRITECACHE attribute to prevent data loss to SAS Disk Enclosures (page 40).

Updated these topics:
Environmental Specifications (page 0)
Example of Cabinet Load Calculations NB50000c (page 05)
Example of Cabinet Load Calculations NB54000c (page 06)
IP Addresses (page 79) for changes to the maintenance switch and system alarm panel IP addresses.

Added a new topic: Maximum Supported Cable Length for All Connections (page 9)

New and Changed Information for

The following updates have been made to this version:

Added a new chapter to describe the features and function of the new core licensing option for BladeSystems NB54000c and NB54000c-cg: Core Licensing NB54000c and NB54000c-cg (page 55).

Updated these related topics for core licensing considerations:
Characteristics of a NonStop BladeSystem NB54000c (page 23)
Migrating Commercial and Carrier Grade BladeSystems (page 24)

Added information and specifications for the new 64 GB memory option for NonStop BladeSystem NB54000c/NB54000c-cg server blades running J06.13 and later RVUs to:
Characteristics of a NonStop BladeSystem NB54000c (page 23)
NonStop Server Blades (page 3)
Enclosure Power Loads NB50000c-cg and NB54000c-cg (page 66)
Characteristics of a NonStop Carrier Grade BladeSystem NB54000c-cg (page 45)

Updated requirements for Site Power Cables Carrier Grade (page 200).

Added or changed these topics to describe the new NonStop BladeSystem NB54000c-cg system rack, including features and specifications:
Seismic Rack (page 47)
Seismic Rack Specifications NB50000c-cg and NB54000c-cg (page 68)
Table 5 (page 70)

Added or changed these topics to support the new single-phase R5000 UPS:
UPS and ERM for a Single-Phase Power Configuration (Optional) (page 43)
Site Preparation Guidelines NB50000c and NB54000c (page 62) (added R5000 UPS references to EPO and UPS topics)
Single-Phase Power Setup, Monitored PDUs G2 Rack (page 202)
NA/JPN: Circuit Breaker Ratings in a G2 Rack for Single-Phase R5000 UPS (page 20)
International (INTL) Monitored Single-Phase Power Configuration in a G2 Rack (page 2)

INTL: Circuit Breaker Ratings for Single-Phase R5000 UPS in a G2 Rack (page 28)
NA/JPN: Modular Single-Phase Power Setup in a G2 Rack With R5000 Rack-Mounted UPS (page 29)
Unit Sizes (page 96)
Enclosure Dimensions (page 98)
Modular Cabinet and Enclosure Weights With Worksheet (page 99)
Operations and Management Using OSM Applications (page 93) (added minor R5000 UPS references throughout appendix)

New and Changed Information for

The following updates have been made to this version:

Single-Phase Power Setup in a G2 Rack, Modular PDU (page 29) has been added to describe the new NA/JPN single-phase modular PDU, including the procedures for power setup with or without UPS, illustrations, and specifications.

New and Changed Information for

The following updates have been made to this version:

Added or changed these topics to describe the new NonStop BladeSystem NB54000c-cg, including features and specifications:
Core Licensing Introduction (page 55)
NEBS Required Statements (page 46)
Migrating Commercial and Carrier Grade BladeSystems (page 24)
NonStop BladeSystem Platform Configurations NB50000c and NB54000c (page 25)
NonStop BladeSystems Carrier Grade Overview (page 44)
Characteristics of a NonStop Carrier Grade BladeSystem NB50000c-cg (page 44)
Characteristics of a NonStop Carrier Grade BladeSystem NB54000c-cg (page 45)
System Installation Specifications NB50000c-cg and NB54000c-cg (page 65) (changes throughout chapter)

Added or changed these topics to describe the new SAS Solid State Drive (SSD), including features and specifications:
SAS Disk Enclosures (page 40)
Modular Cabinet and Enclosure Weights With Worksheet (page 99)
Heat Dissipation Specifications and Worksheet NB50000c (page 02)
Heat Dissipation Specifications and Worksheet NB54000c (page 02)
Calculating Specifications for Enclosure Combinations NB50000c (page 04)
Calculating Specifications for Enclosure Combinations NB54000c (page 05)

Added or changed these topics to describe the new IB CLIM:
Characteristics of a NonStop BladeSystem NB50000c (page 22)
Characteristics of a NonStop BladeSystem NB54000c (page 23)
Example of NonStop BladeSystems (Front Views) (page 24)
Example of NonStop BladeSystem NB54000c with 16 Processors (Front View) Flexible Processor Bay Configuration (page 26)
Example of NonStop BladeSystem NB54000c with 16 Processors (Front View) Sequential Numbering (page 27)
CLuster I/O Modules (CLIMs) (page 3)
IB CLuster I/O Module (CLIM) (page 36)
ServerNet Switch Group-Module-Slot Numbering (page 49)
Connections to CLIMs (page )
InfiniBand to Networks (page 3)
Default Naming Conventions (page 40)
Enclosure Locations in Cabinets NB50000c and NB54000c (page 42)
Modular Cabinet and Enclosure Weights With Worksheet (page 99)
Heat Dissipation Specifications and Worksheet NB54000c (page 02)
Calculating Specifications for Enclosure Combinations NB54000c (page 05)
Maintenance and Support Connectivity (page 77)

Document Organization

Part I NonStop BladeSystem Overview: Provides an overview of the manual.

Part II NonStop BladeSystem:
NonStop BladeSystems Overview (page 22): Provides an overview of the Integrity NonStop BladeSystems NB50000c and NB54000c.
Site Preparation Guidelines NB50000c and NB54000c (page 62): Outlines topics to consider when planning or upgrading the installation site.
System Installation Specifications NB50000c and NB54000c (page 68): Provides the installation specifications for a fully populated NonStop BladeSystem enclosure.
System Configuration Guidelines NB50000c and NB54000c (page 08): Describes the guidelines for implementing the NonStop BladeSystem.
Hardware Configuration in Modular Cabinets (page 42): Shows recommended locations for hardware enclosures in the NonStop BladeSystem.

Part III NonStop BladeSystem Carrier Grade:
NonStop BladeSystems Carrier Grade Overview (page 44): Provides an overview of the Integrity NonStop BladeSystem NB50000c-cg.

Site Preparation Guidelines NB50000c-cg and NB54000c-cg (page 60): Outlines topics to consider when planning or upgrading the installation site.
System Installation Specifications NB50000c-cg and NB54000c-cg (page 65): Provides the installation specifications for a fully populated Carrier Grade NonStop BladeSystem enclosure.
System Configuration Guidelines NB50000c-cg and NB54000c-cg (page 76): Describes the guidelines for implementing the NonStop Carrier Grade BladeSystem.

Appendices:
Maintenance and Support Connectivity (page 77): Describes the connectivity options, including ISEE, for maintenance and support of a NonStop BladeSystem.
Cables for BladeSystems (page 88): Identifies the cables used with the NonStop BladeSystem hardware.
Operations and Management Using OSM Applications (page 93): Describes how to use the OSM applications to manage a NonStop BladeSystem.
Default Startup Characteristics NB50000c and NB54000c (page 97): Describes the default startup characteristics for a NonStop BladeSystem.
Site Power Cables Carrier Grade (page 200): Describes how to work with power cables specific to Carrier Grade systems.

Notation Conventions

General Syntax Notation

This list summarizes the notation conventions for syntax presentation in this manual.

UPPERCASE LETTERS

Uppercase letters indicate keywords and reserved words. Type these items exactly as shown. Items not enclosed in brackets are required. For example:

MAXATTACH

Italic Letters

Italic letters, regardless of font, indicate variable items that you supply. Items not enclosed in brackets are required. For example:

file-name

Computer Type

Computer type letters indicate:

C and Open System Services (OSS) keywords, commands, and reserved words. Type these items exactly as shown. Items not enclosed in brackets are required. For example: Use the cextdecs.h header file.

Text displayed by the computer. For example: Last Logon: 14 May 2006, 08:02:23

A listing of computer code. For example:

if (listen(sock, 1) < 0)
{
   perror("listen Error");
   exit(-1);
}

Bold Text

Bold text in an example indicates user input typed at the terminal. For example:

ENTER RUN CODE
?123
CODE RECEIVED: 123.00

The user must press the Return key after typing the input.

[ ] Brackets

Brackets enclose optional syntax items. For example:

TERM [\system-name.]$terminal-name
INT[ERRUPTS]

A group of items enclosed in brackets is a list from which you can choose one item or none. The items in the list can be arranged either vertically, with aligned brackets on each side of the list, or horizontally, enclosed in a pair of brackets and separated by vertical lines. For example:

FC [ num ]
   [ -num ]
   [ text ]
K [ X | D ] address

{ } Braces

A group of items enclosed in braces is a list from which you are required to choose one item. The items in the list can be arranged either vertically, with aligned braces on each side of the list, or horizontally, enclosed in a pair of braces and separated by vertical lines. For example:

LISTOPENS PROCESS { $appl-mgr-name }
                  { $process-name }
ALLOWSU { ON | OFF }

| Vertical Line

A vertical line separates alternatives in a horizontal list that is enclosed in brackets or braces. For example:

INSPECT { OFF | ON | SAVEABEND }

… Ellipsis

An ellipsis immediately following a pair of brackets or braces indicates that you can repeat the enclosed sequence of syntax items any number of times. For example:

M address [ , new-value ]…
- ] {0|1|2|3|4|5|6|7|8|9}…

An ellipsis immediately following a single syntax item indicates that you can repeat that syntax item any number of times. For example:

"s-char…"

Punctuation

Parentheses, commas, semicolons, and other symbols not previously described must be typed as shown. For example:

error := NEXTFILENAME ( file-name ) ;
LISTOPENS SU $process-name.#su-name

Quotation marks around a symbol such as a bracket or brace indicate the symbol is a required character that you must type as shown. For example:

"[" repetition-constant-list "]"

Item Spacing

Spaces shown between items are required unless one of the items is a punctuation symbol such as a parenthesis or a comma. For example:

CALL STEPMOM ( process-id ) ;

If there is no space between two items, spaces are not permitted. In this example, no spaces are permitted between the period and any other items:

$process-name.#su-name

Line Spacing

If the syntax of a command is too long to fit on a single line, each continuation line is indented three spaces and is separated from the preceding line by a blank line. This spacing distinguishes items in a continuation line from items in a vertical list of selections. For example:

ALTER [ / OUT file-spec / ]

   LINE [ , attribute-spec ]

Publishing History

Part Number    Product Version    Publication Date
               N.A.               May 2011
               N.A.               August 2011
               N.A.               November 2011
               N.A.               February 2012
               N.A.               August 2012

HP Encourages Your Comments

HP encourages your comments concerning this document. We are committed to providing documentation that meets your needs. Send any errors found, suggestions for improvement, or compliments to docsfeedback@hp.com. Include the document title, part number, and any comment, error found, or suggestion for improvement you have concerning this document.

Part I NonStop BladeSystem Series Overview

The Integrity NonStop BladeSystem provides an integrated infrastructure with consolidated server, network, storage, power, and management capabilities. The NonStop BladeSystem implements the BladeSystem c-Class architecture and is optimized for enterprise data center applications or telecommunications environments. The servers in the BladeSystem series consist of the:

NonStop BladeSystem NB50000c or NB54000c Servers: An AC-powered BladeSystem series server installed in a commercial grade 36U or 42U rack. For information about this server, see NonStop BladeSystems NB50000c, NB54000c (page 2).

NonStop BladeSystem NB50000c-cg or NB54000c-cg Carrier Grade Servers: A DC-powered BladeSystem series server used in Telco environments and installed in a 36U seismic rack. For information about this server, see NonStop BladeSystem NB50000c-cg and NB54000c-cg (Carrier Grade) (page 43).

Chapters and Related Appendices

NonStop BladeSystem NB50000c or NB54000c:
NonStop BladeSystems Overview (page 22)
Core Licensing NB54000c and NB54000c-cg (page 55)
Site Preparation Guidelines NB50000c and NB54000c (page 62)
System Installation Specifications NB50000c and NB54000c (page 68)
System Configuration Guidelines NB50000c and NB54000c (page 08)
Hardware Configuration in Modular Cabinets (page 42)

NonStop BladeSystem NB50000c-cg:
NonStop BladeSystems Carrier Grade Overview (page 44)
Site Preparation Guidelines NB50000c-cg and NB54000c-cg (page 60)
System Installation Specifications NB50000c-cg and NB54000c-cg (page 65)
System Configuration Guidelines NB50000c-cg and NB54000c-cg (page 76)

Appendices:
A Maintenance and Support Connectivity (page 77)
B Cables for BladeSystems (page 88)
C Operations and Management Using OSM Applications (page 93)
D Default Startup Characteristics NB50000c and NB54000c (page 97)
E Site Power Cables Carrier Grade (page 200)

Part II NonStop BladeSystems NB50000c, NB54000c

NonStop BladeSystem Topics NB50000c and NB54000c

Chapters and Related Appendices:
NonStop BladeSystems Overview (page 22)
Site Preparation Guidelines NB50000c and NB54000c (page 62)
System Installation Specifications NB50000c and NB54000c (page 68)
System Configuration Guidelines NB50000c and NB54000c (page 08)
Hardware Configuration in Modular Cabinets (page 42)

NOTE: For information about the Carrier Grade NonStop BladeSystem server, refer to NonStop BladeSystem NB50000c-cg and NB54000c-cg (Carrier Grade) (page 43).

NonStop BladeSystems Overview

NonStop BladeSystems combine the NonStop operating system and HP Integrity NonStop Server Blades in a single footprint and communicate using Expand and ServerNet. ServerNet connectivity is via a ServerNet mezzanine PCI Express (PCIe) interface card installed in the server blade. NonStop BladeSystem Hardware NB50000c and NB54000c (page 27) describes standard and optional NonStop BladeSystem hardware. Some of this hardware is illustrated in Figure 1 (page 24). Characteristics of individual NonStop BladeSystem models are described in:
Characteristics of a NonStop BladeSystem NB50000c (page 22)
Characteristics of a NonStop BladeSystem NB54000c (page 23)

Table 1 Characteristics of a NonStop BladeSystem NB50000c

Processor/Processor model: Intel Itanium/NSE-M (Note: NB50000c and NB54000c server blades cannot coexist in the same system)
Supported RVU: J06.04 and later RVUs
Chassis: c7000 enclosure (1 c7000 for 2-8 processors; 2 c7000s for 10-16 processors)
Cabinet: 42U, 19 inch rack
Minimum/max. memory: 8 GB to 48 GB main memory per logical processor
Minimum/max. processors: 2 to 16
Platform configurations: Two supported configurations that support 2, 4, 6, 8, 10, 12, 14, or 16 processors
Max. CLIMs in a 16 processor BladeSystem: 48 CLuster I/O Modules (CLIMs): Storage, IP, Telco, and IB
Minimum CLIMs for fault-tolerance: 0 CLIMs (if there are IOAM enclosures); 2 Storage CLIMs (if there are no IOAM enclosures); 2 Networking CLIMs (IP, Telco, and IB) if there are no IOAM enclosures
Max. SAS disk enclosures per Storage CLIM pair: A Storage CLIM pair supports a maximum of 4 SAS disk enclosures. This maximum applies to all Storage CLIM types: G2, G5, G6.
Max. SAS disk drives: 100 per Storage CLIM pair
Max. FCDMs through IOAM enclosure: 4 Fibre-Channel Disk Modules (FCDMs) daisy-chained with 14 disk drives per FCDM
Max. IOAM enclosures: 4 IOAMs for 2-8 processors; 6 IOAMs for 10-16 processors
ESS support through Storage CLIMs or IOAMs: Supported
Connection to NonStop ServerNet Clusters: Supported
Connection to BladeCluster Solution: Supported
Connection to NonStop S-series I/O enclosures: Supported for token-ring ServerNet connectivity or System Signaling Seven (SS7) connectivity for up to 4 S-series I/O enclosures

Note: When CLIMs are also included in the configuration, the maximum number of IOAMs might be smaller. Check with your HP representative to determine your system's maximum for IOAMs.

Table 2 Characteristics of a NonStop BladeSystem NB54000c

Processor/Processor model: Intel Itanium/NSE-AB (Note: NB50000c and NB54000c server blades cannot coexist in the same system)
Supported RVU: J06.11 and later RVUs
CLIM DVD (minimum DVD version required for the RVU): Refer to the CLuster I/O Module (CLIM) Software Compatibility Guide for the supported version. Note: this file is preinstalled on new NB54000c/NB54000c-cg systems.
Chassis: c7000 enclosure (one to two enclosures depending on the number of server blades or platform configuration)
Cabinet: 42U, 19 inch rack
Minimum/max. memory: 16 GB to 64 GB main memory per logical processor (64 GB supported on J06.13 and later RVUs)
Minimum/max. processors: 2 to 16
Supported platform configurations: 3 configuration options that support 2, 4, 6, 8, 10, 12, 14, or 16 processors. In particular, the Flex Processor Bay configuration option offers processor numbering either sequentially or in even/odd format. For more information, see NonStop BladeSystem Platform Configurations NB50000c and NB54000c (page 25).
Max. CLIMs in a NonStop BladeSystem with 16 processors: 48 CLuster I/O Modules (CLIMs): Storage, IP, Telco, and IB
Minimum CLIMs for fault-tolerance: 0 CLIMs (if there are IOAM enclosures); 2 Storage CLIMs (if there are no IOAM enclosures); 2 Networking CLIMs (IP, Telco, and IB) if there are no IOAM enclosures
Max. SAS disk enclosures per Storage CLIM pair: A Storage CLIM pair supports a maximum of 4 SAS disk enclosures. This maximum applies to all Storage CLIM types: G2, G5, G6.
Maximum SAS disk drives: 100 per Storage CLIM pair
Maximum FCDMs through IOAM enclosure: 4 Fibre Channel disk modules (FCDMs) daisy-chained with 14 disk drives per FCDM
Maximum IOAM enclosures: 4 IOAMs for 2-8 processors; 6 IOAMs for 10-16 processors
ESS support through Storage CLIMs or IOAMs: Supported
Connection to NonStop ServerNet Clusters: Supported
Connection to BladeCluster Solution: Supported
Connection to NonStop S-series I/O enclosures: Supported for token-ring ServerNet connectivity or System Signaling Seven (SS7) connectivity for up to 4 S-series I/O enclosures. (This option is only supported when using the Platform Configuration 19 option. For more information see (page 25).)
Core licensing file: This file is required as of J06.13. Refer to Core Licensing Introduction (page 55).
HP Power Regulator: Supported as of J06.14 or later RVUs. For more information, refer to HP Power Regulator for NonStop BladeSystems NB54000c and NB54000c-cg (page 35).

When CLIMs are also included in the configuration, the maximum number of IOAMs might be smaller. Check with your HP representative to determine your system's maximum for IOAMs.

Figure 1 Example of NonStop BladeSystems (Front Views)

Migrating Commercial and Carrier Grade BladeSystems

Migrating a NonStop BladeSystem NB50000c or NB50000c-cg to an NB54000c or NB54000c-cg requires removing all BL860c server blades and installing BL860c i2 server blades (BL860c and BL860c i2 server blades cannot coexist in the same system). No application changes or recompilation are required.

The NB54000c runs on J06.11 and later RVUs only, and the NB54000c-cg runs on J06.12 and later RVUs only. The NB54000c and NB54000c-cg require a minimum CLIM DVD version for the RVU. For version information, refer to the CLuster I/O Module (CLIM) Software Compatibility Guide. Note: this file is preinstalled on new NB54000c/NB54000c-cg systems.

Starting with the J06.13 RVU, HP introduces a new core licensing feature on BladeSystems NB54000c and NB54000c-cg. The number of cores enabled on your system is determined by the core license file. For more information about this required file, refer to Core Licensing Introduction (page 55).
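The storage limits in Tables 1 and 2 lend themselves to a quick planning check before ordering hardware. The following is a minimal sketch, not an HP tool: the function and argument names are illustrative, and the only rules encoded are the ones stated above (a maximum of 4 SAS disk enclosures and 100 SAS disk drives per Storage CLIM pair, and a minimum of 2 Storage CLIMs for fault tolerance when no IOAM enclosures are present).

# Hypothetical planning check based on the limits stated in Tables 1 and 2.
# Not an HP tool; names and structure are illustrative only.

MAX_SAS_ENCLOSURES_PER_CLIM_PAIR = 4    # applies to G2, G5, and G6 Storage CLIMs
MAX_SAS_DRIVES_PER_CLIM_PAIR = 100
MIN_STORAGE_CLIMS_WITHOUT_IOAM = 2      # minimum for fault tolerance

def check_storage_plan(storage_clims, sas_enclosures, sas_drives, has_ioam=False):
    """Return a list of problems found in a proposed storage configuration."""
    problems = []
    if storage_clims % 2:
        problems.append("Storage CLIMs are normally configured in pairs")
    if not has_ioam and storage_clims < MIN_STORAGE_CLIMS_WITHOUT_IOAM:
        problems.append("at least 2 Storage CLIMs required when no IOAM enclosures are present")
    pairs = max(storage_clims // 2, 1)
    if sas_enclosures > pairs * MAX_SAS_ENCLOSURES_PER_CLIM_PAIR:
        problems.append("more than 4 SAS disk enclosures per Storage CLIM pair")
    if sas_drives > pairs * MAX_SAS_DRIVES_PER_CLIM_PAIR:
        problems.append("more than 100 SAS disk drives per Storage CLIM pair")
    return problems

# Example: two Storage CLIMs (one pair), four enclosures, 100 drives -> no problems.
print(check_storage_plan(storage_clims=2, sas_enclosures=4, sas_drives=100))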

NonStop BladeSystem Platform Configurations NB50000c and NB54000c

NonStop BladeSystems support platform configuration options via the OSM Low-Level Link. All new BladeSystems arrive pre-configured with one of these options:

Platform Configuration 1
This option is for NonStop BladeSystems NB50000c and NB54000c and supports processors that are configured sequentially, as shown in Figure 3 (page 27): 0-7 in enclosure 100 and 8-15 in enclosure 101.

Platform Configuration 10
This option is the Flex Processor Bay Configuration (supported on NB54000c or NB54000c-cg only), which offers two choices for processor numbering:
Sequential numbering: Processors are numbered 0 through 7 in enclosure 100 and 8 through 15 in enclosure 101, as shown in Figure 3 (page 27).
Even/odd numbering: Alternately, processors are numbered starting with 0 in enclosure 100 and 1 in enclosure 101 and continue alternating between the two enclosures in an even/odd manner, as shown in Figure 2 (page 26).
NOTE: Even/odd numbering or having more than 8 sequential server blades in your configuration requires a second c7000 enclosure. If you want to add a second c7000 enclosure, ask your service provider to refer to the Adding a c7000 Enclosure in a NonStop BladeSystem service procedure.
TIP: The platform configuration is pre-configured on a new system. However, if you want to change an NB54000c platform configuration to or from the Flex Processor Bay Configuration or change processor numbering from even/odd to sequential or vice-versa, ask your service provider to refer to the Changing a NonStop NB54000c BladeSystem To or From Flex Processor Bay Configuration service procedure.

Platform Configuration 19
This option is for NonStop S-series I/O enclosure connections to NonStop BladeSystems NB50000c and NB54000c and supports processors that are configured sequentially only, as shown in Figure 3 (page 27): 0-7 in enclosure 100 and 8-15 in enclosure 101. Platform Configuration 19 is the only configuration option that supports connections to NonStop S-series I/O enclosures. This configuration option does not support even/odd processor numbering.
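The two processor-numbering choices described above can be summarized in a few lines of code. This is an illustrative sketch only, assuming a 16-processor system with c7000 enclosures in groups 100 and 101 as shown in Figures 2 and 3; the function names are not part of any HP tool.

# Illustrative mapping of logical processor numbers to c7000 enclosures
# (group 100 and group 101) under the sequential and even/odd schemes above.

def enclosure_sequential(cpu):
    """Sequential numbering: processors 0-7 in enclosure 100, 8-15 in enclosure 101."""
    return 100 if cpu < 8 else 101

def enclosure_even_odd(cpu):
    """Even/odd numbering: even-numbered processors in enclosure 100, odd in 101."""
    return 100 if cpu % 2 == 0 else 101

for cpu in range(16):
    print(cpu, enclosure_sequential(cpu), enclosure_even_odd(cpu))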

NOTE: The above platform configuration options are also supported on carrier grade BladeSystems. For more information about carrier grade BladeSystems, refer to NonStop BladeSystems Carrier Grade Overview (page 44).

Figure 2 Example of NonStop BladeSystem NB54000c with 16 Processors (Front View) Flexible Processor Bay Configuration

Figure 3 Example of NonStop BladeSystem NB54000c with 16 Processors (Front View) Sequential Numbering

NonStop BladeSystem Hardware NB50000c and NB54000c

This subsection describes the standard and optional hardware components for NonStop BladeSystems:
c7000 Enclosure (page 29)
NonStop Server Blades (page 3)
CLuster I/O Modules (CLIMs) (page 3)
SAS Disk Enclosures (page 40)
CLIM Cable Management Ethernet Patch Panel (page 4)
IOAM Enclosure (page 4)
Fibre Channel Disk Module (FCDM) (page 42)
Maintenance Switch (page 42)
System Console (page 42)
UPS and ERM (Optional) (page 43)
Enterprise Storage System (Optional) (page 45)

Tape Drive and Interface Hardware (Optional) (page 46)
NonStop S-Series I/O Enclosures (Optional) (page 46)

A large number of enclosure combinations is possible within the modular cabinets of a NonStop BladeSystem. The applications and purpose of any NonStop BladeSystem determine the number and combinations of hardware within the cabinet. All NonStop BladeSystem components are field-replaceable units that can only be serviced by service providers trained by HP.

Because of the number of possible configurations, you should calculate the total power consumption, heat dissipation, and weight of each modular cabinet based on the hardware configuration that you order from HP. For site preparation specifications for the modular cabinets and the individual enclosures, refer to System Installation Specifications NB50000c and NB54000c (page 68).
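Because each cabinet's totals depend on the enclosures ordered, a worksheet-style tally is usually enough at this stage. The sketch below is hypothetical: the per-enclosure watts, Btu/hr, and weight figures are placeholders, not HP specifications, and must be replaced with the values from System Installation Specifications NB50000c and NB54000c for the hardware actually ordered.

# Hypothetical cabinet worksheet; the per-enclosure numbers below are
# placeholders, not HP specifications. Substitute the figures from the
# installation specifications chapter for your actual configuration.

cabinet = {                    # enclosure type -> quantity in this cabinet
    "c7000 enclosure": 1,
    "Storage CLIM": 2,
    "SAS disk enclosure": 2,
    "Maintenance switch": 1,
}

specs = {                      # enclosure type -> (watts, Btu/hr, kg) PLACEHOLDERS
    "c7000 enclosure": (4000, 13650, 136.0),
    "Storage CLIM": (450, 1540, 27.0),
    "SAS disk enclosure": (300, 1025, 25.0),
    "Maintenance switch": (50, 170, 4.5),
}

watts = sum(qty * specs[name][0] for name, qty in cabinet.items())
btu = sum(qty * specs[name][1] for name, qty in cabinet.items())
kg = sum(qty * specs[name][2] for name, qty in cabinet.items())
print(f"Cabinet totals: {watts} W, {btu} Btu/hr, {kg:.1f} kg")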

c7000 Enclosure

The c7000 enclosure provides integrated processing, power, and cooling capabilities along with connections to the I/O infrastructure. The c7000 enclosure features include:

Up to 8 NonStop Server Blades per c7000 enclosure, populated in pairs.

Two Onboard Administrator (OA) management modules that provide detection, identification, management, and control services for the NonStop BladeSystem. The HP Insight Display provides information about the health and operation of the enclosure. For more information about the HP Insight Display, which is the visual interface located at the bottom front of the OA, refer to the HP BladeSystem Onboard Administrator User Guide.

Two Interconnect Ethernet switches that download Halted State Services (HSS) bootcode via the maintenance LAN.

Two ServerNet switches that provide ServerNet connectivity between processors, between processors and I/O, and between systems (through connections to cluster switches). There are two types of ServerNet switches: Standard I/O or High I/O.

NOTE: If your BladeSystem participates in a BladeCluster, you must use the BladeCluster ServerNet (Cluster High I/O) switch. This manual does not describe the BladeCluster ServerNet switch or its connections. Instead, refer to the BladeCluster Solution Manual.

Six power supplies that implement Dynamic Power Saving Mode. This mode is enabled by the OA module, and when enabled, monitors the total power consumed by the c7000 enclosure in real-time and automatically adjusts to changes in power demand.

Ten Active Cool fans use the parallel, redundant, scalable, enclosure-based cooling (PARSEC) architecture where fresh, cool air flows over all the blades (in the front of the enclosure) and all the interconnect modules (in the back of the enclosure).

Figure 4 shows all of these c7000 features, except the HP Insight Display:

Figure 4 c7000 Enclosure Features

For information about the LEDs associated with the c7000 enclosure components, refer to the HP BladeSystem c7000 Enclosure Setup and Installation Guide.
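When planning spares or counting components across a system, the per-enclosure quantities listed above can simply be multiplied by the number of c7000 enclosures (one for 2-8 processors, two for 10-16 processors). A minimal sketch, using only the quantities from the feature list above; the dictionary and function names are illustrative and not part of any HP tool.

# Component counts per c7000 enclosure, taken from the feature list above.
PER_C7000 = {
    "NonStop Server Blades (max)": 8,
    "Onboard Administrator modules": 2,
    "Interconnect Ethernet switches": 2,
    "ServerNet switches": 2,
    "Power supplies": 6,
    "Active Cool fans": 10,
}

def totals_for(enclosures):
    """Tally c7000 components for a system with one or two enclosures."""
    return {name: qty * enclosures for name, qty in PER_C7000.items()}

# A 16-processor NonStop BladeSystem uses two c7000 enclosures.
for name, qty in totals_for(2).items():
    print(f"{name}: {qty}")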

NonStop Server Blades

The NonStop BladeSystem achieves full software fault tolerance by running the NonStop operating system on a NonStop Server Blade. With the server blade's multiple-core microprocessor architecture, a set of cores comprised of instruction processing units (IPUs) share the same memory map (except in low-level software), extending the traditional NonStop logical processor to a scalable multiprocessor.

BladeSystem: NonStop NB50000c
Server Blade: BL860c (1)
Supported RVUs: J06.04 or later RVUs
Server Blade Characteristics: Up to 48 GB of memory supported per server blade with these features: A two-socket full-height server blade that features an Intel Itanium 9100 dual-core processor and contains a ServerNet interface mezzanine card to provide ServerNet fabric connectivity. Includes four integrated Gigabit Ethernet ports for redundant network boot paths and 12 DDR2 DIMM slots that provide a maximum of 48 GB of memory per server blade.

BladeSystem: NonStop NB54000c
Server Blade: BL860c i2 (2)
Supported RVUs: J06.11 or later RVUs
Server Blade Characteristics: Up to 48 GB of memory supported per server blade with these features: A two-socket full-height server blade that features an Intel Itanium 9300 quad-core processor and contains a ServerNet interface mezzanine card to provide ServerNet fabric connectivity. Base configuration includes four integrated Gigabit Ethernet ports for redundant network boot paths, one CPU socket populated with a quad-core processor, and 24 DDR3 DIMM slots (only 12 slots are supported) to provide a maximum of 48 GB of memory per server blade.

BladeSystem: NonStop NB54000c
Server Blade: BL860c i2 (2)
Supported RVUs: J06.13 or later RVUs (3)
Server Blade Characteristics: Up to 64 GB of memory supported per server blade with these features: A two-socket full-height server blade that features an Intel Itanium 9300 quad-core processor and contains a ServerNet interface mezzanine card to provide ServerNet fabric connectivity. Base configuration includes four integrated Gigabit Ethernet ports for redundant network boot paths, one CPU socket populated with a quad-core processor, and 24 DDR3 DIMM slots (only 12 slots are supported) to provide a maximum of 64 GB of memory per server blade.

1. NB54000c and NB50000c server blades cannot coexist within the same system.
2. NB54000c blades with up to 48 GB of memory and NB54000c blades with 64 GB of memory can coexist in the same system if the RVU is J06.13 or later.
3. All pre-J06.13 NB54000c and NB54000c-cg server blades with less than 64 GB of memory can be upgraded to 64 GB of memory. For instructions on upgrading a server blade to the 64 GB memory configuration, have your service provider refer to the latest version of the Replacing a FRU in a NonStop BladeSystem NB54000c or NB54000c-cg Server Blade or Adding Memory or a Server Blade service procedure.

CLuster I/O Modules (CLIMs)

CLIMs are rack-mounted servers that can function as ServerNet Ethernet or I/O adapters. The CLIM complies with Internet Protocol version 6 (IPv6), an Internet Layer protocol for packet-switched networks, and has passed official certification of IPv6 readiness.

Two models of base servers are used for CLIMs. You can determine a CLIM's model by looking at the label on the back of the unit (behind the cable arm). This label refers to the number as a PID, but it is not the actual PID of the CLIM. It is the Number on Label in the following table. This same number is listed as the part number in OSM. Below is the mapping for CLIM models, base servers, and earliest supported RVUs:

Model    Number on Label    Base Server    Supported RVUs
G2       B2                 DL385          J06.04 and later RVUs
G5       B2                 DL385          J06.04 and later RVUs
G6       B2                 DL380          For IB CLIMs: J06.12 or later RVUs and the J06.12 CLIM DVD. For IP or Storage CLIMs: J06.06 through J06.09 with required SPRs listed in the CLuster I/O Module (CLIM) Software Compatibility Reference installed, or J06.10 or later RVUs.

These are the front views of each CLIM model. For an illustration of the back views, refer to each supported CLIM configuration. These CLIM configurations are supported:
IP CLuster I/O Module (CLIM) (page 33)
Telco CLuster I/O Module (CLIM) (page 35)
IB CLuster I/O Module (CLIM) (page 36)
Storage CLuster I/O Module (CLIM) (page 38)

IP CLuster I/O Module (CLIM)

The IP CLIM is a rack-mounted server that is part of some NonStop BladeSystem configurations. The IP CLIM functions as a ServerNet Ethernet adapter providing HP standard Gigabit Ethernet Network Interface Cards (NICs) to implement one of these IP CLIM configurations:
DL385 G2 or DL385 G5 IP CLIM Option 1 Five Ethernet Copper Ports (page 33)
DL385 G2 or DL385 G5 IP CLIM Option 2 Three Ethernet Copper and Two Ethernet Optical Ports (page 33)
DL380 G6 IP CLIM Option 1 Five Ethernet Copper Ports (page 34)
DL380 G6 IP CLIM Option 2 Three Ethernet Copper and Two Ethernet Optical Ports (page 34)

DL385 G2 or DL385 G5 IP CLIM Option 1 Five Ethernet Copper Ports

Slot 3 contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections.
Slot 1 contains a PCIe NIC that provides four Gb Ethernet copper ports for customer interfaces.
Eth1 port provides one embedded Gigabit NIC for customer data.

DL385 G2 or DL385 G5 IP CLIM Option 2 Three Ethernet Copper and Two Ethernet Optical Ports

Slot 1 contains a NIC that provides two Gb Ethernet copper ports for customer interfaces.
Eth1 port provides one embedded Gigabit NIC for customer data.
Slot 2 contains a NIC that provides one Gb Ethernet optical port for customer interfaces.
Slot 3 contains a ServerNet interface PCIe card, which provides the ServerNet fabric connections.
Slot 4 contains a NIC that provides one Gb Ethernet optical port for customer interfaces.

DL380 G6 IP CLIM Option 1 Five Ethernet Copper Ports

Slot 1 contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections.
Slot 2 contains a PCIe NIC that provides two Gb Ethernet copper ports for customer interfaces.
Eth1, eth2, and eth3 ports provide three embedded Gb Ethernet copper ports for customer data.

DL380 G6 IP CLIM Option 2 Three Ethernet Copper and Two Ethernet Optical Ports

Slot 1 contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections.
Slots 2 and 3 each contain a PCIe NIC that provides one Gb Ethernet optical port for customer interfaces.
Eth1, eth2, and eth3 ports provide three embedded Gb Ethernet copper ports for customer data.

NOTE: The IP CLIM uses the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual.

Telco CLuster I/O Module (CLIM)

The Telco CLIM is a rack-mounted server that is part of some NonStop BladeSystem configurations. The Telco CLIM uses the Message Transfer Part Level 3 User Adaptation layer (M3UA) protocol and functions as a ServerNet Ethernet adapter with one of these Telco CLIM configurations:
• DL385 G2 or DL385 G5 Telco CLIM: Five Ethernet Copper Ports (page 35)
• DL380 G6 Telco CLIM: Five Ethernet Copper Ports (page 36)

DL385 G2 or DL385 G5 Telco CLIM: Five Ethernet Copper Ports
• Slot 3 contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections.
• Slot 1 contains a PCIe NIC that provides four Gb Ethernet copper ports for customer interfaces.
• The eth1 port provides one embedded Gigabit NIC for customer data.

DL380 G6 Telco CLIM: Five Ethernet Copper Ports
• Slot 1 contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections.
• Slot 2 contains a PCIe NIC that provides two Gb Ethernet copper ports for customer interfaces.
• The eth1, eth2, and eth3 ports provide three embedded Gb Ethernet copper ports for customer data.

NOTE: The Telco CLIM uses the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual.

IB CLuster I/O Module (CLIM)

The IB CLIM is a rack-mounted HP ProLiant DL380 G6 server that is used in some NonStop BladeSystem configurations to provide InfiniBand connectivity via dual-ported Host Channel Adapter (HCA) InfiniBand interfaces.

The HCA IB interface on the IB CLIM connects to a customer-supplied IB switch using a customer-supplied cable as part of the Low Latency Solution. The Low Latency Solution also requires Subnet Manager software, either installed on the IB switch or running on another server.

NOTE: IB CLIMs are used only as part of the Low Latency Solution. They do not provide general-purpose InfiniBand connectivity for NonStop systems.

The Low Latency Solution architecture provides a high-speed, low-latency messaging system for stock exchange trading, from the incoming trade server to the NonStop operating system. The solution uses third-party software from Informatica for messaging and order sequencing, which must be installed separately. For information about obtaining Informatica software, contact your service provider. For the list of supported third-party software, refer to the CLuster I/O Module (CLIM) Software Compatibility Reference.

The following illustration shows the IB and Ethernet interfaces and ServerNet fabric connections on an IB CLIM.

IB CLIM ports and slots:
• Eth1, Eth2, and Eth3 ports: each port provides one Gb Ethernet copper interface via an embedded Gigabit NIC.
• Slot 1: ServerNet fabric connections via a PCIe 4x adapter.
• Slots 2 and 3: unused.
• Slot 4: two InfiniBand interfaces (ib0 and ib1 ports) via the IB HCA card. Only one IB interface port is used by the Informatica software; HP recommends connecting to the ib0 interface for ease of manageability.
• Slots 5 and 6: unused.

NOTE: All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual.

Storage CLuster I/O Module (CLIM)

The Storage CLuster I/O Module (CLIM) is part of some NonStop BladeSystem configurations. The Storage CLIM is a rack-mounted server that functions as a ServerNet I/O adapter with these characteristics:
• Dual ServerNet fabric connections
• A Serial Attached SCSI (SAS) interface for the storage subsystem via a SAS Host Bus Adapter (HBA), supporting SAS disk drives and SAS tapes
• A Fibre Channel (FC) interface for ESS and FC tape devices via a customer-ordered FC HBA. Connections to FCDMs are not supported.

Two Storage CLIM configurations are available:
• DL385 G2 or DL385 G5 Storage CLIM (page 38)
• DL380 G6 Storage CLIM (page 39)

NOTE: A Storage CLIM pair supports a maximum of 4 SAS disk enclosures. This maximum applies to all Storage CLIM types: G2, G5, and G6.

DL385 G2 or DL385 G5 Storage CLIM

The DL385 G2 and G5 Storage CLIMs contain 5 PCIe HBA slots with these characteristics:
• Slot 1: Optional customer order. SAS or Fibre Channel (not part of the base configuration).
• Slot 2: Optional customer order. SAS or Fibre Channel (not part of the base configuration).
• Slot 3: Part of the base configuration. ServerNet fabric connections via a PCIe 4x adapter.
• Slot 4: Part of the base configuration. One SAS external connector, with four SAS links per connector and 3 Gbps per link, provided by the PCIe 8x slot.
• Slot 5: Part of the base configuration. One SAS external and internal connector, with four SAS links per connector and 3 Gbps per link, provided by the PCIe 8x slot.

DL380 G6 Storage CLIM

The DL380 G6 Storage CLIM contains these PCIe HBA slots:
• Slot 1: Part of the base configuration. ServerNet fabric connections via a PCIe 4x adapter.
• Slot 2: Part of the base configuration. Two SAS external connectors, with four SAS links per connector and 6 Gbps per link, provided by the PCIe 8x slot.
• Slot 3: Optional customer order. SAS or Fibre Channel (not part of the base configuration).
• Slot 4: Optional customer order. SAS or Fibre Channel (not part of the base configuration).
• Slots 5 and 6: Not used.

NOTE: The Storage CLIM uses the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual.
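The SAS link counts and per-link rates in the two Storage CLIM lists above lend themselves to a quick aggregate-bandwidth comparison between the G2/G5 and G6 generations. The Python sketch below is illustrative only: the link counts and rates come from the lists above, and the helper function is ours, not part of any HP tool.

```python
def sas_connector_bandwidth_gbps(links_per_connector: int, gbps_per_link: float) -> float:
    """Raw aggregate SAS bandwidth of one connector (links x per-link rate)."""
    return links_per_connector * gbps_per_link

# DL385 G2/G5 Storage CLIM: four SAS links per connector at 3 Gbps per link
g5_connector = sas_connector_bandwidth_gbps(4, 3.0)    # 12 Gbps per connector

# DL380 G6 Storage CLIM: four SAS links per connector at 6 Gbps per link,
# with two external connectors provided by slot 2
g6_connector = sas_connector_bandwidth_gbps(4, 6.0)    # 24 Gbps per connector
g6_slot2_total = 2 * g6_connector                      # 48 Gbps across both connectors

print(f"G2/G5 connector: {g5_connector:.0f} Gbps")
print(f"G6 connector: {g6_connector:.0f} Gbps, G6 slot 2 total: {g6_slot2_total:.0f} Gbps")
```

These figures are raw signaling rates, not usable throughput; protocol overhead and the attached drives determine what a configuration actually delivers.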

SAS Disk Enclosures

The SAS disk enclosure is a rack-mounted enclosure that is part of some NonStop BladeSystem configurations. The SAS disk enclosure supports SAS hard disk drives (HDDs) and SAS solid state drives (SSDs).

As of J06.12 and later RVUs, you can partition some SAS HDDs and all SSDs in SAS disk enclosures connected to G6 CLIMs. Only these newer HDDs support disk partitioning:
• M GB, 15k rpm
• M GB, 10k rpm

Up to four partitions are supported for the above SAS HDDs, and up to eight partitions are supported for a SAS SSD. For more information about using the disk partitioning feature, refer to the SCF Reference Manual for the Storage Subsystem.

CAUTION: If the WRITECACHE attribute is enabled on an HDD or SSD disk volume that is connected to a Storage CLIM, HP strongly recommends using a rack-mounted UPS to prevent data loss on that volume. The WRITECACHE attribute is supported on J06.03 and later RVUs and controls whether write caching is performed for disk writes. For details and considerations for this attribute, refer to the SCF Reference Manual for the Storage Subsystem.

Two models of SAS disk enclosures are supported. The following list describes the SAS disk enclosure types and the compatibility between CLIM and SAS disk enclosure models:
• MSA70: HP StorageWorks 70 Modular Smart Array Enclosure, holding 3 Gb protocol SAS HDDs, with redundant power supply and cooling. Compatible with DL385 G2 or DL385 G5 CLIMs; not compatible with DL380 G6 CLIMs.
• D2700: HP StorageWorks D2700 Disk Enclosure, holding 6 Gb protocol SAS HDDs and SSDs, with redundant power supply and cooling. Compatible with DL380 G6 CLIMs; not compatible with DL385 G2 or DL385 G5 CLIMs.

A Storage CLIM pair supports a maximum of 4 SAS disk enclosures. This maximum applies to all Storage CLIM types: G2, G5, and G6.

NOTE: Only MSA70 and M8390-2cg SAS disk enclosures attached to DL385 G2 or G5 Storage CLIMs can be daisy chained. Daisy chaining is not permitted with DL380 G6 Storage CLIMs or D2700 SAS disk enclosures.

MSA70 models of the SAS disk enclosure contain:
• 25 disk drive slots (3 Gb SAS protocol)
• Two independent I/O modules: SAS Domain A and SAS Domain B
• Two fans
• Two power supplies

D2700 models of the SAS disk enclosure contain:
• 25 disk drive slots (6 Gb SAS protocol)
• Two independent I/O modules: SAS Domain A and SAS Domain B
• Two fans
• Two power supplies

For more information about the SAS disk enclosure, refer to the manual for your SAS disk enclosure model (for example, the HP StorageWorks 70 Modular Smart Array Enclosure Maintenance and Service Guide).

CLIM Cable Management Ethernet Patch Panel

The HP CLIM Cable Management Ethernet patch panel is used for cabling the IP and Telco CLIM connections and is preinstalled in all new systems that are configured with IP or Telco CLIMs. The CLIM patch panel simplifies and organizes the cable connections to allow easy access to the CLIM's customer-usable interfaces.

Figure 5 Ethernet Patch Panel

IP and Telco CLIMs each have five customer-usable interfaces. The CLIM patch panel connects to these interfaces and brings the usable interface ports out to the patch panel. Each CLIM patch panel has 24 slots, is 1U high, and should be the topmost unit at the rear of the rack. Each CLIM patch panel can handle cables for up to five CLIMs. It has no power connection.

Each CLIM patch panel has six panes labeled A, B, C, D, E, and F. Each pane has four RJ45 ports, and each port is labeled 1, 2, 3, or 4. The RJ45 ports in pane A therefore have the port names A1, A2, A3, and A4.

The factory default configuration depends on how many IP or Telco CLIMs and CLIM patch panels are configured in the system. For a new system, QMS Tech Doc generates a cable table with the CLIM interface names. This table identifies how the connections between the CLIM physical ports and CLIM patch panel ports were configured at the factory. If you are adding a CLIM patch panel to an existing system, have your service provider refer to the CLuster I/O Module (CLIM) Installation and Configuration Guide for installation instructions.

IOAM Enclosure

NOTE: The M820R Fibre Channel to SCSI router is not supported on NonStop BladeSystems.

The IOAM enclosure is part of some NonStop BladeSystem configurations. The IOAM enclosure uses Gigabit Ethernet 4-port ServerNet adapters (G4SAs) for networking connectivity and Fibre Channel ServerNet adapters (FCSAs) for Fibre Channel connectivity between the system and Fibre Channel disk modules (FCDMs), ESS, and Fibre Channel tape.

Fibre Channel Disk Module (FCDM)

The Fibre Channel disk module (FCDM) is a rack-mounted enclosure that can be used only with NonStop BladeSystems that have IOAM enclosures. The FCDM connects to an FCSA in an IOAM enclosure and contains:
• Up to 14 Fibre Channel arbitrated loop disk drives (enclosure front)
• An environmental monitoring unit (EMU) (enclosure rear)
• Two fans and two power supplies
• Fibre Channel arbitrated loop (FC-AL) modules (enclosure rear)

You can daisy-chain up to four FCDMs together, with 14 drives in each one.

Maintenance Switch

The HP ProCurve maintenance switch provides communication between the NonStop BladeSystem (through the Onboard Administrator and the c7000 enclosure interconnect Ethernet switches), the IP, Storage, and Telco CLIMs, the IOAM enclosures, the optional UPS, and the system consoles running HP NonStop Open System Management (OSM). The NonStop BladeSystem requires multiple connections to the maintenance switch. The following lists describe the required connections for each hardware component.

BladeSystem Connections to Maintenance Switch
• One connection per Onboard Administrator on the NonStop BladeSystem
• One connection per interconnect Ethernet switch on the NonStop BladeSystem
• One connection to the optional UPS module
• One connection for each system console running OSM

CLIM Connections to Maintenance Switch
• One connection to the iLO port on a CLIM
• One connection to an eth0 port on a CLIM

IOAM Enclosure Connections to Maintenance Switch
• One connection to each of the two ServerNet switch boards in one I/O adapter module (IOAM) enclosure
• At least two connections to any two Gigabit Ethernet 4-port ServerNet adapters (G4SAs), if the NonStop BladeSystem maintenance LAN is implemented through G4SAs

System Console

A system console is a Windows server purchased from HP that runs maintenance and diagnostic software for NonStop BladeSystems. When supplied with a new NonStop BladeSystem, system consoles have factory-installed HP and third-party software for managing the system. You can install software upgrades from the HP NonStop System Console Installer DVD.

Some system console hardware, including the Windows server's system unit, monitor, and keyboard, can be mounted in the NonStop BladeSystem's 19-inch rack. Other Windows servers are installed outside the rack and require separate provisions or furniture to hold the hardware. Two system consoles, a primary and a backup, are required to manage NonStop BladeSystems. For more information on the system console, refer to System Consoles (page 85).
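The per-component maintenance switch connection counts listed above can be tallied to estimate how many switch ports a configuration needs. The Python sketch below is a planning aid only; the counts mirror the lists above (one port per Onboard Administrator, interconnect switch, optional UPS, and system console; two per CLIM for iLO and eth0; two per IOAM enclosure for its ServerNet switch boards), and the function name is ours, not an HP tool.

```python
def maintenance_switch_ports(onboard_admins: int, interconnect_switches: int,
                             clims: int, ioam_enclosures: int,
                             consoles: int = 2, ups_modules: int = 0) -> int:
    """Estimate maintenance switch ports needed, per the connection lists above."""
    ports = onboard_admins            # one per Onboard Administrator
    ports += interconnect_switches    # one per c7000 interconnect Ethernet switch
    ports += ups_modules              # one per optional UPS module
    ports += consoles                 # one per system console running OSM
    ports += 2 * clims                # iLO port plus eth0 port on each CLIM
    ports += 2 * ioam_enclosures      # two ServerNet switch boards per IOAM enclosure
    return ports

# Example: one c7000 with two OAs and two interconnect switches, four CLIMs,
# no IOAM enclosure, two consoles, one UPS (counts are illustrative)
print(maintenance_switch_ports(2, 2, clims=4, ioam_enclosures=0, consoles=2, ups_modules=1))
```

Any G4SA-based maintenance LAN connections would be added on top of this total.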

UPS and ERM (Optional)

An uninterruptible power supply (UPS) is optional but recommended where a site UPS is not available. Depending on your power configuration, HP supports these UPS and ERM models:
• UPS and ERM for a Single-Phase Power Configuration (Optional) (page 43)
• UPS and ERM for a Three-Phase Power Configuration (Optional) (page 44)

UPS and ERM for a Single-Phase Power Configuration (Optional)

HP supports the R5000 UPS and R5500 XR UPS for a single-phase power configuration because either model utilizes the power fail support provided by OSM for this configuration. A minimum of two UPSs is required for a single-phase configuration. For information about the requirements for installing a UPS, refer to Uninterruptible Power Supply (UPS) (page 64).

There are two different versions of the R5000 UPS:
• For North America and Japan, the HP AF460A is used; it has an L6-30P (30A) input connector for 200 to 240V single-phase power.
• For International, the HP AF46A is used; it has an IEC P6 (32A) input connector for 200V to 240V AC single-phase power.

There are two different versions of the R5500 XR UPS:
• For North America and Japan, the HP AF426A is used; it has an L6-30P (30A) input connector for 200 to 240V single-phase power.
• For International, the HP AF46A is used; it has an IEC P6 (32A) input connector for 200V to 240V AC single-phase power.

NOTE: Single-phase power setup is determined by rack type:
• For the Intelligent rack, refer to Power Distribution for NonStop BladeSystems in Intelligent Racks (page 68).
• For the G2 rack, refer to NonStop BladeSystem Single-Phase Power Distribution G2 Rack (page 20).

Cabinet configurations that include the HP UPS can also include extended runtime modules (ERMs). An ERM is a battery module that extends the overall battery-supported system run time. Up to two ERMs can be used for even longer battery-supported run time. You cannot mix UPS or ERM types in a single-phase power configuration. The single-phase UPS and ERM are supported only in these combinations:
• Single-phase R5000 UPS with the HP AF464A ERM
• Single-phase R5500 XR UPS with the HP AF47A ERM

WARNING! UPSs and ERMs must be mounted in the lowest portion of the NonStop BladeSystem to avoid tipping and stability issues.

For the R5000 UPS and R5500 XR UPS power and environmental requirements, refer to System Installation Specifications NB50000c and NB54000c (page 68).

For planning, installation, emergency power-off (EPO), and module management instructions, refer to the manual for your UPS:

R5000 UPS manuals:
• HP UPS R5000 User Guide, located at:
• HP UPS Network Module User Guide, located at:

R5500 XR UPS manuals:
• HP UPS R5500 User Guide, located at:
• HP UPS Management Module User Guide, located at:

For other UPSs, refer to the documentation shipped with the UPS.

UPS and ERM for a Three-Phase Power Configuration (Optional)

HP supports the HP model R12000/3 UPS for the three-phase power configuration because it utilizes the power fail support provided by OSM for this configuration. For information about the requirements for installing a UPS, refer to Uninterruptible Power Supply (UPS) (page 64).

There are two different versions of the R12000/3 UPS:
• For North America and Japan, the HP AF429A is used; it has an IEC P9 (60A) input connector for 208V three-phase power (120V phase-to-neutral).
• For International, the HP AF430A is used; it has an IEC P6 (32A) input connector for 400V three-phase power (230V phase-to-neutral).

Cabinet configurations that include the HP UPS can also include extended runtime modules (ERMs). An ERM is a battery module that extends the overall battery-supported system run time. Up to four ERMs can be used for even longer battery-supported run time. HP supports the HP AF434A ERM for the three-phase power configuration.

WARNING! UPSs and ERMs must be mounted in the lowest portion of the NonStop BladeSystem to avoid tipping and stability issues.

NOTE: The R12000/3 UPS has two output connectors. For power feed setup instructions, refer to NonStop BladeSystem Three-Phase Power Distribution in a G2 Rack (page 23).

For the R12000/3 UPS power and environmental requirements, refer to System Installation Specifications NB50000c and NB54000c (page 68). For planning, installation, emergency power-off (EPO), and module management instructions for this UPS, refer to these manuals:
• HP 3 Phase UPS User Guide, located at:
• HP UPS Management Module User Guide, located at:

For other UPSs, refer to the documentation shipped with the UPS.

Enterprise Storage System (Optional)

An Enterprise Storage System (ESS) is a collection of magnetic disks, their controllers, and a disk cache in one or more standalone cabinets. An ESS connects to the NonStop BladeSystem via the Storage CLIM's Fibre Channel HBA ports (direct connect), via Fibre Channel ports on the IOAM enclosures (direct connect), or through a separate storage area network (SAN) using a Fibre Channel SAN switch (switched connect). For more information about these connection types, see your HP service provider.

NOTE: The Fibre Channel SAN switch power cords might not be compatible with the modular cabinet PDU. Contact your HP service provider to order replacement power cords for the SAN switch that are compatible with the modular cabinet PDU.

Cables and switches vary, depending on whether the connection is direct, switched, or a combination:
• Direct connect: 2 Fibre Channel ports on the IOAM (LC-LC) or 2 Fibre Channel HBA interfaces on the Storage CLIM (LC-MMF); no Fibre Channel switches.
• Switched: 4 Fibre Channel ports (LC-LC) or 4 Fibre Channel HBA interfaces on the Storage CLIM (LC-MMF); 1 or more Fibre Channel switches.
• Combination of direct and switched: 2 Fibre Channel ports for each direct connection and 4 Fibre Channel ports for each switched connection; 1 or more Fibre Channel switches.

The customer must order the FC HBA interfaces on the Storage CLIM.

Figure 6 shows an example of connections between two Storage CLIMs and an ESS via separate Fibre Channel switches:

Figure 6 Connections Between Storage CLIMs and ESS

For fault tolerance, the primary and backup paths to an ESS logical device (LDEV) must go through different Fibre Channel switches. Some storage area network procedures, such as reconfiguration, can cause the affected switches to pause. If the pause is long enough, I/O failure occurs on all paths connected to that switch. If both the primary and the backup paths are connected to the same switch, the LDEV goes down. Refer to the documentation that accompanies the ESS.

Tape Drive and Interface Hardware (Optional)

For an overview of tape drives and the interface hardware, refer to Fibre Channel Ports to Fibre Tape Devices (page ) or SAS Ports to SAS Tape Devices (page ). For a list of supported tape devices, refer to the NonStop Storage Overview.

NonStop S-Series I/O Enclosures (Optional)

NOTE: For a NonStop BladeSystem NB54000c, only the Platform Configuration 9 option supports connections to NonStop S-series I/O enclosures. For more information, refer to NonStop BladeSystem Platform Configurations NB50000c and NB54000c (page 25).

Up to four NonStop S-series I/O enclosures (Groups 1-4) can be connected to the ServerNet switches in a c7000 enclosure (Group 100 only). The supported S-series I/O connections are for the Token-Ring ServerNet Adapter (TRSA) or the 6763 Common Communications ServerNet Adapter 2 (CCSA 2). These adapters can coexist in the same system configuration. No other S-series ServerNet adapters are supported.

NonStop S-series I/O enclosures in NonStop BladeSystems have these characteristics:
• Connections are made using an MTP-SC cable between the IOMF 2 FRUs (slots 50 and 55) in the S-series I/O enclosure and port 3 of the ServerNet switches in the c7000 enclosure.
• Each IOMF 2 connection supports up to 2 TRSAs or 2 CCSA 2s.
• Only ServerNet switch port 3 supports S-series I/O connections, and all S-series connections are made to this one port.

• The group numbers for the four S-series I/O enclosures are 1, 2, 3, and 4. These enclosures can be connected to the Group 100 c7000 enclosure only.
• New NonStop S-series I/O enclosures can be installed in G2 racks. If you have existing S-series I/O enclosures in other types of cabinets, you can also connect those I/O enclosures to a NonStop BladeSystem.
• The NonStop S-series I/O enclosure assembly requires a total of 20 U of contiguous space. A rack can contain no more than two NonStop S-series I/O enclosure assemblies.
• If NonStop S-series I/O enclosures are present, CLIMs cannot be connected to port 3 of the ServerNet switches in the c7000 enclosure.

For more information about these connections, ask your HP service provider to refer to the NonStop BladeSystem Hardware Installation Manual.

Preparation for Other Server Hardware

This guide provides the specifications only for the NonStop BladeSystem modular cabinets and enclosures identified earlier in this section. For site preparation specifications for other HP hardware that will be installed with the NonStop BladeSystems, consult your HP account team. For site preparation specifications relating to hardware from other manufacturers, refer to the documentation for those devices.

Management Tools for NonStop BladeSystems

This subsection describes the management tools available on your NonStop BladeSystem.

NOTE: For information about changing the default passwords for NonStop BladeSystem components and associated software, refer to Changing Customer Passwords (page 35).

OSM Package

The HP Open System Management (OSM) product is the required system management tool for NonStop BladeSystems. OSM works together with the Onboard Administrator (OA) and Integrated Lights Out (iLO) management interfaces to manage c7000 enclosures. A new client-based component, the OSM Certificate Tool, facilitates communication between OSM and the OA. For more information on the OSM package, including a description of the individual applications, refer to the OSM Migration and Configuration Guide and the OSM Service Connection User's Guide.

Onboard Administrator (OA)

The Onboard Administrator (OA) is the enclosure's management processor, subsystem, and firmware base; it supports the c7000 enclosure and the NonStop Server Blades. The OA software is integrated with OSM and the Integrated Lights Out (iLO) management interface.

The OA can generate a full inventory, status, and configuration report of all the components it supports; this is the so-called SHOW ALL report. For details on how to generate this report, refer to: objectid=c

Integrated Lights Out (iLO)

iLO allows you to perform activities on the NonStop BladeSystem from a remote location. It provides anytime access to system management information, such as hardware health, event logs, and configuration, to troubleshoot and maintain the NonStop Server Blades.

Cluster I/O Protocols (CIP) Subsystem

The Cluster I/O Protocols (CIP) subsystem provides a configuration and management interface for I/O on NonStop BladeSystems. The CIP subsystem has several tools for monitoring and managing the subsystem. For more information about these tools and the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual.

Subsystem Control Facility (SCF) Subsystem

The Subsystem Control Facility (SCF) also provides monitoring and management of the CIP subsystem on the NonStop BladeSystem. See the Cluster I/O Protocols (CIP) Configuration and Management Manual for more information about using these two subsystems with NonStop BladeSystems.

Component Location and Identification

Terminology

These terms are used in locating and describing components:
• Cabinet: Computer system housing that includes a structure of external panels, front and rear doors, internal racking, and dual PDUs.
• Rack: Structure integrated into the cabinet into which rackmountable components are assembled. The rack uses this naming convention: system-name-racknumber.
• Rack Offset: The physical location of components installed in a modular cabinet, measured in U values numbered 1 to 42, with U1 at the bottom of the cabinet. A U is 1.75 inches (44 millimeters).
• Group: A subset of a system that contains one or more modules. A group does not necessarily correspond to a single physical object, such as an enclosure.
• Module: A subset of a group that is usually contained in an enclosure. A module contains one or more slots (or bays). A module can consist of components sharing a common interconnect, such as a backplane, or it can be a logical grouping of components performing a particular function.
• Slot (or Bay or Position): A subset of a module that is the logical or physical location of a component within that module.
• Port: A connector to which a cable can be attached and which transmits and receives data.
• Fiber: The number (one to four) of the fiber pair (LC connector) within an MTP-LC fiber cable. An MTP-LC fiber cable has a single MTP connector on one end and four LC connectors, each containing a pair of fibers, at the other end. The MTP connector connects to the ServerNet switch in the c7000 enclosure, and the LC connectors connect to the CLIM.
• Group-Module-Slot (GMS), Group-Module-Slot-Bay (GMSB), Group-Module-Slot-Port (GMSP), Group-Module-Slot-Port-Fiber (GMSPF): Notation methods used by hardware and software in NonStop systems for organizing and identifying the location of certain hardware components.
• NonStop Server Blade: A server blade that provides processing and ServerNet connections.
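Because rack offsets are expressed in U (1U is 1.75 inches) against a 42U rack, it is easy to check how a planned stack of enclosures lands in the rack. The Python sketch below is a planning illustration only; the enclosure heights in the example are typical values, and the helper function is ours, not an HP tool.

```python
U_INCHES = 1.75          # one U, per the terminology above
RACK_HEIGHT_U = 42       # modular cabinet rack height

def stack_offsets(enclosure_heights_u):
    """Return the rack offset (lowest U) for each enclosure stacked from U1 upward."""
    offsets, next_free = [], 1
    for height in enclosure_heights_u:
        if next_free + height - 1 > RACK_HEIGHT_U:
            raise ValueError("configuration does not fit in a 42U rack")
        offsets.append(next_free)
        next_free += height
    return offsets

# Example: a 10U c7000 enclosure, two 2U CLIMs, and a 2U SAS disk enclosure
heights = [10, 2, 2, 2]
print(stack_offsets(heights))    # [1, 11, 13, 15]
print(f"top of stack: U{sum(heights)} of {RACK_HEIGHT_U}U "
      f"({sum(heights) * U_INCHES:.1f} inches)")
```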

On NonStop BladeSystems, locations of the modular components are identified by:
• Physical location: rack number and rack offset
• Logical location: group, module, and slot (GMS) notation, as defined by the component's position on the ServerNet rather than its physical location

OSM uses GMS notation in many places, including the Tree view and Attributes window, and it uses rack and offset information to create displays of the server and its components.

Rack and Offset Physical Location

Rack name and rack offset identify the physical location of components in a NonStop BladeSystem. The rack name is located on an external label affixed to the rack, which includes the system name plus a 2-digit rack number. Rack offset is labeled on the rails in each side of the rack. These rails are measured vertically in units called U, with one U measuring 1.75 inches (44 millimeters). The rack is 42U, with U1 located at the bottom and U42 at the top. The rack offset is the lowest number on the rack that the component occupies.

ServerNet Switch Group-Module-Slot Numbering

Group (100-101):
• Group 100 is the first c7000 processor enclosure, containing logical sequential processors 0-7 or even processors 0, 2, 4, 6, 8, 10, 12, and 14.
• Group 101 is the second c7000 processor enclosure, containing logical sequential processors 8-15 or odd processors 1, 3, 5, 7, 9, 11, 13, and 15.

Module (2-3):
• Module 2 is the X fabric.
• Module 3 is the Y fabric.

Slot (5 or 7):
• Slot 5 contains the double-wide ServerNet switch for the X fabric.
• Slot 7 contains the double-wide ServerNet switch for the Y fabric.

NOTE: There are two types of c7000 ServerNet switches: Standard I/O and High I/O. For more information and illustrations of the ServerNet switch ports, refer to I/O Connections (Standard and High I/O ServerNet Switch Configurations) (page 09).

Port (1-18):
• Ports 1 and 2 support the inter-enclosure links. Port 1 is marked GA; port 2 is marked GB.
• Ports 3 through 8 support the I/O links (IP CLIM, Storage CLIM, Telco CLIM, IB CLIM, IOAM, or NonStop S-series I/O enclosures). If NonStop S-series I/O enclosures are present, CLIMs cannot be connected to port 3 of the ServerNet switches in a c7000 enclosure.

NOTE: IOAMs must use ports 4 through 7. These ports support 4-way IOAM links.

• Ports 9 and 10 support the cross links between the two ServerNet switches in the same enclosure.
• Ports 11 and 12 support the links to a cluster switch. SH on port 11 stands for short haul; LH on port 12 stands for long haul.
• Ports 13 through 18 are supported only if your BladeSystem participates in a BladeCluster and uses the BladeCluster ServerNet switch. Refer to the BladeCluster Solution Manual for information about the BladeCluster Solution.

Fiber (1-4):
• These fibers support up to four ServerNet links on ports 3-8 of the c7000 enclosure ServerNet switch.

NonStop Server Blade Group-Module-Slot Numbering

The next two tables show the default numbering for the NonStop Server Blades of a NonStop BladeSystem when the server blades are powered on and functioning. Depending on your platform configuration, default numbering is either sequential, as shown in the table below, or even/odd, as shown in Even/Odd Processor Numbering.

Sequential Processor Numbering

GMS Numbering for the Logical Processors, Sequential Numbering: Processor ID, Group*, Module, Slot*

*In the OSM Service Connection, the term Enclosure is used for the group and the term Bay is used for the slot.
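The sequential and even/odd schemes described above differ only in how processor IDs map to the two c7000 groups. The Python sketch below captures just that group mapping; the bay (slot) placement within each enclosure is omitted because it depends on the platform configuration, and the function is ours, not part of any HP tool.

```python
def processor_group(processor_id: int, scheme: str = "sequential") -> int:
    """Map a logical processor ID (0-15) to its c7000 group, per the text above."""
    if not 0 <= processor_id <= 15:
        raise ValueError("NonStop BladeSystems support logical processors 0-15")
    if scheme == "sequential":
        # Group 100 holds processors 0-7; group 101 holds processors 8-15
        return 100 if processor_id < 8 else 101
    if scheme == "even/odd":
        # Group 100 holds the even processors; group 101 holds the odd processors
        return 100 if processor_id % 2 == 0 else 101
    raise ValueError("scheme must be 'sequential' or 'even/odd'")

print([processor_group(p) for p in range(16)])              # sequential mapping
print([processor_group(p, "even/odd") for p in range(4)])   # [100, 101, 100, 101]
```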

51 Figure 3 (page 27)shows the sequential processor numbering of server blades in a c7000 enclosure. Even/Odd Processor Numbering GMS Numbering For the Logical Processors Even/Odd Numbering: Processor ID Group* Module Slot* *In the OSM Service Connection, the term Enclosure is used for the group and the term Bay is used for the slot. Figure 2 (page 26) shows the even/odd processor numbering of server blades in a c7000 enclosure. CLIM Enclosure Group-Module-Slot-Port-Fiber Numbering This table shows the valid values for GMSPF numbering for the X ServerNet switch connection point to a CLIM: Group Module Slots Ports Fibers ServerNet switch , 3 5, 7 3 to 8-4 IOAM Enclosure Group-Module-Slot Numbering A NonStop BladeSystem supports IOAM enclosures, identified as group 0 through 5: IOAM Group Module Slot Port Fiber (EA) (EA) (EC) (EC) - 4 Component Location and Identification 5

52 IOAM Group Module Slot Port Fiber (EB) (EB) (ED) (ED) (EA) (EA) (EC) (EC) - 4 IOAM Group X ServerNet Module Y ServerNet Module Slot Item Port 0-5 (See preceding table.) 2 3 to 5 ServerNet adapters - n: where n is number of ports on adapter 4 ServerNet switch logic board - 4 5, 8 Power supplies - 6, 7 Fans - This illustration shows the slot locations for the IOAM enclosure: 52 NonStop BladeSystems Overview

53 Fibre Channel Disk Module Group-Module-Slot Numbering This table shows the default numbering for the Fibre Channel disk module: IOAM Enclosure FCDM Group Module Slot FCSA F-SACs Shelf Slot Item X fabric; 3 - Y fabric - 5, 2-4 if daisy-chained; if single disk enclosure Fibre Channel disk module Disk drive bays Transceiver A 90 Transceiver A2 9 Transceiver B 92 Transceiver B2 93 Left FC-AL board 94 Right FC-AL board 95 Left power supply 96 Right power supply 97 Left blower 98 Right blower 99 EMU The form of the GMS numbering for a disk in a Fibre Channel disk module is: This example shows the disk in bay 03 of the Fibre Channel disk module that connects to the FCSA in the IOAM group, module 2, slot, FSAC : Component Location and Identification 53

System Installation Document Packet

To keep track of the hardware configuration, internal and external communications cabling, IP addresses, and connected networks, assemble and retain an Installation Document Packet as the system's records. This packet can include:
• Technical Document for the Factory-Installed Hardware Configuration (page 54)
• Configuration Forms for CLIMs and the ServerNet Adapters (page 54)

Technical Document for the Factory-Installed Hardware Configuration

Each new NonStop BladeSystem includes a document that describes:
• The cabinet included with the system
• Each hardware enclosure installed in the cabinet
• The cabinet U location of the bottom edge of each enclosure
• Each ServerNet cable, with:
  Source and destination enclosure, component, and connector
  Cable part number
  Source and destination connection labels

This document is called a technical document and serves as the physical location and connection map for the system.

Configuration Forms for CLIMs and the ServerNet Adapters

To add configuration forms for ServerNet adapters or CLIMs to your Installation Document Packet, copy the necessary forms from the adapter manuals or from the Cluster I/O Protocols (CIP) Configuration and Management Manual for the CLIMs to be configured. Follow any planning instructions in these manuals.
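Because the technical document is essentially a location and connection map, some sites keep a machine-readable copy alongside the printed packet. The sketch below shows one way to record the ServerNet cable fields listed above as CSV rows; the field names and example values are illustrative only and are not an HP format.

```python
import csv
import io

# Fields mirror the ServerNet cable entries described in the technical document above
FIELDS = ["source_enclosure", "source_component", "source_connector",
          "dest_enclosure", "dest_component", "dest_connector",
          "cable_part_number", "source_label", "dest_label"]

cables = [
    # Hypothetical example row; real values come from the factory technical document
    {"source_enclosure": "c7000 group 100", "source_component": "ServerNet switch X",
     "source_connector": "port 3", "dest_enclosure": "CLIM at rack offset U13",
     "dest_component": "ServerNet PCIe card", "dest_connector": "fiber 1",
     "cable_part_number": "(from technical document)",
     "source_label": "X-3-1", "dest_label": "CLIM-1-X"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(cables)
print(buf.getvalue())
```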

2 Core Licensing NB54000c and NB54000c-cg

This chapter describes the features and function of the core license file that is required on NonStop BladeSystems NB54000c or NB54000c-cg starting with J06.13 or later RVUs.

Core Licensing Introduction

Starting with the J06.13 RVU, HP introduces a new core licensing feature on BladeSystems NB54000c and NB54000c-cg. This feature allows you to order a system with 2 cores (IPUs) enabled in each processor or with the current default of 4 cores enabled. The number of cores enabled on your system is determined by the core license file. This file has been pre-installed on NB54000c/NB54000c-cg systems since J06.11 and is now utilized on J06.13 or later RVUs. For more information about this required file, refer to Core License File Requirements and Attributes (page 57).

Core Licensing Feature

Core licensing allows you to upgrade a 2-core system to 4 cores at any time in the future by purchasing a new core license file. After installing this file on the system using the new OSM Install Core License File guided procedure (page 56), you issue OSM actions to verify the license file and update the system to all 4 cores enabled. This upgrade does not require any hardware changes or application downtime. Note that an upgrade from 2 cores to 4 cores is permanent. For other scenarios that require a new core license file, refer to When to Order a New Core License File (page 55).

Migration and Expansion: Verifying the Core License File on Pre-J06.13 RVUs

If you are migrating your system from an earlier RVU to J06.13 or later RVUs, or adding processors, a Core License Validation tool (CLICVAL) has been created to verify and ensure that your core license file is valid before you run your system on J06.13 or later RVUs. When to use CLICVAL, and how to order this tool, is described in Core License Validation Tool (CLICVAL) (page 56).

TIP: This manual provides a Task Checklist for Core License Scenarios (page 60) that includes a checklist of core licensing tasks when migrating or adding processors, as well as for other scenarios.

When to Order a New Core License File

You need to order a new core license file when:
• Upgrading your system from 2 cores to 4 cores per processor
• CLICVAL or the OSM Read Core License action reports a problem with your license file
• The information displayed by CLICVAL or OSM does not match the number of cores you believe your system should be running, or does not match the number of processors you have on your system
• Adding new processors to a BladeSystem NB54000c or NB54000c-cg
• Migrating a BladeSystem NB50000c to NB54000c, or NB50000c-cg to NB54000c-cg

NOTE: If your system is running a pre-J06.13 RVU, ordering a new core license file for the above cases is not required, but it is recommended, and it will be required before upgrading the system to a J06.13 or later RVU.

NOTE: If your core license file is missing or invalid, refer to What to do if the Core License File is Missing or Invalid (page 58).

Ordering the License File for NonStop BladeSystems NB54000c or NB54000c-cg

You can order the license file from your sales representative, or you can request it yourself by sending a request via email to license.manager@hp.com. Provide this information:
• Subject line: Request for new core license file for <customer-name>
• Content:
  Name and phone contact information
  System type
  System serial number
  Number of cores to be enabled per processor (for example, 4 cores)
  Total number of processors on the system

The license manager help desk verifies that this information matches HP's records. A new license file is returned by email, along with instructions on how to install the file on pre-J06.13 systems. Allow 24 hours for a response from license.manager@hp.com.

OSM Install Core License File guided procedure

NOTE: If you need to verify the core license file on a pre-J06.13 RVU, refer to Core License Validation Tool (CLICVAL) (page 56).

Starting with J06.13 or later RVUs, the OSM Service Connection provides a new OSM Install Core License File guided procedure with OSM actions for installing, verifying, and updating the core license file. This procedure can only be launched by a super-group user and includes online help.
• Install Core License File: use this OSM action to install a core license file.
• Read Core License: use this OSM action to read and verify the contents of the core license file.

For more information about OSM actions related to the core license file, refer to the OSM Service Connection online help. For more information about using the guided procedure and the related OSM actions, refer to the help within the procedure.

Core License Validation Tool (CLICVAL)

The new Core License Validation tool (CLICVAL) can be used to verify and ensure that your core license file is correct before migrating your system from an earlier RVU to J06.13 or later RVUs. HP recommends using CLICVAL whenever you:
• Migrate an NB50000c or NB50000c-cg system to an NB54000c or NB54000c-cg system running J06.13 or later RVUs
• Migrate an NB54000c or NB54000c-cg system to J06.13 or later RVUs
• Add processors to an NB54000c or NB54000c-cg system and plan to migrate this system to J06.13 or later RVUs

To validate a core license file using CLICVAL, run CLICVAL at a TACL prompt. To obtain the CLICVAL tool and instructions for its use, email license.manager@hp.com.

To facilitate processing, the subject line of the email should be Request for CLICVAL, and the email should contain:
• Your name and phone number
• The email address where you want HP to email the CLICVAL tool
• The system serial number and the current RVU running on the system

The license manager help desk will email the CLICVAL tool. Allow 24 hours for a response from license.manager@hp.com.

TIP: CLICVAL is also available on the J06.13 SUT as part of the T9050 product.

After migrating to J06.13, HP recommends that you use the OSM action Read Core License to verify that the core license is valid. For more information on using the CLICVAL tool, refer to the TACL Reference Manual.

Core License File Requirements and Attributes

Requirements of the Core License File
• Non-transferable to other systems
• Generated only by HP
• Installed in the $SYSTEM.ZLICENSE subvolume and named either VLICNEW or VLICENSE
• File type 407
• Secured for network read: NGGG

IMPORTANT: Your core license file should be regularly backed up with the other files on $SYSTEM.

Contents of the Core License File

Each core license file contains:
• The system serial number of the system that is being licensed
• The number of processors enabled to run on that system
• The number of cores that are enabled to run in all processors in the system (only 2-core or 4-core enablement is supported)

For information on displaying the core license file's information, refer to Displaying Processor Core and Licensing Information (page 58).
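Conceptually, the core license file binds three values together. The following Python sketch models those contents for planning purposes only; it is not the actual on-disk format of VLICNEW or VLICENSE, which is generated only by HP, and the class and check are ours.

```python
from dataclasses import dataclass

@dataclass
class CoreLicenseInfo:
    """Conceptual view of the values a core license file carries (see Contents above)."""
    system_serial_number: str
    processors_enabled: int     # number of processors enabled to run on the system
    cores_enabled: int          # cores enabled per processor; only 2 or 4 is supported

    def check(self) -> None:
        if self.cores_enabled not in (2, 4):
            raise ValueError("only 2-core or 4-core enablement is supported")
        if not 2 <= self.processors_enabled <= 16:
            raise ValueError("a BladeSystem runs 2 to 16 logical processors")

# Example values only; the real file is ordered from HP as described above
info = CoreLicenseInfo("1234567", processors_enabled=8, cores_enabled=4)
info.check()
print(info)
```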

Displaying Processor Core and Licensing Information

To display information on how many cores are physically present and enabled, use the OSM Service Connection:
1. Locate and select a Logical Processor object.
2. In the Attributes tab, under the Logical heading, locate the following attributes, which are displayed for NonStop BladeSystems NB54000c and NB54000c-cg only:
   • Physical Cores -- this attribute displays the number of cores physically available for this Logical Processor
   • Number of Enabled Cores -- this attribute displays the number of licensed cores for this Logical Processor

For more information on displaying attributes using OSM, refer to the online help within the OSM Service Connection or the NonStop Operations Guide for H- and J-Series RVUs. To programmatically obtain the number of cores available and enabled on your system, refer to attributes 74 and 81 of the PROCESSOR_GETINFOLIST_ procedure in the Guardian Procedure Calls Reference Manual.

What to do if the Core License File is Missing or Invalid

If the core license file is missing, cannot be read, or is otherwise invalid, the following behavior occurs when the system is cold loaded running J06.13 or later:
• The system comes up and defaults to running 4 cores, and allows all processors to load.
• EMS events and OSM alarms and dial-outs report that the core license of the system is missing, corrupt, or invalid.

To correct this problem, you must:
1. Order and install a new license file (or retrieve the correct file from backup). To order, refer to Ordering the License File for NonStop BladeSystems NB54000c or NB54000c-cg (page 56). To install, refer to OSM Install Core License File guided procedure (page 56).
2. After ordering and installing the correct license file, see one of the following, based on the type of license file purchased:
   • 2-Core License File is Missing or Invalid at Coldload (page 58)
   • 4-Core License File is Missing or Invalid at Coldload (page 59)

2-Core License File is Missing or Invalid at Coldload

For 2-core enabled systems, another coldload is required to correct this problem. After installing a corrected core license file and completing the coldload, the system alarm messages should go away.

IMPORTANT: Until the missing or corrupted license file is replaced, your 2-core system runs at 4-core enablement. This is a violation of your software license agreement with HP. If you cannot perform a coldload within 30 days to correct this, contact your HP sales representative to discuss arrangements.

4-Core License File is Missing or Invalid at Coldload

In this case, with the J06.13 RVU running on the system, installing a corrected 4-core license file should eliminate the events, alarms, and dial-outs. Any processors that failed to start should be reloaded.

NOTE: If the core license file was correctly installed and valid at the last coldload, but has been inadvertently purged or renamed since, the system continues to operate with the previously enabled values until the next time you coldload. In this case, you'll need to make sure you restore the license file from backup or obtain a new license file from HP prior to your next coldload.

What to do if Processors Fail During Coldload with %70 Halt Code

If processors have been added to a NonStop BladeSystem NB54000c or NB54000c-cg since the system was purchased and these new processors are not included in the license file, the new processors may fail to reload and can halt with %70. If this occurs, with the J06.13 RVU running on the system, install a corrected core license file to enable the additional processors. Any processors that failed to start can then be reloaded.

Task Checklist for Core License Scenarios

This checklist assists you in managing your core license file tasks. Select the scenario that applies to your core license file.

Table 3 Task Checklist for Core License Scenarios

Upgrading a 2-Core System to a 4-Core System
• After purchasing the upgrade to 4 cores, obtain a new core license file from the HP license manager. For more information, refer to Ordering the License File for NonStop BladeSystems NB54000c or NB54000c-cg (page 56) and Core License File Requirements and Attributes (page 57).
• Install the license file on your system by launching the OSM Install Core License File guided procedure in the OSM Service Connection. Ensure that Enable license file after transfer is checked. For more information, refer to the OSM Install Core License File guided procedure (page 56) and the help within the procedure.
• Verify the core license file using the OSM Read Core License action.

Migrating NB50000c/NB50000c-cg to NB54000c/NB54000c-cg
• Obtain the license file and the CLICVAL tool. For more information, refer to Ordering the License File for NonStop BladeSystems NB54000c or NB54000c-cg (page 56), Core License Validation Tool (CLICVAL) (page 56), and Core License File Requirements and Attributes (page 57).
• Before migrating to J06.13, your service provider should use the migration service procedure. This step-by-step procedure describes when to use the CLICVAL tool to verify that the core license file is valid, as well as how to perform other migration tasks. Refer your service provider to the Migrating from NB50000c to NB54000c or NB50000c-cg to NB54000c-cg service procedure.
• After migrating to J06.13, HP recommends that you use the OSM Read Core License action to verify that the core license is valid. For more information, refer to the OSM Install Core License File guided procedure (page 56) and the online help within the procedure.

Adding Processors to Your NB54000c/NB54000c-cg System
• Obtain and install the core license file for the new processors. For more information, refer to Ordering the License File for NonStop BladeSystems NB54000c or NB54000c-cg (page 56), Core License Validation Tool (CLICVAL) (page 56), and Core License File Requirements and Attributes (page 57).

Table 3 Task Checklist for Core License Scenarios (continued)

Adding Processors to Your NB54000c/NB54000c-cg System (continued)
• If the new processors will be on a system that is migrating to J06.13, use the CLICVAL tool before migrating to verify that the core license file is valid. For more information, refer to Core License Validation Tool (CLICVAL) (page 56) and Core License File Requirements and Attributes (page 57).
• Your service provider should use the service procedure for adding processors. Refer your service provider to the Replacing a FRU in a NonStop BladeSystem NB54000c or NB54000c-cg Server Blade or Adding Memory or a Server Blade service procedure.

Moving to the J06.13 or later RVU on an NB54000c/NB54000c-cg running a prior RVU
• If the core license file is not present on the system, obtain a new license file. For more information, refer to Ordering the License File for NonStop BladeSystems NB54000c or NB54000c-cg (page 56) and Core License File Requirements and Attributes (page 57).
• Obtain the CLICVAL tool. Refer to Core License Validation Tool (CLICVAL) (page 56).
• Before migrating to J06.13, use the CLICVAL tool to verify that the core license file is valid. The Software Installation and Interactive Upgrade Guide for this RVU provides a checklist that includes when to use CLICVAL during an upgrade. Refer to the Upgrade Checklist in the J06.13 Software Installation and Interactive Upgrade Guide.
• After migrating to J06.13, HP recommends that you use the OSM Read Core License action to verify that the core license is valid. Refer to the help within the OSM Install Core License File guided procedure (page 56).

3 Site Preparation Guidelines NB50000c and NB54000c

This section describes power, environmental, and space considerations for your site.

Modular Cabinet Power and I/O Cable Entry

Power and I/O cables can enter the NonStop BladeSystem from either the top or the bottom rear of the modular cabinets, depending on how the cabinets are ordered from HP and the routing of the AC power feeds at the site. NonStop BladeSystem cabinets can be ordered with the AC power cords for the PDUs exiting either:
• Top: power and I/O cables are routed from above the modular cabinet.
• Bottom: power and I/O cables are routed from below the modular cabinet.

For information about modular cabinet power and cable options, refer to Enclosure AC Input G2 Rack (page 252).

Emergency Power-Off Switches

Emergency power-off (EPO) switches are required by local codes or other applicable regulations when computer equipment contains batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes. Systems that have these batteries also have internal EPO hardware for connection to a site EPO switch or relay. In an emergency, activating the EPO switch or relay removes power from all electrical equipment in the computer room (except that used for lighting and fire-related sensors and alarms).

EPO Requirement for NonStop BladeSystems

NonStop BladeSystems without an optional UPS (such as an HP R12000/3, HP R5000, or HP R5500 XR UPS) installed in the modular cabinet do not contain batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes, so they do not require connection to a site EPO switch.

EPO Requirement for HP R5000 and R5500 XR UPS

NOTE: For a single-phase power configuration, two UPSs are required; you use either two R5000 UPSs or two R5500 XR UPSs.

Either the rack-mounted HP R5000 or the R5500 XR UPS is supported for a single-phase power configuration. Each UPS contains batteries, has an EPO circuit, and can be optionally installed in a modular cabinet. For site EPO switches or relays, consult your HP site preparation specialist or electrical engineer regarding requirements. If an EPO switch or relay connector is required for your site, contact your HP representative or refer to the manual for your UPS for connectors and wiring for the UPS. Depending on your UPS, refer to either:
• HP UPS R5500 User Guide, located at:
• HP UPS R5000 User Guide, located at:

EPO Requirement for HP R12000/3 UPS

The rack-mounted HP R12000/3 UPS is supported for a three-phase power configuration. This UPS contains batteries, has a remote EPO (REPO) port, and can be optionally installed in a modular cabinet. For site EPO switches or relays, consult your HP site preparation specialist or electrical engineer regarding requirements. If an EPO switch or relay connector is required for your site, contact your HP representative or refer to the HP 3 Phase UPS User Guide for connectors and wiring for the HP R12000/3 UPS. This guide is available at:

For information about the R12000/3 UPS's management module, refer to the HP UPS Management Module User Guide, located at:

Electrical Power and Grounding Quality

Proper design and installation of a power distribution system for a NonStop BladeSystem requires specialized skills, knowledge, and understanding of the appropriate electrical codes and of the limitations of power systems for computer and data processing equipment. For power and grounding specifications, refer to Enclosure AC Input (page 93).

Power Quality

This equipment is designed to operate reliably over a wide range of voltages and frequencies, described in Enclosure AC Input (page 93). However, damage can occur if these ranges are exceeded. Severe electrical disturbances can exceed the design specifications of the equipment. Common sources of such disturbances are:
• Fluctuations occurring within the facility's distribution system
• Utility service low-voltage conditions (such as sags or brownouts)
• Wide and rapid variations in input voltage levels
• Wide and rapid variations in input power frequency
• Electrical storms
• Large inductive sources (such as motors and welders)
• Faults in the distribution system wiring (such as loose connections)

Computer systems can be protected from the sources of many of these electrical disturbances by using:
• A dedicated power distribution system
• Power conditioning equipment
• Lightning arresters on power cables to protect equipment against electrical storms

For steps to take to ensure proper power for the servers, consult with your HP site preparation specialist or power engineer.

Grounding Systems

The site building must provide a power distribution safety ground/protective earth for each AC service entrance to all NonStop BladeSystem equipment. This safety grounding system must comply with local codes and any other applicable regulations for the installation locale. For proper grounding/protective earth connection, consult with your HP site preparation specialist or power engineer.

Power Consumption

In a NonStop BladeSystem, the power consumption and inrush currents per connection can vary because of the unique combination of enclosures housed in the modular cabinet. The total power consumption for the hardware installed in the cabinet should therefore be calculated as described in Enclosure Power Loads (page 94).

Uninterruptible Power Supply (UPS)

NOTE: HP supports the HP models R5000 UPS and R5500 XR UPS for a single-phase power configuration and the HP model R12000/3 UPS for a three-phase power configuration.

Modular cabinets do not have built-in batteries to provide power during power failures. To support system operation and ride-through during a power failure, NonStop BladeSystems require either an optional UPS installed in each modular cabinet or a site UPS. This system operation support can include a planned, orderly shutdown at a predetermined time in the event of an extended power failure. A timely and orderly shutdown prevents an uncontrolled and asymmetric shutdown of the system resources from depleted UPS batteries. For additional information, refer to AC Power Monitoring (page 94).

NOTE: Retrofitting a system in the field with a UPS and ERMs will likely require moving all installed enclosures in the rack to provide space for the new hardware. One or more of the enclosures that formerly resided in the rack might be displaced and therefore have to be installed in another rack, which would also need a UPS and ERMs installed. Additionally, lifting equipment might be required to lift heavy enclosures to their new locations.

For information and specifications on the R5000 UPS and R5500 XR UPS, which are supported for a single-phase power configuration, refer to System Installation Specifications NB50000c and NB54000c (page 68) and to the manual for your UPS:

R5000 UPS manuals:
• HP UPS R5000 User Guide, located at:
• HP UPS Network Module User Guide, located at:

R5500 XR UPS manuals:
• HP UPS R5500 User Guide, located at:
• HP UPS Management Module User Guide, located at:

R12000/3 UPS manuals: for information and specifications on the R12000/3 UPS and the UPS management module, which are supported for a three-phase power configuration, refer to System Installation Specifications NB50000c and NB54000c (page 68) and to these manuals:
• HP 3 Phase UPS User Guide, located at:
• HP UPS Management Module User Guide, located at:

If you install a UPS other than the HP model R5000 UPS, R5500 XR UPS, or the HP model R12000/3 UPS in each modular cabinet of a NonStop BladeSystem, these requirements must be met to ensure that the system can survive a total AC power failure:
• The UPS output voltage can support the HP PDU input voltage requirements.
• The UPS phase output matches the PDU phase input. For details, refer to System Installation Specifications NB50000c and NB54000c (page 68).
• The UPS output can support the targeted system in the event of an AC power failure. Calculate each cabinet load to ensure the UPS can support a proper ride-through time in the event of a total AC power failure. For more information, refer to Enclosure Power Loads (page 94).

NOTE: A UPS other than the HP model R5000, R5500 XR, or R12000/3 will not be able to utilize the power fail support of the Configure a Power Source as UPS OSM action.

If your applications require a UPS that supports the entire system, or even a UPS or motor generator for all computer and support equipment in the site, you must plan the site's electrical infrastructure accordingly.

Cooling and Humidity Control

Do not rely on an intuitive approach to design cooling, or on simply achieving an energy balance (that is, summing the total power dissipation from all the hardware and sizing a comparable air conditioning capacity). Today's high-performance NonStop BladeSystems use semiconductors that integrate multiple functions on a single chip with very high power densities. These chips, plus high-power-density mass storage and power supplies, are mounted in ultra-thin system and storage enclosures and then deployed into computer racks in large numbers. This higher concentration of devices results in localized heat, which increases the potential for hot spots that can damage the equipment. Additionally, variables in the installation site layout can adversely affect air flows and create hot spots by allowing hot and cool air streams to mix. Studies have shown that above 70°F (21°C), every increase of 18°F (10°C) reduces long-term electronics reliability by 50%.

Cooling airflow through each enclosure in the NonStop BladeSystem is front-to-back. Because of high heat densities and hot spots, an accurate assessment of air flow around and through the system equipment, and a specialized cooling design, are essential for reliable system operation. For an airflow assessment, consult with your HP cooling consultant or your heating, ventilation, and air conditioning (HVAC) engineer.

NOTE: Failure of site cooling while the NonStop BladeSystem continues to run can cause rapid heat buildup and excessive temperatures within the hardware. Excessive internal temperatures can result in full or partial system shutdown. Ensure that the site's cooling system remains fully operational when the NonStop BladeSystem is running.

Because each modular cabinet houses a unique combination of enclosures, use the Heat Dissipation Specifications and Worksheet NB50000c (page 02) or the Heat Dissipation Specifications and Worksheet NB54000c (page 02) to calculate the total heat dissipation for the hardware installed in each cabinet. For air temperature levels at the site, refer to Operating Temperature, Humidity, and Altitude (page 03).
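As a rough companion to the heat dissipation worksheet, the total electrical load of a cabinet can be converted to heat output and, if a rack UPS is used, to an approximate ride-through figure. The Python sketch below uses placeholder wattages and battery capacity purely for illustration; real values come from the worksheets and UPS specifications referenced above, and the helpers are ours, not HP tools.

```python
WATTS_TO_BTU_PER_HR = 3.412   # 1 W of electrical load dissipates about 3.412 BTU/hr

def cabinet_heat_btu_per_hr(enclosure_watts):
    """Sum per-enclosure loads (W) and convert the total to heat output (BTU/hr)."""
    total_w = sum(enclosure_watts)
    return total_w, total_w * WATTS_TO_BTU_PER_HR

def ride_through_minutes(total_watts: float, usable_battery_wh: float) -> float:
    """Very rough UPS ride-through estimate: usable battery energy divided by load."""
    return 60.0 * usable_battery_wh / total_watts

# Illustrative only: one c7000, two CLIMs, one SAS disk enclosure (placeholder watt figures)
loads = [4500, 300, 300, 250]
total_w, btu_hr = cabinet_heat_btu_per_hr(loads)
print(f"load: {total_w} W, heat: {btu_hr:.0f} BTU/hr")
print(f"ride-through at this load with 2000 Wh usable battery: "
      f"{ride_through_minutes(total_w, 2000):.1f} min")
```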

66 For air temperature levels at the site, refer to Operating Temperature, Humidity, and Altitude (page 103).

Weight

Because modular cabinets for NonStop BladeSystems house a unique combination of enclosures, total weight must be calculated based on what is in the specific cabinet, as described in Modular Cabinet and Enclosure Weights With Worksheet (page 99).

Flooring

NonStop BladeSystems can be installed either on the site's floor with the cables entering from above the equipment or on raised flooring with power and I/O cables entering from underneath. Because cooling airflow through each enclosure in the modular cabinets is front-to-back, raised flooring is not required for system cooling.

The site floor structure and any raised flooring (if used) must be able to support the total weight of the installed computer system as well as the weight of the individual modular cabinets and their enclosures as they are moved into position. To determine the total weight of each modular cabinet with its installed enclosures, refer to Modular Cabinet and Enclosure Weights With Worksheet (page 99). For your site's floor system, consult with your HP site preparation specialist or an appropriate floor system engineer. If raised flooring is to be used, the design of the NonStop BladeSystem modular cabinet is optimized for placement on 24-inch floor panels.

Dust and Pollution Control

NonStop BladeSystems do not have air filters. Any computer equipment can be adversely affected by dust and microscopic particles in the site environment. Airborne dust can blanket electronic components on printed circuit boards, inhibiting cooling airflow and causing premature failure from excess heat, humidity, or both. Metallically conductive particles can short-circuit electronic components. Tape drives and some other mechanical devices can experience failures resulting from airborne abrasive particles. For recommendations to keep the site as free of dust and pollution as possible, consult with your heating, ventilation, and air conditioning (HVAC) engineer or your HP site preparation specialist.

Zinc Particulates

Over time, fine whiskers of pure metal can form on electroplated zinc, cadmium, or tin surfaces such as aged raised flooring panels and supports. If these whiskers are disturbed, they can break off and become airborne, possibly causing computer failures or operational interruptions. This metallic particulate contamination is a relatively rare but possible threat. Kits are available to test for metallic particulate contamination, or you can request that your site preparation specialist or HVAC engineer test the site for contamination before installing any electronic equipment.

Space for Receiving and Unpacking the System

Identify areas that are large enough to receive and to unpack the system from its shipping cartons and pallets. Be sure to allow adequate space to remove the system equipment from the shipping pallets using the supplied ramps. Also be sure adequate personnel are present to remove each cabinet from its shipping pallet and to safely move it to the installation site.

WARNING! A fully populated cabinet is unstable when moving down the unloading ramp from its shipping pallet. Arrange for enough personnel to stabilize each cabinet during removal from the pallet and to prevent the cabinet from falling. A falling cabinet can cause serious or fatal personal injury.

67 Ensure sufficient pathways and clearances for moving the NonStop BladeSystem equipment safely from the receiving and unpacking areas to the installation site. Verify that door and hallway width and height as well as floor and elevator loading will accommodate not only the system equipment but also all required personnel and lifting or moving devices. If necessary, enlarge or remove any obstructing doorway or wall. All modular cabinets have small casters to facilitate moving them on hard flooring from the unpacking area to the site. Because of these small casters, rolling modular cabinets along carpeted or tiled pathways might be difficult. If necessary, plan for a temporary hard floor covering in affected pathways for easier movement of the equipment. For physical dimensions of the NonStop BladeSystem equipment, refer to Dimensions and Weights (page 95). Operational Space When planning the layout of the NonStop BladeSystem site, use the equipment dimensions, door swing, and service clearances listed in Dimensions and Weights (page 95). Because location of the lighting fixtures and electrical outlets affects servicing operations, consider an equipment layout that takes advantage of existing lighting and electrical outlets. Also consider the location and orientation of current or future air conditioning ducts and airflow direction and eliminate any obstructions to equipment intake or exhaust air flow. Refer to Cooling and Humidity Control (page 65). Space planning should also include the possible addition of equipment or other changes in space requirements. Depending on the current or future equipment installed at your site, layout plans can also include provisions for: Channels or fixtures used for routing data cables and power cables Access to air conditioning ducts, filters, lighting, and electrical power hardware Communications cables, patch panels, and switch equipment Power conditioning equipment Storage area or cabinets for supplies, media, and spare parts Operational Space 67

68 4 System Installation Specifications NB50000c and NB54000c

This section provides specifications necessary for system installation planning.

NOTE: All specifications provided in this section assume that each enclosure in the modular cabinet is fully populated. The maximum current for each AC service depends on the number and type of enclosures installed in the modular cabinet. Power, weight, and heat loads are less when enclosures are not fully populated; for example, a Fibre Channel disk module with fewer disks.

Modular Cabinets

The modular cabinet is an EIA standard 19-inch, 42U rack for mounting modular components. The modular cabinet comes equipped with front and rear doors and includes a rear extension that makes it deeper than some industry-standard racks. The PDUs described in Power Distribution Unit (PDU) Types for an Intelligent Rack (page 69) are mounted along the rear extension without occupying any U-space in the cabinet and are oriented inward, facing the components within the rack.

NOTE: For instructions on grounding the modular cabinet's G2 rack using the HP Rack Grounding Kit (AF074A), ask your service provider to refer to the instructions in the HP 10000 G2 Series Rack Options Installation Guide. This manual is available at:

NOTE: For instructions on grounding the modular cabinet's Intelligent rack using the HP Intelligent Rack Ground Bonding Kit (BW89A), ask your service provider to refer to the instructions in the HP Intelligent Rack Family Options Installation Guide. This manual is available at:

Power Distribution for NonStop BladeSystems in Intelligent Racks

NOTE: This section describes power distribution for NonStop NB50000c and NB54000c systems in the Intelligent rack. For information on earlier power configurations used in the G2 rack, see Power Configurations for NonStop BladeSystems in G2 Racks (page 120).

This subsection describes these power distribution topics:
Power Distribution Unit (PDU) Types for an Intelligent Rack (page 69)
AC Power Feeds in an Intelligent Rack (page 83)

69 Power Distribution Unit (PDU) Types for an Intelligent Rack

Two types of PDUs are available for the Intelligent rack:
Intelligent PDU
Modular PDU

Both PDU types use a core and extension bar design with these characteristics:
PDU cores power the extension bars and the c7000 enclosure.
PDU cores are mounted at the lowest possible U location in the rack. Two PDUs are mounted in the same U location (rear and front).
Extension bars are mounted on the rear vertical rails of the rack. Rear-mounted PDU cores connect to the extension bars on the right side of the rack. Front-mounted PDU cores connect to the extension bars on the left side of the rack.
If the rack is equipped with a UPS, the UPS outputs connect to the front-mounted PDU cores.

NOTE: An Intelligent rack with a c7000 enclosure requires a 4 PDU core configuration. All Intelligent racks without a c7000 enclosure use a 2 PDU core configuration.

An Intelligent rack supports the following PDU configurations for North America/Japan (NA/JPN) and International (INTL) regions:

Intelligent PDU, single-phase: 2 PDU cores without UPS, 2 PDU cores with UPS, 4 PDU cores without UPS, 4 PDU cores with UPS
Intelligent PDU, three-phase: 2 PDU cores without UPS, 2 PDU cores with UPS, 4 PDU cores without UPS, 4 PDU cores with UPS
Modular PDU, single-phase: 2 PDU cores without UPS, 2 PDU cores with UPS, 4 PDU cores without UPS, 4 PDU cores with UPS
Modular PDU, three-phase: 2 PDU cores without UPS, 2 PDU cores with UPS, 4 PDU cores without UPS, 4 PDU cores with UPS
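The following minimal Python sketch simply restates the PDU-core rule in the NOTE above as a configuration check (a rack containing a c7000 enclosure needs 4 PDU cores; a rack without one uses 2). It is illustrative only, and the function names are invented for this example.

```python
# A minimal sketch of the PDU-core rule described above: an Intelligent rack
# that contains a c7000 enclosure requires a 4-PDU-core configuration; racks
# without a c7000 enclosure use 2 PDU cores. Names below are illustrative.

def required_pdu_cores(has_c7000: bool) -> int:
    return 4 if has_c7000 else 2

def check_rack(has_c7000: bool, pdu_cores: int) -> None:
    needed = required_pdu_cores(has_c7000)
    if pdu_cores != needed:
        raise ValueError(f"Rack with c7000={has_c7000} needs {needed} PDU cores, "
                         f"but {pdu_cores} are configured")

check_rack(has_c7000=True, pdu_cores=4)    # OK
check_rack(has_c7000=False, pdu_cores=2)   # OK
```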

70 Illustrations of Supported PDU Configurations for an Intelligent Rack

The following illustrations show the connections between the PDU cores and the extension bars for the supported PDU configurations.

iPDU Connections in an Intelligent Rack
Four Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) (page 71)
Four Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL) (page 72)
Four Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL) (page 73)
Two Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) (page 74)
Two Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL) (page 75)
Two Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL) (page 76)

Modular PDU Connections in an Intelligent Rack
Four Modular PDUs Without UPS (NA and JPN, Single-Phase and Three-Phase) (page 77)
Four Modular PDUs With Single-Phase UPS (NA/JPN and INTL) (page 78)
Four Modular PDUs With Three-Phase UPS (NA/JPN and INTL) (page 79)
Two Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) (page 80)
Two Modular PDU Connections With Single-Phase UPS (NA/JPN and INTL) (page 81)
Two Modular PDU Connections With Three-Phase UPS (NA/JPN and INTL) (page 82)

71 Four Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) This illustration shows the power configuration for 4 ipdus (without UPS) in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 7 Four ipdus Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) Power Distribution for NonStop BladeSystems in Intelligent Racks 7

72 Four Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 4 ipdus and 2 single-phase UPS's in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 8 Four ipdus With Single-Phase UPS (NA/JPN and INTL) 72 System Installation Specifications NB50000c and NB54000c

73 Four Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 4 iPDUs and a three-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 9 Four iPDUs With Three-Phase UPS (NA/JPN and INTL)

74 Two Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) This illustration shows the connections for two iPDUs in an Intelligent rack without a UPS. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 10 Two Intelligent PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)

75 Two Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 2 iPDUs and a single-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 11 Two Intelligent PDUs With Single-Phase UPS (NA/JPN and INTL)

76 Two Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 2 iPDUs and a three-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 12 Two Intelligent PDUs With Three-Phase UPS (NA/JPN and INTL)

77 Four Modular PDUs Without UPS (NA and JPN, Single-Phase and Three-Phase) This illustration shows the power configuration for four modular PDUs in an Intelligent rack without a UPS. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 13 Four Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)

78 Four Modular PDUs With Single-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 4 modular PDUs and 2 single-phase UPS's in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 14 Four Modular PDUs With Single-Phase UPS (NA/JPN and INTL)

79 Four Modular PDUs With Three-Phase UPS (NA/JPN and INTL) This illustration shows the power configuration for 4 modular PDUs and a three-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 15 Four Modular PDUs With Three-Phase UPS (NA/JPN and INTL)

80 Two Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase) This illustration shows the power configuration for 2 modular PDUs without a UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 16 Two Modular PDUs Without UPS (NA/JPN and INTL, Single-Phase and Three-Phase)

81 Two Modular PDU Connections With Single-Phase UPS (NA/JPN and INTL) This illustration shows the connections for 2 modular PDUs with a single-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 17 Two Modular PDUs With a Single-Phase UPS (NA/JPN and INTL)

82 Two Modular PDU Connections With Three-Phase UPS (NA/JPN and INTL) This illustration shows the connections for 2 modular PDUs with a three-phase UPS in an Intelligent rack. For detailed power specifications and connector types, refer to Power Specification Tables Intelligent Rack (page 90). Figure 18 Two Modular PDUs With a Three-Phase UPS (NA/JPN and INTL)

83 AC Power Feeds in an Intelligent Rack Power can enter the NonStop BladeSystem from either the top or the bottom rear of the modular cabinets, depending on how the cabinets are ordered from HP and how the AC power feeds are routed at the site. NonStop BladeSystems can be ordered with the AC power cords for the PDU installed either: Top: Power and I/O cables are routed from above the modular cabinet. Bottom: Power and I/O cables are routed from below the modular cabinet NOTE: The example power feed illustrations on the following pages show the connections to two PDUs and one UPS. If you have a power configuration with four PDUs and two UPS's, you will need to make additional connections. Here are some typical power feed configurations for a NonStop BladeSystem: AC Power Feeds in an Intelligent Rack (Without UPS) Example of Bottom AC Power Feed in an Intelligent Rack (Without UPS) (page 84) Example of Top AC Power Feed in an Intelligent Rack (Without UPS) (page 85) AC Power Feeds in an Intelligent Rack (With Single-Phase UPS) Example of Top AC Power Feed in an Intelligent Rack (With Single-Phase UPS) (page 86) Example of Bottom AC Power Feed in an Intelligent Rack (With Single-Phase UPS) (page 87) AC Power Feeds in an Intelligent Rack (With Three-Phase UPS) Example of Top AC Power Feed in an Intelligent Rack (With Three-Phase UPS) (page 88) Example of Bottom AC Power Feed in an Intelligent Rack (With Three-Phase UPS) (page 89) Power Distribution for NonStop BladeSystems in Intelligent Racks 83

84 Figure 19 Example of Bottom AC Power Feed in an Intelligent Rack (Without UPS)

85 Figure 20 Example of Top AC Power Feed in an Intelligent Rack (Without UPS) Power Distribution for NonStop BladeSystems in Intelligent Racks 85

86 Figure 21 Example of Top AC Power Feed in an Intelligent Rack (With Single-Phase UPS)

87 Figure 22 Example of Bottom AC Power Feed in an Intelligent Rack (With Single-Phase UPS) Power Distribution for NonStop BladeSystems in Intelligent Racks 87

88 Figure 23 Example of Top AC Power Feed in an Intelligent Rack (With Three-Phase UPS) 88 System Installation Specifications NB50000c and NB54000c

89 Figure 24 Example of Bottom AC Power Feed in an Intelligent Rack (With Three-Phase UPS) Each PDU is wired to distribute the load segments to its receptacles. Power Distribution for NonStop BladeSystems in Intelligent Racks 89

90 AC Input Power for Modular Cabinets Intelligent Rack

This subsection provides power specifications for AC input power in NonStop BladeSystem Intelligent rack modular cabinets.

Power Specification Tables Intelligent Rack

Table 4 (page 90) provides the single-phase power specifications and connector types for NA/JPN regions.
Table 5 (page 91) provides the three-phase power specifications and connector types for NA/JPN regions.
Table 6 (page 91) provides the single-phase power specifications and connector types for INTL regions.
Table 7 (page 92) provides the three-phase power specifications and connector types for INTL regions.

CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes an optional rack-mounted UPS.

Table 4 North America/Japan Single-Phase Power Specifications
R5000 1-phase UPS / iPDU 1-phase / Modular PDU 1-phase
Output Load: 4500 W / 24 A / 24 A
Input Voltage: V / V / V
Input Connector: NEMA L6-30P / NEMA L6-30P / NEMA L6-30P
Output Voltage: V / N/A / N/A
Output Connectors: 1 x L6-30R, 4 x C19, 4 x C13 / 6 x C19 (20 x C13) / 4 x C19 (28 x C13)
Notes: UPS outputs are connected to the compatible PDU inputs.

91 Table 5 North America/Japan Three-Phase Power Specifications
R12000/3 3-phase UPS / iPDU 3-phase / Modular PDU 3-phase
Output Load: 12 kW / 24 A / 24 A
Input Voltage: 208V 3P Wye / 208V 3P Delta / 208V 3P Delta
Input Connector: IEC P9 / NEMA L15-30P / NEMA L15-30P
Output Voltage: 208V 3P Delta / N/A / N/A
Output Connectors: 2 x NEMA L15-30R / 6 x C19 (20 x C13) / 6 x C19 (42 x C13)
Notes: UPS outputs are connected to the compatible PDU inputs.

Table 6 International Single-Phase Power Specifications
R5000 1-phase UPS / iPDU 1-phase / Modular PDU 1-phase
Output Load: 4500 W / 32 A / 32 A
Input Voltage: V / V / V
Input Connector: IEC P6 (32 A) / IEC P6 (32 A) / IEC P6 (32 A)
Output Voltage: V / N/A / N/A
Output Connectors: 1 x IEC R6, 4 x C19, 4 x C13 / 6 x C19 (20 x C13) / 4 x C19 (28 x C13)
Notes: UPS outputs are connected to the compatible PDU inputs.

92 Table 7 International Three-Phase Power Specifications
R12000/3 3-phase UPS / iPDU 3-phase / Modular PDU 3-phase
Output Load: 12 kW / 16 A/phase / 16 A/phase
Input Voltage: V 3P Wye / V 3P Wye / V 3P Wye
Input Connector: IEC P6 (32 A) / IEC309 56P6 (32 A) / IEC309 56P6 (32 A)
Output Voltage: 400V 3P Wye / N/A / N/A
Output Connectors: 2 x IEC C6 / 6 x C19 (20 x C13) / 6 x C19 (20 A) (42 x C13)
Notes: UPS outputs are connected to the compatible PDU inputs.
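As a hedged, illustrative aid to reading the tables above, the Python sketch below converts an apparent-power load into current per AC feed so it can be compared against a PDU input rating (for example, the 24 A and 32 A figures listed for the PDUs). The voltage and load values in the example are placeholders, not specifications.

```python
# Illustrative only: estimate current per AC feed from an apparent-power load
# and compare it against a PDU input rating (for example, the 24 A and 32 A
# ratings listed in the tables above). Voltages and loads are placeholders.

def feed_current(total_va: float, line_voltage: float, phases: int = 1) -> float:
    """Amps for a single-phase feed, or per phase for a balanced three-phase feed."""
    if phases == 1:
        return total_va / line_voltage
    # Balanced three-phase: I = VA / (sqrt(3) * line-to-line voltage)
    return total_va / (3 ** 0.5 * line_voltage)

load_va = 4800                                      # hypothetical load on one feed
amps = feed_current(load_va, line_voltage=208)
print(f"{amps:.1f} A on a 208 V single-phase feed")  # ~23.1 A, inside a 24 A rating
```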

93 Enclosure AC Input NOTE: For instructions on grounding the modular cabinet's G2 rack by using the HP Rack Grounding Kit (AF074A), ask your service provider to refer to the instructions in the HP 0000 G2 Series Rack Options Installation Guide. This manual is available at: NOTE: For instructions on grounding the modular cabinet's Intelligent rack using the HP Intelligent Rack Ground Bonding Kit (BW89A), ask your service provider to refer to the instructions in the HP Intelligent Rack Family Options Installation Guide. This manual is available at: Enclosures (IP CLIM, IOAM enclosure, and so forth) require: Specification Nominal input voltage Voltage range Nominal line frequency Frequency ranges Number of phases Value 200/208/220/230/240 V AC RMS V AC 50 or 60 Hz Hz or Hz Single-phase c7000 enclosures require: Specification Value Voltage range Nominal line frequency Frequency ranges Number of phases VAC 50 or 60 Hz Hz or Hz AC Input Power for Modular Cabinets Intelligent Rack 93

94 Enclosure Power Loads The total power and current load for a modular cabinet depends on the number and type of enclosures installed in it. Therefore, the total load is the sum of the loads for all enclosures installed. For examples of calculating the power and current load for various enclosure combinations, refer to Calculating Specifications for Enclosure Combinations NB50000c (page 04) or Calculating Specifications for Enclosure Combinations NB54000c (page 05). In normal operation, the AC power is split equally between the power feeds on the two sides (left and right) of the modular cabinet. However, if AC power fails on one side of the cabinet, the power feed(s) on the remaining side must carry the power for all enclosures in that cabinet. Power and current specifications for each type of enclosure are: Enclosure Type AC Power Lines per Enclosure Typical Power Consumption (VA) 2 Maximum Power Consumption (VA) 3 Peak Inrush current (Amps) NB50000c Server Blade: BL860 NonStop Server blades NB54000c Server Blades: BL860c i2 blade 5 (6GB RAM) BL860c i2 blade (24GB RAM) BL860c i2 blade (32GB RAM) BL860c i2 blade (48GB RAM) BL860c i2 blade (64GB RAM) Common products used by both the NB50000c and NB54000c: c7000 (3 phase) c7000 ( phase) CLIM, G2 or G Networking CLIM (IP, Telco, or IB), G Storage CLIM, G MSA70 SAS disk enclosure, empty D2700 SAS disk enclosure, empty SAS HDD 2.5 inches, 0k rpm System Installation Specifications NB50000c and NB54000c

95 Enclosure Type AC Power Lines per Enclosure Typical Power Consumption (VA) 2 Maximum Power Consumption (VA) 3 Peak Inrush current (Amps) SAS HDD 2.5 inches, 5k rpm SAS SSD, 2.5 inches (5V current).0 (2V current) IOAM enclosure Fibre Channel disk module (no disks) Fibre Channel disk drive Rack-mounted system console (BLCR4) Rack-mounted system console (NSCR20) Rack-mounted keyboard and monitor Maintenance switch (Ethernet) See Three-Phase Power Setup in a G2 Rack, Monitored PDUs (page 23) for c7000 enclosure power feed requirements. 2 Typical = measured at 22C ambient temp 3 Maximum = measured at 35C ambient temp 4 All BL860 server blades measured with one.6ghz dual-core processor, 6GB RAM, and one ServerNet Mezzanine Card 5 All BL860c i2 server blades measured with.73ghz quad-core processor and ServerNet mezzanine card. 6 Measured with 6 power supplies, 0 system fans, 2 GbE2c switches, and 2 OnBoard Adminstrator Modules 7 Measured with 6 power supplies, 0 system fans, 2 GbE2c switches, and 2 OnBoard Adminstrator Modules 8 Measured with 0 Fibre Channel ServerNet Adapters installed and active. Each FCSA or G4SA consumes 30W. 9 Maintenance switch has only one AC plug. Dimensions and Weights This subsection provides information about the dimensions and weights for modular cabinets and enclosures installed in the cabinets. Dimensions and Weights 95

96 Plan View of the 42U Modular Cabinet G2 and Intelligent Racks Plan View of G2 Rack Plan View of Intelligent Rack Service Clearances for Modular Cabinets Unit Sizes Aisles: 6 feet (82.9 centimeters) Front: 3 feet (9.4 centimeters) Rear: 3 feet (9.4 centimeters) Enclosure Type Modular cabinet c7000 enclosure Height (U) System Installation Specifications NB50000c and NB54000c

97 Enclosure Type CLIMs SAS disk enclosures CLIM patch panel IOAM enclosure Fibre Channel disk module (FCDM) Maintenance switch (Ethernet) R5000 UPS (single-phase power) ERM (AF464A) used with R5000 UPS R5500 XR UPS (single-phase power) ERM (AF47A) used with R5500 XR UPS R2000/3 UPS (three-phase power) ERM for three-phase power (AF434A ) Rack-mounted system console and rack-mounted keyboard and monitor Height (U) U G2 Rack Modular Cabinet Physical Specifications Item Height Width Depth Weight in. cm in. cm in. cm Modular cabinet Rack Front door Left-rear door Right-rear door Depends on the enclosures installed. Refer to Modular Cabinet and Enclosure Weights With Worksheet (page 99). Shipping (palletized) U Intelligent Rack Modular Cabinet Physical Specifications Item Height Width Depth Weight in. cm in. cm in. cm Modular cabinet Rack Shipping (palletized) Depends on the enclosures installed. Refer to Modular Cabinet and Enclosure Weights With Worksheet (page 99). Dimensions and Weights 97

98 Enclosure Dimensions Enclosure Type Height Width Depth in cm in cm in cm c7000 enclosure DL385 G2 or G5 CLIM DL380 G6 CLIM MSA70 SAS disk enclosure D2700 SAS disk enclosure CLIM patch panel IOAM enclosure Fibre Channel disk module Maintenance switch (Ethernet) Rack-mounted system console with keyboard and display Modular PDU (in Intelligent rack) Intelligent PDU (in Intelligent rack) R5000 UPS (single-phase power) ERM (AF464A) for single-phase power with R5000 UPS R5500 XR UPS (single-phase power) ERM (AF47A) for single-phase power with R5500 XR UPS R2000/3 UPS (three-phase power) ERM for three-phase power (AF434A) System Installation Specifications NB50000c and NB54000c
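The sketch below is an illustrative Python helper for totaling the rack units (U) of a planned enclosure mix against the 42U modular cabinet; it is not part of the original worksheet. Take the actual per-enclosure U heights from the Unit Sizes list earlier in this subsection; the values shown here are assumptions for the example.

```python
# Illustrative only: total the rack units (U) of the planned enclosures and
# compare against the 42U modular cabinet. The U heights below are placeholders;
# take actual values from the Unit Sizes list earlier in this subsection.

RACK_CAPACITY_U = 42

def used_u(plan):
    """plan maps enclosure type -> (quantity, height in U)."""
    return sum(qty * height for qty, height in plan.values())

plan = {
    "c7000 enclosure":    (1, 10),
    "Storage CLIM":       (2, 2),
    "SAS disk enclosure": (2, 2),
    "Maintenance switch": (1, 1),
}
total = used_u(plan)
print(f"{total}U used of {RACK_CAPACITY_U}U",
      "(fits)" if total <= RACK_CAPACITY_U else "(over capacity)")
```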

99 Modular Cabinet and Enclosure Weights With Worksheet The total weight of each modular cabinet is the sum the weights of the cabinet plus each enclosure installed in it. Use this worksheet to determine the total weight: Enclosure Type Number of Enclosures Weight lbs kg Total lbs kg 42U Intelligent rack Modular cabinet 2 42U G2 Rack (single-phase) U G2 Rack (three-phase with modular PDUs) U G2 Rack (three-phase with monitored PDUs) c7000 Enclosure, empty (single phase power) c7000 Enclosure, empty (3-phase power) IOAM enclosure Fibre Channel disk module (FCDM) Blade ServerNet switch 7 3 BL860c Blade Processor (NB50000c) 22 0 BL860c i2 Blade Processor (NB54000c) 22 0 DL385 G2 or G5 CLIM DL380 G6 CLIM MSA70 SAS disk enclosure, empty D2700 SAS disk enclosure, empty 38 7 SAS HDD, Gb SAS protocol.20 SAS HDD, Gb SAS protocol.45 SAS SSD, Dimensions and Weights 99

100 Enclosure Type Number of Enclosures Weight lbs kg Total lbs kg 6 Gb SAS protocol Disk blank..04 CLIM patch panel Maintenance switch (Ethernet) 6 3 Rack-mounted system console with keyboard and display 4 8 Modular PDU core (in Intelligent rack) 2 NOTE: One modular PDU core weighs 2 pounds. A 4 modular PDU core configuration in an Intelligent rack would weigh 60 lbs (48 lbs for the PDU cores + 2 lbs for extension bars). 5.4 Extension bar (for Modular PDU in Intelligent rack) Intelligent PDU core (in Intelligent rack) 20 NOTE: One ipdu core weights 20 lbs. A 4 ipdu core configuration would weigh 00 lbs (80 lbs for ipdu cores + 20 lbs for extension bars. 9. Extension bar (for Intelligent PDU in Intelligent rack) 2.5. R5000 UPS (single-phase power) ERM (AF464A) single-phase power only used with R5000 UPS R5500 XR UPS (single-phase power) ERM (AF47A) single-phase power only used System Installation Specifications NB50000c and NB54000c

101 Enclosure Type / Number of Enclosures / Weight (lbs, kg) / Total (lbs, kg)
with R5500 XR UPS
R12000/3 UPS (three-phase power): 307 (with batteries), 35 (without batteries) lbs / 139.2 (with batteries), 59.8 (without batteries) kg
Extended runtime module (ERM) for three-phase power (AF434A)
ERM for single-phase power (AF47A)
Total

1 Maximum payload weight for the 42U Intelligent rack cabinet: 3000 lbs (1360 kg).
2 Modular cabinet weight includes the PDUs and their associated wiring and receptacles.

For examples of calculating the weight for various enclosure combinations, refer to Calculating Specifications for Enclosure Combinations NB50000c (page 104).

Modular Cabinet Stability

Cabinet stabilizers are required when you have less than four cabinets bayed together.

NOTE: Cabinet stability is of special concern when equipment is routinely installed, removed, or accessed within the cabinet. Stability is addressed through the use of leveling feet, baying kits, fixed stabilizers, and/or ballast.

Use baying kits to bay G2 racks to G2 racks of the same height or Intelligent racks to Intelligent racks of the same height. Use baying plates to bay Intelligent racks to G2 racks of the same height. In all cases, a rack cannot be bayed with another rack of a different height.

For information about best practices for cabinets, your service provider can consult:
HP 10000 G2 Series Rack User Guide
Best practices for HP 10000 Series and HP 10000 G2 Series Racks
HP Intelligent Rack Family User Guide
HP Intelligent Rack Family Options Installation Guide

NOTE: For instructions on grounding the modular cabinet's G2 rack using the HP Rack Grounding Kit (AF074A), ask your service provider to refer to the instructions in the HP 10000 G2 Series Rack Options Installation Guide. This manual is available at:

NOTE: For instructions on grounding the modular cabinet's Intelligent rack using the HP Intelligent Rack Ground Bonding Kit (BW89A), ask your service provider to refer to the instructions in the HP Intelligent Rack Family Options Installation Guide. This manual is available at:

Environmental Specifications

This subsection provides information about environmental specifications.

102 Heat Dissipation Specifications and Worksheet NB50000c Enclosure Type Number Installed Unit Heat (BTU/hour, typical power consumption) Unit Heat (BTU/hour, maximum power consumption) Total (BTU/hour) c CLIM SAS disk enclosure, empty SAS HDD, 2.5 inches, 0k rpm 7 30 SAS HDD, 2.5 inches, 5k rpm 3 23 SAS SSD, 2.5 inches IOAM Fibre Channel disk module (FCDM) Maintenance switch (Ethernet) Rack-mounted system console (BLCR4) with keyboard and monitor Decrease the BTU/hour specification by 023 BTU/hour for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 2037 BTU/hour minus 4092 BTU/hour (4 server blades x 023 BTU/hour) = 7945 BTU/hour. 2 Measured with 0 Fibre Channel ServerNet adapters installed and active. 3 Measured with 4 disk drives installed and active. 4 Maintenance switch has only one plug. Heat Dissipation Specifications and Worksheet NB54000c Enclosure Type Number Installed Unit Heat (BTU/hour, typical power consumption) Unit Heat (BTU/hour, maximum power consumption) Total (BTU/hour) c CLIM SAS disk enclosure, empty SAS HDD, 2.5 inches, 0k rpm 7 30 SAS HDD, 2.5 inches, 5k rpm 3 23 SAS SSD, 2.5 inches IOAM Fibre Channel disk module (FCDM) System Installation Specifications NB50000c and NB54000c

103 Enclosure Type / Number Installed / Unit Heat (BTU/hour, typical power consumption) / Unit Heat (BTU/hour, maximum power consumption) / Total (BTU/hour)
Maintenance switch (Ethernet)
Rack-mounted system console (BLCR4) (includes keyboard and monitor)

1 Decrease the BTU/hour specification by 1023 BTU/hour for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 12037 BTU/hour minus 4092 BTU/hour (4 server blades x 1023 BTU/hour) = 7945 BTU/hour.
2 c7000 measured with eight 48GB BL860 i2 Server Blades and two High I/O BladeCluster ServerNet switches.
3 Measured with 10 Fibre Channel ServerNet adapters installed and active.
4 Measured with 14 disk drives installed and active.
5 Maintenance switch has only one plug.

Operating Temperature, Humidity, and Altitude

Specification / Operating Range / Recommended Range / Maximum Rate of Change per Hour
Temperature (IOAM, rack-mounted system console, and maintenance switch): 41 to 95 F (5 to 35 C) / 68 to 72 F (20 to 25 C) / 9 F (5 C) Repetitive, 36 F (20 C) Nonrepetitive
Temperature (c7000, CLIMs, SAS disk enclosures, and Fibre Channel disk module): 50 to 95 F (10 to 35 C) / - / 1.8 F (1 C) Repetitive, 5.4 F (3 C) Nonrepetitive
Humidity (all except c7000 enclosure): 15% to 80%, noncondensing / 40% to 50%, noncondensing / 6%, noncondensing
Humidity (c7000 enclosure): 20% to 80%, noncondensing / 40% to 55%, noncondensing / 6%, noncondensing
Altitude 2: 0 to 10,000 feet (0 to 3,048 meters) / - / -

1 Operating and recommended ranges refer to the ambient air temperature and humidity measured 19.7 in. (50 cm) from the front of the air intake cooling vents.
2 For each 1000 feet (305 m) increase in altitude above 10,000 feet (up to a maximum of 15,000 feet), subtract 1.5 F (0.83 C) from the upper limit of the operating and recommended temperature ranges.

Nonoperating Temperature, Humidity, and Altitude

Temperature:
Up to 72-hour storage: -40 to 151 F (-40 to 66 C)
Up to 6-month storage: -20 to 131 F (-29 to 55 C)
Reasonable rate of change with noncondensing relative humidity during the transition from warm to cold
Relative humidity: 0% to 80%, noncondensing
Altitude: 0 to 40,000 feet (0 to 12,192 meters)
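The altitude derating rule in footnote 2 above lends itself to a one-line calculation; the Python sketch below applies it to the 95 F upper operating limit. It is illustrative only.

```python
# A small sketch of the altitude derating rule in footnote 2 above: for each
# 1000 ft above 10,000 ft (to a maximum of 15,000 ft), subtract 1.5 F (0.83 C)
# from the upper limit of the operating and recommended temperature ranges.

def derated_upper_limit_f(base_limit_f: float, altitude_ft: float) -> float:
    capped_altitude = min(altitude_ft, 15_000)
    excess_kft = max(0.0, (capped_altitude - 10_000) / 1_000)
    return base_limit_f - 1.5 * excess_kft

print(derated_upper_limit_f(95.0, 8_000))    # 95.0  (no derating below 10,000 ft)
print(derated_upper_limit_f(95.0, 12_000))   # 92.0  (2 x 1.5 F subtracted)
```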

104 Cooling Airflow Direction

NOTE: Because the front door of the enclosure must be adequately ventilated to allow air to enter the enclosure and the rear door must be adequately ventilated to allow air to escape, do not block the ventilation apertures of a NonStop BladeSystem.

Each NonStop BladeSystem includes 10 Active Cool fans that provide high-volume, high-pressure airflow at even the slowest fan speeds. Air flow for each NonStop BladeSystem enters through a slot in the front of the c7000 enclosure and is pulled into the interconnect bays. Ducts allow the air to move from the front to the rear of the enclosure where it is pulled into the interconnects and the center plenum. The air is then exhausted out the rear of the enclosure.

Blanking Panels

If the NonStop BladeSystem is not completely filled with components, the gaps between these components can cause adverse changes in the airflow, negatively impacting cooling within the rack. You must cover any gaps with blanking panels. In high density environments, air gaps in the enclosure and between adjacent enclosures should be sealed to prevent recirculation of hot air from the rear of the enclosure to the front.

Typical Acoustic Noise Emissions

84 dB(A) (sound pressure level at operator position)

Tested Electrostatic Immunity

Contact discharge: 8 kV
Air discharge: 20 kV

Calculating Specifications for Enclosure Combinations NB50000c

Power and thermal calculations assume that each enclosure in the cabinet is fully populated. The power and heat load is less when enclosures are not fully populated, such as a Fibre Channel disk module with fewer disk drives.

AC power calculations assume that the power feed(s) on one side of the cabinet (left or right) deliver all power to the cabinet. In normal operation, the power is split equally between the two sides. However, calculate the power load to assume delivery from only one side to allow the system to continue to operate if power to one of the sides fails.

Example of Cabinet Load Calculations NB50000c (page 105) lists the weight, power, and thermal calculations for a system with:
One c7000 enclosure with eight NonStop Server Blades
Two DL385 G2 or G5 IP or Storage CLIMs
One CLIM patch panel
Two MSA70 SAS disk enclosures, containing 25 hard disk drives (HDDs) with 3 Gb SAS protocol, 10k rpm per each HDD
One IOAM enclosure
Two Fibre Channel disk modules
One rack-mounted system console with keyboard/monitor units
One maintenance switch
One 42U high cabinet (single-phase)

For a total thermal load for a system with multiple cabinets, add the heat outputs for all the cabinets in the system.

105 Table 8 Example of Cabinet Load Calculations NB50000c Component Quantity Height (U) Weight Total Volt-amps (VA) BTU/hour 2 (lbs) (kg) Typical Power Consumption Maximum Power Consumption Typical Heat Dissipation Maximum Heat Dissipation c7000 enclosure DL385 G2 or G5 CLIM MSA70 SAS disk enclosure, containing 25 HDDs with 3 Gb protocol, 0k rpm per HDD IOAM enclosure Fibre Channel disk module Rack-mounted System Console (BLCR4) (includes keyboard and monitor) Maintenance switch CLIM patch panel Cabinet (single-phase with G2 rack) Total Decrease the apparent power VA specification by 300VA for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 3530 VA minus 200 VA (4 server blades x 300 VA) = 2330 VA apparent power. 2 Decrease the BTU/hour specification by 023 BTU/hour for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 2037 BTU/hour minus 4092 BTU/hour (4 server blades x 023 BTU/hour) = 7945 BTU/hour. Calculating Specifications for Enclosure Combinations NB54000c Power and thermal calculations assume that each enclosure in the cabinet is fully populated. The power and heat load is less when enclosures are not fully populated, such as a Fibre Channel disk module with fewer disk drives. AC power calculations assume that the power feed(s) on one side of the cabinet (left or right) deliver all power to the cabinet. In normal operation, the power is split equally between the two sides. However, calculate the power load to assume delivery from only one side to allow the system to continue to operate if power to one of the sides fails. Example of Cabinet Load Calculations NB54000c (page 06) lists the weight, power, and thermal calculations for a system with: One c7000 enclosure with eight 48GB NonStop Server Blades Two DL380 G6 IP or Storage CLIMs Calculating Specifications for Enclosure Combinations NB54000c 05

106 One CLIM patch panel Two D2700 SAS disk enclosures, containing 25 hard disk drives (HDDs) with 300GB, 0k rpm, 6GB SAS protocol per HDD One IOAM enclosure Two Fibre channel disk modules One rack-mounted system console with keyboard/monitor units One maintenance switch One 42U high cabinet (single-phase) For a total thermal load for a system with multiple cabinets, add the heat outputs for all the cabinets in the system. Table 9 Example of Cabinet Load Calculations NB54000c Component Quantity Height (U) Weight Total Volt-amps (VA) BTU/hour 2 (lbs) (kg) Typical Power Consumption Maximum Power Consumption Typical Heat Dissipation Maximum Heat Dissipation c7000 enclosure DL380 G6 CLIM D2700 SAS disk enclosure, containing 25 HDDs with 300GB, 0k rpm, 6 GB SAS protocol per HDD IOAM enclosure Fibre Channel disk module Rack-mounted System Console (BLCR4) (includes keyboard and monitor) Maintenance switch CLIM patch panel Cabinet (single-phase with G2 rack) Total System Installation Specifications NB50000c and NB54000c

107 1 Decrease the apparent power VA specification by 300 VA for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 3530 VA minus 1200 VA (4 server blades x 300 VA) = 2330 VA apparent power.
2 Decrease the BTU/hour specification by 1023 BTU/hour for each empty NonStop Server Blade slot. For example, a c7000 that only has four NonStop Server Blades installed would be rated 12037 BTU/hour minus 4092 BTU/hour (4 server blades x 1023 BTU/hour) = 7945 BTU/hour.
3 c7000 enclosure with eight 48 GB BL860 i2 server blades.
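For readers who prefer the footnote arithmetic above in executable form, the following Python sketch applies the same adjustments: 300 VA and 1023 BTU/hour are subtracted from the fully populated c7000 figures for each empty NonStop Server Blade slot. The constants are taken from the footnotes; the helper itself is illustrative.

```python
# Illustrative sketch of the footnote arithmetic above: a fully populated c7000
# figure is reduced by 300 VA and 1023 BTU/hour for each empty NonStop Server
# Blade slot (8 slots per enclosure). The example values are the ones quoted in
# the footnotes (3530 VA and 12037 BTU/hour for a fully populated enclosure).

FULL_C7000_VA = 3530
FULL_C7000_BTU = 12037
VA_PER_EMPTY_SLOT = 300
BTU_PER_EMPTY_SLOT = 1023

def c7000_load(blades_installed):
    empty_slots = 8 - blades_installed
    va = FULL_C7000_VA - VA_PER_EMPTY_SLOT * empty_slots
    btu = FULL_C7000_BTU - BTU_PER_EMPTY_SLOT * empty_slots
    return va, btu

va, btu = c7000_load(blades_installed=4)
print(f"{va} VA, {btu} BTU/hour")   # 2330 VA, 7945 BTU/hour, matching the footnotes
```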

108 5 System Configuration Guidelines NB50000c and NB54000c

This chapter provides configuration guidelines for a NonStop BladeSystem. NonStop BladeSystems use a flexible modular architecture. Therefore, various configurations of the system's modular components are possible within the configuration restrictions stated in this section and in Hardware Configuration in Modular Cabinets (page 142).

Internal ServerNet Interconnect Cabling

Dedicated Service LAN Cables

The NonStop BladeSystem can use Category 5e or Category 6, unshielded twisted-pair Ethernet cables for the internal dedicated service LAN and for connections between the application LAN equipment and the IP CLIM, Telco CLIM, or IOAM enclosure.

Length Restrictions for Optional Cables Connecting Outside the Modular Cabinet

NOTE: For all BladeSystem cable lengths and product IDs, refer to Cable Types, Connectors, Length Restrictions, and Product IDs (page 188).

Although a considerable cable length can exist between the modular enclosures in the system, HP recommends that the cable length between each of the enclosures be as short as possible.

Cable Product IDs

For product IDs, refer to Cable Types, Connectors, Length Restrictions, and Product IDs (page 188).

ServerNet Fabric and Supported Connections

The ServerNet X and Y fabrics for the NonStop BladeSystem are provided by the double-wide ServerNet switch in the c7000 enclosure. Each c7000 enclosure requires two ServerNet switches for fault tolerance, and each switch has four ServerNet connection groups:
ServerNet Cluster Connections
ServerNet Fabric Cross-Link Connections
Interconnections between c7000 enclosures
I/O Connections (Standard I/O and High I/O options)

The I/O connectivity to each of these groups is provided by one of two ServerNet switch options: either Standard I/O or High I/O.

ServerNet Cluster Connections

At J06.03, only standard ServerNet cluster connections via cluster switches, using connections to both types of ServerNet-based cluster switches (6770 and 6780), are supported. There are two small form-factor pluggable (SFP) ports on each c7000 enclosure ServerNet switch: a single mode fiber (SMF) port (port 2) and a multi mode fiber (MMF) port (port 1) for the two ServerNet cluster connection styles. Only one of these ports can be used at a time, and only one connection per fabric (from the appropriate ServerNet switch for that fabric in group 100) to the system's cluster fabric is supported.

109 ServerNet cluster connections on NonStop BladeSystems follow the ServerNet cluster and cable length rules and restrictions. For more information, refer to these manuals:
ServerNet Cluster Supplement for NonStop BladeSystems
For 6770 switches and star topologies: ServerNet Cluster Manual
For 6780 switches and the layered topology: ServerNet Cluster 6780 Planning and Installation Guide

BladeCluster Solution Connections

As of J06.04, you can cluster BladeSystems to participate in a BladeCluster Solution. A BladeCluster Solution is comprised of network topologies that interconnect NonStop BladeSystems and NonStop NS16000 series systems as nodes, which can also cluster with 6770/6780 ServerNet clusters. For more information, see the BladeCluster Solution Manual.

ServerNet Fabric Cross-Link Connections

A pair of small form-factor pluggable (SFP) modules with standard LC-Duplex connectors are provided to allow for the ServerNet fabric cross-link connection. Connections are made to ports 9 and 10 (labeled X1 and X2) on the c7000 enclosure ServerNet switch.

Interconnections Between c7000 Enclosures

A single c7000 enclosure can contain eight NonStop Server Blades. Two c7000 enclosures are interconnected to create a 16-processor system. These interconnections are provided by two quad optic ports: ports 11 and 12 (labeled GA and GB) located on the c7000 enclosure ServerNet switches in the 5 and 7 interconnect bays. The GA port on the first c7000 enclosure is connected to the GA port on the second c7000 enclosure (same fabric), and then likewise the GB port to the GB port. These connections provide eight ServerNet cross-links between the two sets of eight NonStop processors and the ServerNet routers on the c7000 enclosure ServerNet switch.

I/O Connections (Standard and High I/O ServerNet Switch Configurations)

NOTE: If your BladeSystems participate in a BladeCluster, you must use the BladeCluster ServerNet switch. This manual does not describe the BladeCluster ServerNet switch or its connections. Instead, refer to the BladeCluster Solution Manual.

There are two types of c7000 enclosure ServerNet switches: Standard I/O and High I/O. Each pair of ServerNet switches in a c7000 enclosure must be identical, either Standard I/O or High I/O. However, you can mix ServerNet switches between enclosures. The main difference between the Standard I/O and High I/O switches is the number and type of quad optics modules that are installed for I/O connectivity. The Standard I/O ServerNet switch has three quad optic modules: ports 3, 4, and 8 (labeled GC, EA, and EE) for a total of 12 ServerNet links, as shown following:

110 Figure 25 ServerNet Switch Standard I/O Supported Connections

The High I/O ServerNet switch has six quad optic modules: ports 3, 4, 5, 6, 7, and 8 (labelled GC, EA, EB, EC, ED, and EE) for a total of 24 ServerNet links, as shown following. If both c7000 enclosures in a 16-processor system contain High I/O ServerNet switches, there are a total of 48 ServerNet connections for I/O.

Figure 26 ServerNet Switch High I/O Supported Connections

Connections to IOAM Enclosures

The NonStop BladeSystem supports connections to an IOAM enclosure. The IOAM enclosure requires 4-way ServerNet links. If you want 4 IOAMs in the first enclosure, only the ServerNet High I/O switch provides this number of connections, which are available on quad optic ports 4, 5, 6, and 7 (labelled EA, EB, EC, and ED), as illustrated in Figure 26. The NonStop BladeSystem supports a maximum of six IOAMs in a NonStop BladeSystem with 16 processors. For a 16-processor system, the connection points are asymmetrical between the ServerNet switches. Only ports EA and EC support connections to an IOAM enclosure on the second ServerNet switch. For the Standard I/O ServerNet switch, only one IOAM module can be attached per c7000 enclosure. Additionally, if a Standard I/O ServerNet switch is used for the first c7000 enclosure for one IOAM enclosure, then the second c7000 enclosure only supports one more IOAM enclosure regardless of the type of ServerNet switch (Standard I/O or High I/O).

111 Connections to CLIMs

NOTE: If NonStop S-series I/O enclosures are present, CLIMs cannot be connected to port 3 of the ServerNet switches in a c7000 enclosure.

The NonStop BladeSystem supports a maximum of 48 CLIMs per 16-processor system. A CLIM uses either one or two connections per ServerNet fabric. The Storage CLIM typically uses two connections per fabric to achieve high disk bandwidth. A networking CLIM (IP, Telco, or IB CLIM) typically uses one connection per ServerNet fabric. For I/O connections, a breakout cable is used on the back panel of the c7000 enclosure ServerNet switch to convert to standard LC-Duplex style connections.

Connections to NonStop S-Series I/O Enclosures

Up to four NonStop S-series I/O enclosures (Groups 1-4) can be connected to the ServerNet switches in a c7000 enclosure (Group 100 only). The supported S-series I/O connections are for the Token-Ring ServerNet Adapter (TRSA) or 6763 Common Communications ServerNet 2 Adapter (CCSA 2). These adapters can coexist in the same system configuration. No other S-series ServerNet adapters are supported. These connections are made using an MTP-SC cable between the IOMF 2 FRUs (slots 50 and 55) in the S-series I/O enclosure and port 3 of the ServerNet switches in the c7000 enclosure. Each IOMF 2 connection supports up to 2 TRSAs or 2 CCSA 2s. Only ServerNet switch port 3 supports S-series I/O connections, and all S-series connections are made to this one port.

NOTE: If CLIMs are already connected to port 3, you must move them to another port on the ServerNet switches and reconfigure them.

NonStop BladeSystem Port Connections

Fibre Channel Ports to Fibre Channel Disk Modules

Fibre Channel disk modules (FCDMs) can only be connected to the FCSA in an IOAM enclosure. FCDMs are directly connected to the Fibre Channel ports on an IOAM enclosure with this exception: up to four FCDMs (or up to four daisy-chained configurations, with each daisy-chain configuration containing 4 FCDMs) can be connected to the FCSA ports on an IOAM enclosure in a NonStop BladeSystem.

Fibre Channel Ports to Fibre Channel Tape Devices

Fibre Channel tape devices can be directly connected to the Fibre Channel ports on a Storage CLIM or an FCSA in an IOAM enclosure. With a Fibre Channel tape drive connected to the system, you can use the BACKUP and RESTORE utilities to save data to and restore data from tape.

SAS Ports to SAS Disk Enclosures

SAS disk enclosures can be connected directly to the HBA SAS ports on a Storage CLIM. A Storage CLIM pair supports a maximum of four SAS disk enclosures. If it is necessary to minimize SAS HBAs, four SAS disk enclosures connected to DL385 G2 or G5 CLIMs can be configured as daisy-chains of two SAS disk enclosures on two SAS ports. The four SAS disk enclosures connected to DL380 G6 CLIMs cannot be daisy-chained.

SAS Ports to SAS Tape Devices

SAS tape devices have one SAS port that can be directly connected to the HBA SAS port on a Storage CLIM. Each SAS tape enclosure supports two tape drives. With a SAS tape drive connected to the system, you can use the BACKUP and RESTORE utilities to save data to and restore data from tape.
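The Python sketch below is an illustrative (and deliberately simplified) way to sanity-check a per-fabric ServerNet I/O link budget against the connection rules described in this chapter: a Standard I/O switch provides 12 I/O links per fabric and a High I/O switch 24, a Storage CLIM typically uses two links per fabric, a networking CLIM one, and an IOAM enclosure one quad optic port (four links). It does not model the port-placement restrictions (for example, the port 3 rules above), so treat it only as a first-pass estimate and confirm any real layout with your HP planner.

```python
# Illustrative only: a rough per-fabric ServerNet I/O link budget based on the
# rules above. Counts below are hypothetical; this ignores port-placement rules.

LINKS_PER_FABRIC = {"standard": 12, "high": 24}

def links_needed(storage_clims=0, network_clims=0, ioam_enclosures=0):
    # Storage CLIM: 2 links/fabric, networking CLIM: 1, IOAM: one quad optic port (4 links)
    return 2 * storage_clims + 1 * network_clims + 4 * ioam_enclosures

switch_type = "high"
needed = links_needed(storage_clims=4, network_clims=4, ioam_enclosures=2)
available = LINKS_PER_FABRIC[switch_type]
print(f"{needed} links needed, {available} available per fabric "
      f"({'OK' if needed <= available else 'exceeds capacity'})")
```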

112 Storage CLIM Devices The NonStop BladeSystem uses the rack-mounted SAS disk enclosure and its SAS disk drives are controlled through the Storage CLIM. This illustration shows the ports on each Storage CLIM model: NOTE: All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. 2 System Configuration Guidelines NB50000c and NB54000c

113 This illustration shows the locations of the hardware in each SAS disk enclosure model as well as the I/O modules on the rear of the enclosures for connecting to the Storage CLIM. SAS disk enclosures connect to Storage CLIMs via SAS cables. For details on cable types, refer to Cable Types, Connectors, Length Restrictions, and Product IDs (page 88). NOTE: To determine compatibility of Storage CLIM models and SAS disk enclosure models, refer to SAS Disk Enclosures (page 40). Factory-Default Disk Volume Locations for SAS Disk Devices This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate disk enclosures: Storage CLIM Devices 3

114 Configuration Restrictions for Storage CLIMs

The maximum number of logical unit numbers (LUNs) for each CLIM, including SAS disks, ESS and tapes, is 512. Each primary, backup, mirror, and mirror backup path is counted in this maximum.
SAS disk enclosures connected to DL385 G2 or G5 CLIMs support a maximum daisy-chain depth of two SAS disk enclosures. SAS disk enclosures connected to DL380 G6 CLIMs cannot be daisy-chained.
Use only the supported configurations as described below.

Configurations for Storage CLIM and SAS Disk Enclosures

These subsections show the supported configurations for Storage CLIMs and SAS disk enclosures:
DL385 G2 or G5 Storage CLIM and SAS Disk Enclosure Configurations (page 114)
DL380 G6 Storage CLIM and SAS Disk Enclosure Configurations (page 116)

DL385 G2 or G5 Storage CLIM and SAS Disk Enclosure Configurations
Two DL385 G2 or G5 Storage CLIMs, Two MSA70 SAS Disk Enclosures (page 114)
Four DL385 G2 or G5 Storage CLIMs, Four MSA70 SAS Disk Enclosures (page 115)
Daisy-Chain Configurations (DL385 G2 or G5 Storage CLIMs Only with MSA70 SAS Disk Enclosures) (page 116)

Two DL385 G2 or G5 Storage CLIMs, Two MSA70 SAS Disk Enclosures

This illustration shows example cable connections for the two DL385 G2 or G5 Storage CLIM, two MSA70 SAS disk enclosure configuration:

Figure 27 Two DL385 G2 or G5 Storage CLIMs, Two MSA70 SAS Disk Enclosure Configuration

This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of two DL385 G2 or G5 Storage CLIMs and two MSA70 SAS disk

115 enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and OSS are configured as mirrored SAS disk volumes: Disk Volume Name Primary CLIM Backup CLIM Mirror CLIM Mirror-Backup CLIM Primay LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM $DSMSCM $AUDIT $OSS For an illustration of the factory-default slot locations for a SAS disk enclosure, refer to Factory-Default Disk Volume Locations for SAS Disk Devices (page 3). Four DL385 G2 or G5 Storage CLIMs, Four MSA70 SAS Disk Enclosures This illustration shows example cable connections for the four DL385 G2 or G5 Storage CLIM, four MSA70 SAS disk enclosures configuration: Figure 28 Four DL385 G2 or G5 Storage CLIMs, Four MSA70 SAS Disk Enclosure Configuration This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of four DL385 G2 or G5 Storage CLIMs and four MSA70 SAS disk Storage CLIM Devices 5

116 enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and $OSS are configured as mirrored SAS disk volumes:

Disk Volume Name Primary CLIM Backup CLIM Mirror CLIM Mirror-Backup CLIM Primary LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM $DSMSCM $AUDIT $OSS

Daisy-Chain Configurations (DL385 G2 or G5 Storage CLIMs Only with MSA70 SAS Disk Enclosures)

This illustration shows an example of cable connections between the two DL385 G2 or G5 Storage CLIMs and four MSA70 SAS disk enclosures in a single daisy-chain configuration:

NOTE: SAS disk enclosures connected to DL385 G2 or G5 CLIMs support a maximum daisy-chain depth of two SAS disk enclosures.

DL380 G6 Storage CLIM and SAS Disk Enclosure Configurations
Two DL380 G6 Storage CLIMs, Two D2700 SAS Disk Enclosures (page 116)
Two DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures (page 118)
Four DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures (page 118)
Four DL380 G6 Storage CLIMs, Eight D2700 SAS Disk Enclosures (page 119)

Two DL380 G6 Storage CLIMs, Two D2700 SAS Disk Enclosures

This illustration shows example cable connections for the two DL380 G6 Storage CLIM, two D2700 SAS disk enclosure configuration.

117 Figure 29 Two DL380 G6 Storage CLIMs, Two D2700 SAS Disk Enclosure Configuration This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of two DL380 G6 Storage CLIMs and two D2700 SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and OSS are configured as mirrored SAS disk volumes: Disk Volume Name Primary CLIM Backup CLIM Mirror CLIM Mirror-Backup CLIM Primay LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM $DSMSCM $AUDIT $OSS For an illustration of the factory-default slot locations for a SAS disk enclosure, refer to Factory-Default Disk Volume Locations for SAS Disk Devices (page 3). Storage CLIM Devices 7

118 Two DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures This illustration shows example cable connections between two DL380 G6 Storage CLIM, four D2700 SAS disk enclosures configuration. This configuration uses two SAS ports in slots 2 and 3 of each DL380 G6 Storage CLIM. Figure 30 Two DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosure Configuration This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of four DL380 G6 Storage CLIMs and two D2700 SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and OSS are configured as mirrored SAS disk volumes: Disk Volume Name Primary CLIM Backup CLIM Mirror CLIM Mirror-Backup CLIM Primay LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM $DSMSCM $AUDIT $OSS Four DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosures This illustration shows example cable connections for the four DL380 G6 Storage CLIM, four D2700 SAS disk enclosures configuration. This configuration uses two SAS ports in slot 2 of each DL380 G6 Storage CLIM. 8 System Configuration Guidelines NB50000c and NB54000c

Figure 31 Four DL380 G6 Storage CLIMs, Four D2700 SAS Disk Enclosure Configuration
This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of four DL380 G6 Storage CLIMs and four D2700 SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and $OSS are configured as mirrored SAS disk volumes:
Disk Volume Name Primary CLIM Backup CLIM Mirror CLIM Mirror-Backup CLIM Primary LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM $DSMSCM $AUDIT $OSS
Four DL380 G6 Storage CLIMs, Eight D2700 SAS Disk Enclosures
This illustration shows example cable connections for the configuration of four DL380 G6 Storage CLIMs and eight D2700 SAS disk enclosures. This configuration uses two SAS ports in slot 2 and slot 3 of each DL380 G6 Storage CLIM.

Figure 32 Four DL380 G6 Storage CLIMs, Eight D2700 SAS Disk Enclosure Configuration
This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of four DL380 G6 Storage CLIMs and eight D2700 SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and $OSS are configured as mirrored SAS disk volumes:
Disk Volume Name Primary CLIM Backup CLIM Mirror CLIM Mirror-Backup CLIM Primary LUN Mirror LUN Primary Disk Bay in Primary SAS Enclosure Mirror Disk Location in Mirror SAS Enclosure $SYSTEM $DSMSCM $AUDIT $OSS
Fibre Channel Devices
This subsection describes Fibre Channel devices. The rack-mounted Fibre Channel disk module (FCDM) can only be used with NonStop BladeSystems that have IOAM enclosures. An FCDM and its disk drives are controlled through the Fibre Channel ServerNet adapter (FCSA).

For more information on the FCSA, refer to the Fibre Channel ServerNet Adapter Installation and Support Guide. For more information on the Fibre Channel disk module (FCDM), refer to Fibre Channel Disk Module (FCDM) (page 42). For examples of cable connections between FCSAs and FCDMs, refer to Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module (page 24).
This illustration shows an FCSA with indicators and ports:
This illustration shows the locations of the hardware in the Fibre Channel disk module as well as the Fibre Channel port connectors at the back of the enclosure:
Fibre Channel disk modules connect to Fibre Channel ServerNet adapters (FCSAs) via Fibre Channel arbitrated loop (FC-AL) cables. This drawing shows the two Fibre Channel arbitrated loops implemented within the Fibre Channel disk module:

Factory-Default Disk Volume Locations for FCDMs
This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate Fibre Channel disk modules:
FCSA location and cable connections vary according to the various controller and Fibre Channel disk module combinations.
Configurations for Fibre Channel Devices
Storage subsystems in NonStop S-series systems use a fixed hardware layout. Each enclosure can have up to four controllers for storage devices and up to 6 internal disk drives. The controllers and disk drives always have a fixed logical location with standardized location IDs of group-module-slot. Only the group number changes as determined by the enclosure position in the ServerNet topology. However, NonStop BladeSystems have no fixed boundaries for the Fibre Channel hardware layout. Up to 60 FCSAs (or 20 ServerNet addressable controllers) and 240 Fibre Channel disk enclosures can be configured, with identification depending on the ServerNet connection of the IOAM and the slots housing the FCSAs.

Configuration Restrictions for Fibre Channel Devices
These configuration restrictions apply and are invoked by the Subsystem Control Facility (SCF):
Primary and mirror disk drives cannot connect to the same Fibre Channel loop. Loss of the Fibre Channel loop makes both the primary volume and the mirrored volume inaccessible. This configuration inhibits fault tolerance.
Disk drives in different Fibre Channel disk modules on a daisy chain connect to the same Fibre Channel loop.
The primary path and backup Fibre Channel communication links to a disk drive should not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated system, loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel communications path. This configuration is allowed, but only if you override an SCF warning message.
The mirror path and mirror backup Fibre Channel communication links to a disk drive should not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated system, loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel communications path. This configuration is allowed, but only if you override an SCF warning message.
Recommendations for Fibre Channel Device Configuration
These recommendations apply to FCSA and Fibre Channel disk module configurations:
The primary Fibre Channel disk module connects to FCSA F-SAC 1. The mirror Fibre Channel disk module connects to FCSA F-SAC 2.
FC-AL port A1 is the incoming port from an FCSA or from another Fibre Channel disk module. FC-AL port A2 is the outbound port to another Fibre Channel disk module. FC-AL port B2 is the incoming port from an FCSA or from a Fibre Channel disk module. FC-AL port B1 is the outbound port to another Fibre Channel disk module.
In a daisy-chain configuration, the ID expander harness determines the enclosure number. Enclosure 1 is always at the bottom of the chain.
FCSAs can be installed in slots 1 through 5 in an IOAM. G4SAs can be installed in slots 1 through 5 in an IOAM.
In systems with two or more cabinets, primary and mirror Fibre Channel disk modules reside in separate cabinets to prevent application or system outage if a power outage affects one cabinet. With primary and mirror Fibre Channel disk modules in the same cabinet, the primary Fibre Channel disk module resides in a lower U than the mirror Fibre Channel disk module.
Fibre Channel disk drives are configured with dual paths.
Where possible, FCSAs and Fibre Channel disk modules are configured with four FCSAs and four Fibre Channel disk modules for maximum fault tolerance. If FCSAs are not in groups of four, the remaining FCSAs and Fibre Channel disk modules can be configured in other fault-tolerant configurations, such as with two FCSAs and two Fibre Channel disk modules or four FCSAs and three Fibre Channel disk modules.

In systems with one IOAM enclosure:
With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in module 2 of the IOAM enclosure, and the backup FCSA resides in module 3. (See the example configuration in Two FCSAs, Two FCDMs, One IOAM Enclosure (page 24).)
With four FCSAs and four Fibre Channel disk modules, FCSA 1 and FCSA 2 reside in module 2 of the IOAM enclosure, and FCSA 3 and FCSA 4 reside in module 3. (See the example configuration in Four FCSAs, Four FCDMs, One IOAM Enclosure (page 25).)
In systems with two or more IOAM enclosures:
With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in IOAM enclosure 1, and the backup FCSA resides in IOAM enclosure 2. (See the example configuration in Two FCSAs, Two FCDMs, Two IOAM Enclosures (page 26).)
With four FCSAs and four Fibre Channel disk modules, FCSA 1 and FCSA 2 reside in IOAM enclosure 1, and FCSA 3 and FCSA 4 reside in IOAM enclosure 2. (See the example configuration in Four FCSAs, Four FCDMs, Two IOAM Enclosures (page 27).)
Daisy-chain configurations follow the same configuration restrictions and rules that apply to configurations that are not daisy-chained. (See Daisy-Chain Configurations (FCDMs) (page 28).) Fibre Channel disk modules containing mirrored volumes must be installed in separate daisy chains. Daisy-chained configurations require that all Fibre Channel disk modules reside in the same cabinet and be physically grouped together. Daisy-chain configurations require an ID expander harness with terminators for proper Fibre Channel disk module and disk drive identification.
If, after connecting Fibre Channel disk modules in configurations of four FCSAs and four Fibre Channel disk modules, three Fibre Channel disk modules remain unconnected, connect them to the four FCSAs. (See the example configuration in Four FCSAs, Three FCDMs, One IOAM Enclosure (page 29).)
Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module
These subsections show various example configurations of FCSA controllers and Fibre Channel disk modules with IOAM enclosures.
NOTE: Although it is not a requirement for fault tolerance to house the primary and mirror disk drives in separate FCDMs, the example configurations show FCDMs housing only primary or mirror drives, mainly for simplicity in keeping track of the physical locations of the drives.
Two FCSAs, Two FCDMs, One IOAM Enclosure (page 24) Four FCSAs, Four FCDMs, One IOAM Enclosure (page 25) Two FCSAs, Two FCDMs, Two IOAM Enclosures (page 26) Four FCSAs, Four FCDMs, Two IOAM Enclosures (page 27) Daisy-Chain Configurations (FCDMs) (page 28) Four FCSAs, Three FCDMs, One IOAM Enclosure (page 29)
Two FCSAs, Two FCDMs, One IOAM Enclosure
This illustration shows example cable connections between the two FCSAs and the primary and mirror Fibre Channel disk modules:

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of two FCSAs, two Fibre Channel disk modules, and one IOAM enclosure:
Disk Volume Name $SYSTEM (primary) $DSMSCM (primary) $AUDIT (primary) $OSS (primary) $SYSTEM (mirror) $DSMSCM (mirror) $AUDIT (mirror) $OSS (mirror) FCSA GMSP and and and and and and and and Disk GMSB*
* For an illustration of the factory-default slot locations for a Fibre Channel disk module, refer to Factory-Default Disk Volume Locations for FCDMs (page 22).
Four FCSAs, Four FCDMs, One IOAM Enclosure
This illustration shows example cable connections between the four FCSAs and the two sets of primary and mirror Fibre Channel disk modules:

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of four FCSAs, four Fibre Channel disk modules, and one IOAM enclosure:
Disk Volume Name $SYSTEM (primary) $DSMSCM (primary) $AUDIT (primary) $OSS (primary) $SYSTEM (mirror) $DSMSCM (mirror) $AUDIT (mirror) $OSS (mirror) FCSA GMSP and and and and and and and and Disk GMSB
For an illustration of the factory-default slot locations for a Fibre Channel disk module, refer to Factory-Default Disk Volume Locations for FCDMs (page 22).
Two FCSAs, Two FCDMs, Two IOAM Enclosures
This illustration shows example cable connections between the two FCSAs split between two IOAM enclosures and one set of primary and mirror Fibre Channel disk modules:
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of two FCSAs, two Fibre Channel disk modules, and two IOAM enclosures:
Disk Volume Name $SYSTEM (primary) $DSMSCM (primary) $AUDIT (primary) $OSS (primary) $SYSTEM (mirror) $DSMSCM (mirror) FCSA GMSP and and and and and and.2..2 Disk GMSB

127 Disk Volume Name $AUDIT (mirror ) $OSS (mirror ) FCSA GMSP and and.2..2 Disk GMSB For an illustration of the factory-default slot locations for a Fibre Channel disk module, refer to Factory-Default Disk Volume Locations for FCDMs (page 22). Four FCSAs, Four FCDMs, Two IOAM Enclosures This illustration shows example cable connections between the four FCSAs split between two IOAM enclosures and two sets of primary and mirror Fibre Channel disk modules: This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of four FCSAs, four Fibre Channel disk modules, and two IOAM enclosures: Disk Volume Name $SYSTEM (primary) $DSMSCM (primary) $AUDIT (primary) $OSS (primary) $SYSTEM (mirror) $DSMSCM (mirror) $AUDIT (mirror) $OSS (mirror) FCSA GMSP and and and and and and and and.2..2 Disk GMSB* * For an illustration of the factory-default slot locations for a Fibre Channel disk module, refer to Factory-Default Disk Volume Locations for FCDMs (page 22) Fibre Channel Devices 27

Daisy-Chain Configurations (FCDMs)
When planning for possible use of daisy-chained disks, consider:
Daisy-Chained Disks Recommended: Cost-sensitive storage and applications using low-bandwidth disk I/O. Low-cost, high-capacity data storage is important.
Daisy-Chained Disks Not Recommended: Many volumes in a large Fibre Channel loop. The more volumes that exist in a larger loop, the higher the potential for negative impact from a failure that takes down a Fibre Channel loop. Applications with a highly mixed workload, such as transaction databases or applications with high disk I/O.
Requirements for Daisy-Chain
All daisy-chained Fibre Channel disk modules reside in the same cabinet and are physically grouped together. An ID expander harness with terminators is installed for proper Fibre Channel disk module and drive identification. An FCSA for each Fibre Channel loop is installed in a different IOAM module for fault tolerance. Two Fibre Channel disk modules minimum, with four Fibre Channel disk modules maximum per daisy chain. See Fibre Channel Devices (page 20).
This illustration shows an example of cable connections between the two FCSAs and four Fibre Channel disk modules in a single daisy-chain configuration:
A second equivalent configuration, including an IOAM enclosure, two FCSAs, and four Fibre Channel disk modules with an ID expander, is required for fault-tolerant mirrored disk storage.

Installing each mirrored disk in the same corresponding FCDM and bay number as its primary disk is not required, but it is recommended to simplify the physical management and identification of the disks.
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in a daisy-chained configuration:
Disk Volume Name $SYSTEM $DSMSCM $AUDIT $OSS FCSA GMSP and and and and Disk GMSB*
* For an illustration of the factory-default slot locations for a Fibre Channel disk module, refer to Factory-Default Disk Volume Locations for FCDMs (page 22).
Four FCSAs, Three FCDMs, One IOAM Enclosure
This illustration shows example cable connections between the four FCSAs and three Fibre Channel disk modules with the primary and mirror drives split within each Fibre Channel disk module:
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default disk volumes for the configuration of four FCSAs, three Fibre Channel disk modules, and one IOAM enclosure:
Disk Volume Name $SYSTEM (primary) $DSMSCM (primary) FCSA GMSP and and Disk GMSB

Disk Volume Name $AUDIT (primary) $OSS (primary) $SYSTEM (mirror) $DSMSCM (mirror) $AUDIT (mirror) $OSS (mirror) FCSA GMSP and and and and and and Disk GMSB
This illustration shows the factory-default locations for the configurations of four FCSAs and three Fibre Channel disk modules where the primary system file disk volumes are in Fibre Channel disk module 1:
This illustration shows the factory-default locations for the configurations of four FCSAs with three Fibre Channel disk modules where the mirror system file disk volumes are in Fibre Channel disk module 3:

InfiniBand to Networks
InfiniBand connectivity is provided via an InfiniBand interface on the IB CLIM, which connects to a customer-supplied IB switch as part of a Low Latency Solution. The Low Latency Solution also requires Subnet Manager software, either installed on the IB switch or running on another server.
NOTE: IB CLIMs are used only as part of a Low Latency Solution. They do not provide general-purpose InfiniBand connectivity for NonStop systems.
This illustration shows the InfiniBand interfaces, three Gigabit Ethernet interfaces, and ServerNet fabric connections on the DL380 G6 IB CLIM.
Ethernet to Networks
Depending on your configuration, Gigabit Ethernet connectivity is provided by the Ethernet interfaces in one of these CLIMs or the Ethernet ports on a Gigabit Ethernet 4-port ServerNet Adapter (G4SA): IP CLIM Ethernet Interfaces (page 32) Telco CLIM Ethernet Interfaces (page 34) Gigabit Ethernet 4-Port ServerNet Adapter (G4SA) Ethernet Ports (page 34)
The IP CLIM, Telco CLIM, or a G4SA installed in an IOAM enclosure provides Gigabit connectivity between NonStop BladeSystems and Ethernet LANs. The Ethernet port is an end node on the ServerNet and uses either fiber-optic or copper cable for connectivity to user application LANs, as well as for the dedicated service LAN. These are the front views of each CLIM model. For an illustration of the back views, refer to each supported CLIM Ethernet Interface.

IP CLIM Ethernet Interfaces
The DL385 G2 or G5 and DL380 G6 IP CLIMs each have two types of Ethernet configurations: IP CLIM option 1 and IP CLIM option 2.
NOTE: The Telco CLIM Ethernet interfaces are identical to the IP CLIM option 1 configuration. For more information, refer to Telco CLIM Ethernet Interfaces (page 34).
These illustrations show the Ethernet interfaces and ServerNet fabric connections on DL385 G2 or G5 and DL380 G6 IP CLIMs with the IP CLIM option 1 and option 2 configurations:


134 All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about managing your CLIMs using the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. Telco CLIM Ethernet Interfaces These illustrations show the Ethernet interfaces and ServerNet fabric connections on a DL385 G2 or G5 and DL380 G6 Telco CLIM: All CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual. Gigabit Ethernet 4-Port ServerNet Adapter (G4SA) Ethernet Ports For information on the Ethernet ports on a G4SA installed in an IOAM enclosure, refer to the Gigabit Ethernet 4-Port Adapter (G4SA) Installation and Support Guide. 34 System Configuration Guidelines NB50000c and NB54000c

Managing NonStop BladeSystem Resources
This subsection provides procedures and information for managing your NonStop BladeSystem resources and includes these topics: HP Power Regulator for NonStop BladeSystems NB54000c and NB54000c-cg (page 35) Changing Customer Passwords (page 35)
HP Power Regulator for NonStop BladeSystems NB54000c and NB54000c-cg
NOTE: This feature requires OSM server SPR T0682ACV or later.
As of J06.4 and later RVUs, HP Power Regulator is supported on NonStop BladeSystems NB54000c and NB54000c-cg and is used to manage power modes. Power Regulator must be enabled via the OSM Service Connection's Enable/Disable Blade Power Regulator Management action for the system. Once enabled, these Power Regulator modes are available:
Static High Performance Mode (default): In Static High Performance Mode, the CPU runs at the maximum performance/power consumption supported by your NonStop configuration. This mode ensures maximum performance, but does not provide any power savings. This is the default mode for all systems, including shipped systems.
Dynamic Power Savings Mode: In Dynamic Power Savings Mode, IPUs run at maximum power/performance until an IPU idles. Once the IPU idles (and as long as it does not need to off-load work from other IPUs), it moves to the Itanium power-saving state until a task needs to run on it, whereupon the IPU executes instructions at maximum power/performance again.
Static Low Power Mode: In Static Low Power Mode, the CPU is set to a lower power state. This state saves power by having the CPU operate at a lower frequency, with resulting lower performance capacity. The performance impact is workload-dependent.
Here, the CPU is the logical processor that consists of a set of one or more IPUs, and an IPU (Instruction Processing Unit) is a microprocessor core.
NOTE: The OS Control Power Regulator mode is not supported on NonStop systems. If the OS Control Power Regulator Mode is selected, the command fails and the existing power regulator setting is left unchanged.
For information on using Power Regulator, refer to HP SIM for NonStop Manageability. For information on the required Server Firmware bundle (system firmware only), refer to the NonStop Firmware Matrices.
Changing Customer Passwords
NonStop BladeSystems are shipped with default user names and default passwords for the Administrator for certain components and software. Once your system is set up, you should change these passwords to your own passwords.

Table 10 Default User Names and Passwords
Onboard Administrator (OA): default user name Admin, default password hpnonstop. To change, refer to Change the Onboard Administrator (OA) Password (page 36).
CLIM ilo: default user name Admin, default password hpnonstop. To change, refer to Change the CLIM ilo Password (page 36).
CLIM Maintenance Interface (eth0): default user name root, default password hpnonstop. To change, refer to Change the Maintenance Interface (Eth0) Password (page 37).
Interconnect Ethernet switch: default user name admin or user, default password admin or user. To change, refer to Change the Interconnect Ethernet Switch Password (page 37).
NonStop Server Blade MP (ilo): default user name Admin, default password hpnonstop. To change, refer to Change the NonStop ServerBlade MP (ilo) Password NB50000c and NB54000c (page 39).
Remote Desktop: default user name Admin, no default password. To change, refer to Change the Remote Desktop Password (page 39).
Change the Onboard Administrator (OA) Password
To change the OA password:
1. Log in to the OA. (You can use the Launch OA URL action on the processor blade from the OSM Service Connection.)
2. Click the + (plus sign) in front of the Enclosure information on the left.
3. Click the + (plus sign) in front of Users/Authentication.
4. Click Local Users and all users are displayed on the right side.
5. Select Administrator and click Edit.
6. Enter the new password, then confirm it again. Click update user.
7. Keep track of your OA password.
8. Change the password for each OA.
Change the CLIM ilo Password
To change the CLIM ilo password:
1. In OSM, right click on the CLIM and select Actions.
2. In the next screen, in the Available Actions drop-down window, select Invoke ilo and click Perform Action.
3. Select the Administration tab.
4. Select User Administration.
5. Select Admin local user.
6. Select View/Modify.
7. Change the password.
8. Click Save User Information.
9. Keep track of your CLIM ilo password.
10. Change the ilo password for each CLIM.

Change the Maintenance Interface (Eth0) Password
To change the maintenance interface (eth0) password:
1. From the NonStop host system, enter the climcmd command for password:
>climcmd clim-name, ip-address, or host-name passwd
You are prompted for the new password twice. For example:
$SYSTEM STARTUP 3> climcmd c00253 passwd
comforte SSH client version T9999H06_Feb2008_comForte_SSH_0078
Enter new UNIX password: hpnonstop
Retype new UNIX password: hpnonstop
passwd: password updated successfully
Termination Info: 0
2. Change the maintenance interface (eth0) password for each CLIM.
The user name and password for the eth0:0 maintenance provider are the standard NonStop host system ones, for example, super.super, and so on. Other than standard procedures for setting up NonStop host system user names and passwords, nothing further is required for the eth0:0 maintenance provider passwords.
Change the Interconnect Ethernet Switch Password
To change the Interconnect Ethernet switch password:
1. Enable the password change in the Browser Based Interface (BBI).
a. Access the switch via a Telnet login.
b. Issue this command sequence to enable the password change in BBI: /c/sys/access/userbbi e
c. Apply the new configuration by issuing: apply
When the new configuration is applied, a message appears stating the apply is complete with a reminder to save the updated configuration.
d. Save the new configuration by issuing: save
When the new configuration is saved, a confirmation message appears stating that the new configuration was successfully saved to FLASH.
2. Launch the OA using the Launch OA interface action from the enclosure.
3. In the left-side OA window, click Interconnect Bays.
4. In the right-side Interconnect List, click the Management URL that corresponds to the switch. An Internet Explorer window appears and the system management console opens.
5. Log in to the Switch Management window.
6. In the Management console window, click on the Configure context button in the toolbar.
7. In the left-pane window, double-click the folder icon next to the c-class GbE2c Switch to expand the folder's contents as illustrated below.
8. Double-click the System folder.

9. Double-click the User table.
10. The Configuration Table Window appears. Click Change User/Oper/Admin password.
11. The Change User/Oper/Admin password window appears:
a. From the Select User drop-down menu, select admin.

b. In the Set user password dialog box, enter the new password for your Admin account.
c. In the Re-type user password dialog box, re-enter the new password.
d. In the Enter the current admin password dialog box, enter the old admin password.
e. Click Submit.
12. If you don't receive an error message, the submit is successful.
TIP: If you do not click Apply in the next step, the password is not changed.
13. Click Apply on the toolbar:
TIP: In the next step, if you do not click Save and only click Apply, the password change is temporary and only exists for this session. Once the session is closed, the old password is restored.
14. Click Save on the toolbar.
15. The Save Configuration window appears. Click Save.
Change the NonStop ServerBlade MP (ilo) Password NB50000c and NB54000c
To change the NonStop Server Blade MP (ilo) password:
1. Log in to the iLO. (You can use the Launch ilo URL action on the processor blade from the OSM Service Connection.)
2. Select the Administration tab.
TIP: The NB54000c ilo does not have an Administration tab. Instead, use the left pane.
3. Click Local Accounts from the left side window.
4. Select the user on the right-hand side and click the Add/Edit button below.
5. In the new page, enter the new password in the Password confirmation fields, and click Submit.
6. Keep track of your NonStop ServerBlade MP (ilo) password.
7. Change the password for each NonStop ServerBlade MP.
Change the Remote Desktop Password
You must change the Remote Desktop Administrator's password to enable connections to the NonStop system console. To change the password for the Administrator's account (which you have logged onto):
1. Press the Ctrl+Alt+Del keys and the Windows Security dialog appears.
2. Click Change Password.
3. In the Change Password window:
a. Enter the old password.
b. Enter the new password.
c. Click OK.

140 Default Naming Conventions The NonStop BladeSystem implements default naming conventions in the same manner as Integrity NonStop NS-series systems. With a few exceptions, default naming conventions are not necessary for the modular resources that make up a NonStop BladeSystem. In most cases, users can name their resources at will and use the appropriate management applications and tools to find the location of the resource. However, default naming conventions for certain resources simplify creation of the initial configuration files and automatic generation of the names of the modular resources. NOTE: The naming conventions for a CLIM are based on the group, module, slot, port, fiber values of the CLIM s X attachment point. Preconfigured default resource names are: Type of Object Naming Convention Example Description IB CLuster I/O Module (CLIM) Bgroup module slot port fiber B IB CLIM which has the X attachment point to the ServerNet switch port at group 00, module 2, slot 5, port 3, fiber 4 IP CLuster I/O Module (CLIM) Ngroup module slot port fiber N IP CLIM which has the X attachment point to the ServerNet switch port at group 00, module 2, slot 5, port 3, fiber 2 Storage CLIM Sgroup module slot port fiber S Storage CLIM which has the X attachment point to the ServerNet switch port at group 00, module 2, slot 5, port 3, fiber 3 Telco CLIM Ogroup module slot port fiber O Telco CLIM which has the X attachment point to the ServerNet switch port at group 00, module 2, slot 5, port 3, fiber 4 SAS disk volume $SASnumber $SAS20 Twentieth SAS disk volume in the system ESS disk volume $ESSnumber $ESS20 Twentieth ESS disk drive in the system Fiber Channel disk drive $FCnumber $FC0 Tenth Fibre Channel disk drive in the system Tape drive $TAPEnumber $TAPE0 First tape drive in the system Maintenance CIPSAM process $ZTCPnumber $ZTCP0 First maintenance CIPSAM process for the system Maintenance provider ZTCPnumber ZTCP0 First maintenance provider for the system, associated with the CIPSAM process $ZTCP0 Maintenance CIPSAM process $ZTCPnumber $ZTCP Second maintenance CIPSAM process for the system Maintenance provider ZTCPnumber ZTCP Second maintenance provider for the system, associated with the CIPSAM process $ZTCP 40 System Configuration Guidelines NB50000c and NB54000c

141 Type of Object Naming Convention Example Description IPDATA CIPSAM process $ZTC number $ZTC0 First IPDATA CIPSAM process for the system IPDATA provider $ZTC number ZTC0 First IPDATA provider for the system Maintenance Telserv process $ZTNP number $ZTNP Second maintenance Telserv process for the system that is associated with the CIPSAM $ZTCP process Non-maintenance Telserv process $ZTN number $ZTN0 First non-maintenance Telserv process for the system that is associated with the CIPSAM $ZTC0 process Listener process $ZPRPnumber $ZPRP Second maintenance Listener process for the system that is associated with the CIPSAM $ZTCP process Non-maintenance Listener process $LSN number $LSN0 First non-maintenance Listener process for the system that is associated with the CIPSAM $ZTC0 process TFTP process Automatically created by WANMGR None None WANBOOT process Automatically created by WANMGR None None SWAN adapter Snumber S9 Nineteenth SWAN adapter in the system Possible Values of Disk and Tape LUNs The possible values of disk and tape LUN numbers depend on the type of the resource. For a SAS disk, the LUN number is calculated as base LUN + offset. base LUN is the base LUN number for the SAS enclosure. Its value can be 00, 200, 300, 400, 500, 600, 700, 800, or 900, and should be numbered sequentially for each of the SAS enclosures attached to the same CLIM. offset is the bay (slot) number of the disk in the SAS enclosure. For an ESS disk, the LUN number is calculated as base LUN + offset. base LUN is the base LUN number for the ESS port. Its value can be 000, 500, 2000, 2500, 3000, 3500, 4000, or 4500, and should be numbered sequentially for each of the ESS ports attached to the same CLIM. offset is the LUN number of the ESS LUN. For a physical Fibre Channel tape, the value of LUN number can be, 2, 3, 4, 5, 6, 7, 8, or 9, and should be numbered sequentially for each of the physical tapes attached to the same CLIM. For a VTS tape, the LUN number is calculated as base LUN + offset. base LUN is the base LUN number for the VTS port. Its value can be 5000, 500, 5020, 5030, 5040, 5050, 5060, 5070, 5080, or 5090, and should be numbered sequentially for each of the VTS ports attached to the same CLIM. offset is the LUN number of the VTS LUN. Managing NonStop BladeSystem Resources 4
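The disk LUN rules above reduce to a simple base-plus-offset calculation. The short Python sketch below is illustrative only; the function name is ours rather than part of any HP tool, and the example values are simply picked from the ranges listed above (a SAS enclosure base LUN of 200 and a disk in bay 5).

def disk_lun(base_lun, offset):
    # Compute a disk LUN as base LUN + offset.
    # For a SAS disk, offset is the bay (slot) number of the disk in its SAS
    # enclosure; for an ESS or VTS disk, offset is the LUN number of the ESS
    # or VTS LUN. base_lun is the base LUN assigned to that enclosure or port
    # (see the allowed base-LUN values listed above).
    return base_lun + offset

# Hypothetical example: a disk in bay 5 of a SAS enclosure whose base LUN is 200.
print(disk_lun(200, 5))  # prints 205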

142 6 Hardware Configuration in Modular Cabinets This chapter shows locations of hardware components within the 42U modular cabinet for a NonStop BladeSystem. A number of physical configurations are possible. NOTE: Hardware configuration drawings in this chapter represent the physical arrangement of the modular enclosures but do not show PDUs. For information about PDUs in a G2 rack, refer to NonStop BladeSystem Power Distribution G2 Rack (page 20). For information about PDUs in an Intelligent rack, refer to Power Distribution Unit (PDU) Types for an Intelligent Rack (page 69) Maximum Number of Modular Components NB50000c and NB54000c This table shows the maximum number of the modular components installed in a BladeSystem. These values might not reflect the system you are planning and are provided only as an example, not as exact values. 2 Processors 4 Processors 6 Processors 8 Processors c7000 enclosure ServerNet switch in c7000 enclosure IOAM enclosure CLIMs The IOAM maximum requires ServerNet High I/O Switches 2 The CLIM maximum requires ServerNet High I/O Switches Enclosure Locations in Cabinets NB50000c and NB54000c For details about the location of NonStop BladeSystem enclosures and components within a cabinet, including the U location and cabling for enclosures and components, refer to the Technical Document for your system. 42 Hardware Configuration in Modular Cabinets

143 Part III NonStop BladeSystem NB50000c-cg and NB54000c-cg (Carrier Grade) NonStop BladeSystem Carrier Grade Topics Chapters and Related Appendices NonStop BladeSystems Carrier Grade Overview (page 44) Site Preparation Guidelines NB50000c-cg and NB54000c-cg (page 60) System Installation Specifications NB50000c-cg and NB54000c-cg (page 65) System Configuration Guidelines NB50000c-cg and NB54000c-cg (page 76) CAUTION: Information provided here is for reference and planning. Only authorized service providers with specialized training can install or service the NonStop BladeSystem NB50000c-cg. NOTE: For information about the NonStop BladeSystem NB50000c, refer to NonStop BladeSystems NB50000c, NB54000c (page 2).

144 7 NonStop BladeSystems Carrier Grade Overview This part of the manual describes NonStop Carrier Grade BladeSystems, which are used in Central Office and telecommunication environments. NonStop Carrier Grade BladeSystems use a seismic rack with DC input power and use different hardware components, which are described in NonStop BladeSystem NB50000c-cg and NB54000c-cg Hardware (page 46). Characteristics of the individual NonStop Carrier Grade BladeSystems are described in: Characteristics of a NonStop Carrier Grade BladeSystem NB50000c-cg (page 44) Characteristics of a NonStop Carrier Grade BladeSystem NB54000c-cg (page 45) NOTE: NonStop Carrier Grade BladeSystems are also part of solution packages provided by the HP OpenCall software (OCS) group. This manual does not describe installing or configuring HP OCS solutions. Table Characteristics of a NonStop Carrier Grade BladeSystem NB50000c-cg Supported RVU Processor/Processor model Cabinet Main memory Maximum processors Supported processor configurations Maximum CLuster I/O Modules (CLIMs) Minimum CLIMs for fault-tolerance J06.04 and later Intel Itanium/NSE-M 36 U seismic rack 8 GB to 48 GB per logical processor 6 2, 4, 6, 8, 0, 2, 4, or 6 48 CLIMs (total of all types) in a 6 processor system. Maximum 20 CLIMs for Telco CLIM CG 2 Storage CLIMs CG 2 IP CLIMs CG 2 Telco CLIMs CG Disk Storage M8390-2CG SAS Disk Enclosure (page 52). M839-24CG SAS Disk Enclosure (page 52). Maximum M8390 2CG SAS disk drives per Storage CLIM pair Maximum M839 24CG SAS disk drives per Storage CLIM pair 5344-SE DAT units (2 DAT 60 drives each) System console (optional) Fibre Channel disk modules/ioam enclosures/swans NonStop S-series I/O enclosures Enterprise Storage System (ESS) Connection to ServerNet clusters Connection to BladeCluster Solution 48 (in 4 disk enclosures) 96 (in 4 disk enclosures) Supported. Connects to Storage CLIM. AC-powered consoles only. No rack-mounted consoles. Not supported Supported for token-ring ServerNet connectivity or System Signaling Seven (SS7) connectivity for up to 4 S-series I/O enclosures. Supported but not intended for CG environments Supported but not intended for CG environments Supported 44 NonStop BladeSystems Carrier Grade Overview

145 Table 2 Characteristics of a NonStop Carrier Grade BladeSystem NB54000c-cg Supported RVU CLIM DVD (Minimum DVD version required for RVU) Processor/Processor Model Cabinet Main memory Minimum/maximum processors Supported platform configuration Maximum CLuster I/O Modules (CLIMs) Minimum CLIMs for fault-tolerance J06.2 and later Refer to the CLuster I/O Module (CLIM) Software Compatability Guide. Note: this file is preinstalled on new NB54000c/NB54000c-cg systems. Intel Itanium/NSE-AB NOTE: NB50000c-cg and NB54000c-cg server blades cannot coexist in the same system. 36 U seismic rack 6 GB to 64 GB per logical processor (64 GB supported on J06.3 and later RVUs) 2 to 6 3 configuration options that support 2, 4, 6, 8, 0, 2, 4, or 6 processors. In particular, the Flex Processor Bay configuration option offers processor numbering either sequentially or in even/odd format. For more information, see NonStop BladeSystem Platform Configurations NB50000c and NB54000c (page 25). 48 CLIMs (total of all types) in a 6 processor system. Maximum 20 CLIMs for Telco CLIM CG 2 Storage CLIMs CG 2 IP CLIMs CG 2 Telco CLIMs CG Disk Storage M8390-2CG SAS Disk Enclosure (page 52). M839-24CG SAS Disk Enclosure (page 52). Maximum M8390 2CG SAS disk drives per Storage CLIM pair Maximum M839 24CG SAS disk drives per Storage CLIM pair 5344-SE DAT units (2 DAT 60 drives each) System console (optional) Fibre Channel disk modules/ioam enclosures/swans NonStop S-series I/O enclosures Enterprise Storage System (ESS) Connection to ServerNet clusters Connection to BladeCluster Solution HP Power Regulator 48 (in 4 disk enclosures) 96 (in 4 disk enclosures) Supported. Connects to Storage CLIM. AC-powered consoles only. No rack-mounted consoles. Not supported Supported for token-ring ServerNet connectivity or System Signaling Seven (SS7) connectivity for up to 4 S-series I/O enclosures. Supported but not intended for CG environments Supported but not intended for CG environments Supported Supported as of J06.4 or later RVUs. For more information, refer to HP Power Regulator for NonStop BladeSystems NB54000c and NB54000c-cg (page 35). IMPORTANT: There are license considerations for the NonStop BladeSystem NB54000c-cg. Refer to Core Licensing Introduction (page 55). 45

NEBS Required Statements
NonStop BladeSystems NB50000c-cg are designed for installation into a Central Office or similar telecommunications environment. NonStop BladeSystems NB50000c-cg are suitable for installation as part of a Common Bonding Network (CBN).
To ensure proper electrical contact when grounding a BladeSystem CG rack: Use a ground cable of the same or larger size than the largest DC input power conductors, using a 40 °C correction factor. Use a cable constructed of copper, 75 °C minimum rated. Use only Listed two-hole copper compression-type lugs for the ground connector. Before making any crimp connections, coat bare wire and base connectors with anti-oxidant. Use star washers between the lug and rack ground rail to ensure proper ground contact and anti-rotation.
The Battery Return (BR) Input Terminals are considered to be an Isolated DC Return (DC-I). NonStop BladeSystems NB50000c-cg are suitable for connection to intrabuilding or non-exposed wiring or cabling only. Unshielded, twisted-pair (UTP) cables may be used for NEBS and non-NEBS installation.
WARNING! The intra-building port(s) of the equipment or subassembly is suitable for connection to intra-building or unexposed wiring or cabling only. The intra-building port(s) of the equipment or subassembly MUST NOT be metallically connected to interfaces that connect to the OSP or its wiring. These interfaces are designed for use as intra-building interfaces only (Type 2 or Type 4 ports as described in GR-1089-CORE, Issue 5) and require isolation from the exposed OSP cabling. The addition of Primary Protectors is not sufficient protection in order to connect these interfaces metallically to OSP wiring.
This symbol indicates HP systems and peripherals that contain assemblies and components that are sensitive to electrostatic discharge (ESD). Carefully observe the precautions and recommended procedures in this document to prevent component damage from static electricity.
NonStop BladeSystem NB50000c-cg and NB54000c-cg Hardware
This subsection describes the standard and optional hardware components for NonStop BladeSystem carrier grade systems. Seismic Rack (page 47) c7000 CG Enclosure (page 47) IP CLIM CG (page 50) Telco CLIM CG (page 50) CLIM Cable Management Ethernet Patch Panel (page 4) Storage CLIM CG (page 5) M8390-2CG SAS Disk Enclosure (page 52)

M839-24CG SAS Disk Enclosure (page 52) 5344-SE DAT Tape Unit (page 53) Breaker Panel (page 53) Fuse Panel (page 54) System Alarm Panel (page 55) CG Maintenance Switch (page 57) System Console (page 58) NonStop S-Series CO I/O Enclosures (Optional) (page 58)
Seismic Rack
There are two versions of racks for seismic systems: original seismic rack and seismic rack R2. The original seismic rack has a rectangular HP logo. The seismic rack R2 is black and has a round HP logo. Each seismic rack is 36U and has a 200 lb payload limit. For detailed specifications for the seismic rack, see Seismic Rack Specifications NB50000c-cg and NB54000c-cg (page 68). In this document, the words rack and cabinet are used interchangeably.
c7000 CG Enclosure
The c7000 CG enclosure is a Carrier Grade version of the rack-mounted c7000 enclosure used in a NonStop BladeSystem. This enclosure provides integrated processing, power, and cooling capabilities along with connections to the I/O infrastructure and ServerNet connectivity via the ServerNet CG switches. There are two types of ServerNet CG switches: Standard I/O or High I/O. Each pair of ServerNet CG switches in a c7000 CG enclosure must be identical, either Standard I/O or High I/O. However, you can mix ServerNet switches between enclosures.
NOTE: A BladeSystem CG that participates in a BladeCluster uses the BladeCluster ServerNet (Cluster High I/O) CG switch and the Advanced Cluster Hub (ACH) CG. This manual does not describe BladeCluster connections or components. Instead, refer to the BladeCluster Solution Manual.
A c7000 CG enclosure in a NonStop Carrier Grade BladeSystem differs from a commercial grade c7000 enclosure in a few ways. It has:
DC power supplies and DC power inputs instead of AC power supplies and AC power inputs
Seismic bracing (the seismic brace is 1U high and is installed directly above the c7000 CG enclosure)
Three separate ground connections between the chassis and the cabinet
All other aspects, including features, default names, configuration, and group, module, and slot numbering are the same. For the physical specifications for this enclosure, see System Installation Specifications NB50000c-cg and NB54000c-cg (page 65).

DC Input Module Numbering

149 Power Supply Bay Numbering Power Supply LEDs Power LED (green) On Off Off Fault LED 2 (amber) Off On Off Condition Power supply DC output on and OK DC present/standby output on Power supply failure No DC power to power supply NonStop BladeSystem NB50000c-cg and NB54000c-cg Hardware 49

150 Cable Connections IP CLIM CG Connections to Other System Components For information about connecting the c7000 CG enclosure to NonStop S-series CO I/O enclosures and to other system components, including the dedicated LAN, your support provider can refer to the NonStop BladeSystem Hardware Installation Manual. NOTE: If NonStop S-Series CO I/O enclosures are present, CLIMs cannot be connected to port 3 of the ServerNet switches in the c7000 CG enclosure. Ground Cable Connections For detailed information about grounding this enclosure, your support provider can see the BladeSystem Hardware Installation Manual. Power Cable Connections For detailed information about connecting power to this enclosure, your support provider can see the BladeSystem Hardware Installation Manual. The IP CLuster I/O Module (CLIM) CG is a Carrier Grade version of the commercial DL385 G5 or DL380 G6 IP CLIM and functions as a ServerNet Ethernet adapter. For more information, see CLuster I/O Modules (CLIMs) (page 3). The IP CLIM CG: Supports five Gigabit Ethernet ports. These ports can have a copper or fiber-optic interface, depending on the configuration. Can coexist with other types of CLIMs in the same system. Is configured as part of a pair for fault-tolerance. Connects to the maintenance switch using an embedded Ethernet port. Uses DC power supplies instead of AC supplies. Has a ground connection between the CLIM chassis and the cabinet. Has a retainer for the cable management arm, which keeps the cable management arm from moving during a seismic event. For the physical specifications for this CLIM, see System Installation Specifications NB50000c-cg and NB54000c-cg (page 65). Telco CLIM CG The Telco CLuster I/O Module (CLIM) CG is a Carrier Grade version of the commercial DL385 G5 or DL380 G6 Telco CLIM. For more information, see CLuster I/O Modules (CLIMs) (page 3). The Telco CLIM CG is a rack-mounted device that provides I/O communications using protocols common to the Telco environment, including: Session Initiated Protocol (SIP) over internet protocol (IP) Message Transfer Part (MTP) Level 3 User Adaptation Layer (M3UA) over IP The Telco CLIM CG: Supports five Gigabit Ethernet Ports that are available for SIP or M3UA processes. Can coexist with other types of CLIMs in the same system. Is configured as part of a pair for fault-tolerance. Connects to the maintenance switch using an embedded Ethernet port. Uses DC power supplies instead of AC supplies. 50 NonStop BladeSystems Carrier Grade Overview

Has a ground connection between the CLIM chassis and the cabinet. Has a retainer for the cable management arm, which keeps the cable management arm from moving during a seismic event.
For the physical specifications for this CLIM, see System Installation Specifications NB50000c-cg and NB54000c-cg (page 65).
CLIM CG Cable Management Ethernet Patch Panel
The HP CLIM CG Cable Management Ethernet patch panel is a Carrier Grade version of the commercial cable management product used for cabling the IP and Telco CLIM CG connections and is preinstalled in all new systems that are configured with IP or Telco CLIM CGs. The CLIM CG patch panel simplifies and organizes the cable connections to allow easy access to the CLIM's customer-usable interfaces.
Figure 33 Ethernet Patch Panel for CLIM CG
IP and Telco CLIM CGs each have five customer-usable interfaces. The CLIM CG patch panel connects these interfaces and brings the usable interface ports to the CLIM CG patch panel. Each CLIM patch panel has 24 slots, is 1U high, and should be the topmost unit to the rear of the rack. Each CLIM CG patch panel can handle cables for up to five CLIM CGs. It has no power connection. Each CLIM CG patch panel has 6 panes labeled A, B, C, D, E, and F. Each pane has 4 RJ45 ports, and each port is labeled 1, 2, 3, or 4. The RJ45 ports in Panel A have port names: A1, A2, A3, and A4. The factory default configuration depends on how many IP or Telco CLIM CGs and CLIM CG patch panels are configured in the system. For a new system, QMS Tech Doc generates a cable table with the CLIM interface name. This table identifies how the connections between the CLIM CG physical ports and CLIM CG patch panel ports were configured at the factory. If you are adding a CLIM CG patch panel to an existing system, have your service provider refer to the CLuster I/O (CLIM) Installation and Configuration Guide for installation instructions.
Storage CLIM CG and Storage Products
Storage CLIM CG
The Storage CLuster I/O Module (CLIM) CG is a Carrier Grade version of the commercial DL385 G5 or DL380 G6 Storage CLIM and functions as a ServerNet adapter through which several types of storage devices connect to the system. For more information, see CLuster I/O Modules (CLIMs) (page 3). The Storage CLIM CG: Can coexist with other types of CLIMs in the same system. Is configured as part of a pair for fault-tolerance. Connects to the maintenance switch using an embedded Ethernet port. Uses DC power supplies instead of AC supplies. Has a ground connection between the CLIM chassis and the cabinet. Has a retainer for the cable management arm, which keeps the cable management arm from moving during a seismic event.

152 The Storage CLIM CG can connect to these storage devices: SAS Disk Enclosure Model Description Compatibility With CLIM Models DL385 G5 DL380 G6 M8390-2CG SAS Disk Enclosure (page 52) HP M8390-2CG SAS CG Enclosure, a DC-powered NEBS Level 3-certified drive enclosure holding up to 2 SAS drives. Yes Yes 5344-SE DAT Tape Unit (page 53) HP StorageWorks U SAS tape enclosure holding up to SE tape drives. Yes Yes M839-24CG SAS Disk Enclosure (page 52) HP M839-24CG SAS CG Enclosure, a DC-powered NEBS Level 3-certified drive enclosure holding up to 24 SAS drives. No Yes For the physical specifications for this CLIM, see System Installation Specifications NB50000c-cg and NB54000c-cg (page 65). M8390-2CG SAS Disk Enclosure The M8390-2CG SAS disk enclosure is 2U high and supports up to 2 dual-ported 3.5 SAS disk drives. Figure 34 M8390-2CG SAS Disk Enclosure, Front Figure 35 M8390-2CG SAS Disk Enclosure, Back For the physical specifications for this enclosure, see System Installation Specifications NB50000c-cg and NB54000c-cg (page 65). M839-24CG SAS Disk Enclosure The M839-24CG SAS disk enclosure is 2U high and supports up to 24 dual-ported 2.5 SAS disk drives. Figure 36 M CG SAS Disk Enclosure, Front 52 NonStop BladeSystems Carrier Grade Overview

Figure 37 M839-24CG SAS Disk Enclosure, Back
For the physical specifications for this enclosure, see System Installation Specifications NB50000c-cg and NB54000c-cg (page 65).
5344-SE DAT Tape Unit
The 5344-SE DAT tape unit is 1U high and supports two HP DAT 160 internal tape drives. The tape drives are neither dual powered nor dual ported: Each tape drive is independently powered and has its own SAS input.
Figure 38 5344-SE DAT Tape Unit
For the physical specifications for this enclosure, see System Installation Specifications NB50000c-cg and NB54000c-cg (page 65). For detailed information about this tape unit, see the SE Tape Drive Installation and User's Guide.
Enterprise Storage System (ESS)
An Enterprise Storage System (ESS) is a collection of magnetic disks, their controllers, and a disk cache in one or more standalone cabinets. Like the commercial NonStop BladeSystem NB50000c, the NonStop BladeSystem NB50000c-cg supports connecting to the ESS. However, the ESS is not intended for a Carrier Grade environment.
Breaker Panel
The breaker panel requires 2U of space and occupies the top 2U of the cabinet. Power cabling is accessed from the rear of the cabinet. The breaker panel has a current rating of 240A and has dual inputs and 14 outputs (7 outputs per power rail). The breaker panel is configured based on the required outputs:
Outputs 1, 2, and 3 have dual-post terminal blocks, are rated for a maximum of 80A, use 80A breakers, and connect to a single c7000 CG enclosure. If these outputs are not being

used for a c7000 CG enclosure, you can install smaller breakers for use with other equipment (see outputs 4, 5, 6, and 7).
Outputs 4, 5, 6, and 7 have single-post terminal blocks, are rated for a maximum of 50A, and use: 30A breakers if configured for a M8390-2CG SAS disk enclosure; 5A breakers if configured for a CLIM; 2A breakers if configured for the maintenance switch, alarm panel, or 5344-SE DAT unit.
CAUTION: Outputs 4, 5, 6, and 7 are not rated for 80A breakers. Do not install 80A breakers in these outputs.
For information about the physical specifications for the breaker panel, see System Installation Specifications NB50000c-cg and NB54000c-cg (page 65).
Figure 39 Breaker Panel Front View
Figure 40 Breaker Panel Rear View
Fuse Panel
The fuse panel (Figure 41) requires 2U of space. The fuses are accessed from the front of the cabinet. Power cabling is accessed from the rear of the cabinet. The fuse panel has a current total output load rating of 80A. It accepts power from both power rails and distributes it to four individually fused TPA outputs and to five individually fused GMT outputs per rail. GMT outputs are rated up to 5A. TPA outputs are rated up to 50A. Power cables

from enclosures that receive power from the fuse panel connect to the outputs on the rear of the fuse panel (Figure 41). The fuse panel does not have preconfigured fuses. New systems come configured by manufacturing according to the order specifications. This fuse panel is the same fuse panel used in NonStop NS-Series Carrier Grade Servers, but uses fuses of different sizes. For the physical specifications for this enclosure, see System Installation Specifications NB50000c-cg and NB54000c-cg (page 65).
Figure 41 Fuse Panel
NOTE: For your safety, only specially certified HP personnel can work with the fuse panel or any DC-powered components. The descriptions in this section are provided for reference only.
System Alarm Panel
The system alarm panel is a rack-mounted device that provides SNMP, visual, and audible alarm indicators and relays for generating four levels of alarms to alert the customer to the severity of a hardware error in the system. This system alarm panel: Can be installed in the seismic rack only. Is controlled by software provided by OCS. The alarm panel supports SNMP alarm inputs.
Alarm levels are:
Critical: A severe service-affecting condition has occurred. Immediate corrective action is required.
Major: A serious disruption of service, a malfunction, or a FRU failure has occurred. Immediate corrective action is required.
Minor: A non-service-affecting condition has occurred. This alarm does not require immediate corrective action.
Power: A power fault has occurred.
Idle: No alarms are active.

Figure 42 System Alarm Panel Front
Figure 43 System Alarm Panel Rear

System Alarm Panel Connections
System Alarm Panel Ground Connections
A ground cable connects from the ground stud on the back of the alarm panel to the nearest cabinet grounding rail.
System Alarm Panel Power Connections
The alarm panel can connect to either a 2A breaker terminal on the breaker panel or a 2A GMT fuse terminal on the fuse panel.
System Alarm Panel Connection to Maintenance Switch
An Ethernet cable connects from the RJ45 connector on the back of the alarm panel to any available port on the maintenance switch. For details on connecting the alarm panel to the dedicated service LAN, see LAN Connection in the Alarm Panel Manual.
Other System Alarm Panel Connections
Two female DB-9 craft ports (standard EIA-232 serial port), one on the front and one on the back, support communications for configuring basic settings of the alarm panel. Only one of these ports can be used at a time. The system alarm panel can also be configured using a web interface. One DB-15 male connector on the back of the alarm panel supports Telco alarm input. Cables that connect to this port are supplied by the customer. One DB-15 male connector on the back of the alarm panel supports Telco alarm output. Cables that connect to this port are supplied by the customer. For detailed information about the system alarm panel, see the Alarm Panel Manual.
Craft Ports for Configuring the System Alarm Panel
The craft port is a standard EIA-232 serial port with default settings of 9600 baud, 8 data bits, 1 stop bit, no parity, and no hardware handshaking. Cables that connect to this port are supplied by the customer. The craft port is used only if the web interface cannot be used (when the IP address is unknown or DNS is not working). The web interface is the primary interface for configuring the alarm panel.
System Alarm Panel Dry-Contact Connector for Telco Alarm Output
One DB-15 male connector on the back of the alarm panel supports Telco alarm output. Cables that connect to this port are supplied by the customer. Only the dry-contact outputs are supported. They can be connected to external alarm devices. For information, see the Alarm Panel Manual.
CG Maintenance Switch
The CG maintenance switch for this system is a GarrettCom Magnum 6K25 Fiber Switch. This switch: Has 24 10/100 Ethernet ports. These ports are labeled A1 through A8, B1 through B8, and C1 through C8 (instead of 1 through 24). Uses two DC power inputs. Has a ground connection between the switch chassis and the cabinet.
Figure 44 CG Maintenance Switch, Front

Figure 45 CG Maintenance Switch, Back

Cable Connections

Connections to Other System Components

The maintenance switch connects to these system components on a dedicated service LAN:

Each c7000 CG enclosure, through the two Onboard Administrators and through the two enclosure interconnect Ethernet switches
Each CLIM, through the ilo port and through the embedded Ethernet port for the OSM Management Interface (eth0)
Each system alarm panel
Each system console

CG Maintenance Switch Ground Connection

The ground stud is located at the back of the maintenance switch, to the right of the power connectors. A cable connects from this ground stud to the cabinet grounding rail.

CG Maintenance Switch Power Connections

The maintenance switch can connect to either a 2A fuse on the fuse panel or a 2A breaker on the breaker panel. The power terminals are at the back of the switch, on the right side.

System Console

A system console is an HP-approved personal computer (PC) running maintenance and diagnostic software for NonStop BladeSystems. When supplied with a new NonStop BladeSystem, system consoles have factory-installed HP and third-party software for managing the system. You can install software upgrades from the HP NonStop System Console Installer DVD.

NonStop Carrier Grade BladeSystems do not support rack-mounted system consoles. The system console is the same as the AC-powered system console used for the commercial NonStop BladeSystem. As of J06.07, on DC-powered NonStop BladeSystems, the CLuster I/O Modules (CLIMs) are configured to run the DHCP, TFTP, and DNS services instead of the console. For more information, see DHCP, TFTP, and DNS Services (page 79).

NOTE: The NonStop system console must be configured with some ports open. For more information, see the NonStop System Console Installer Guide.

NonStop S-Series CO I/O Enclosures (Optional)

Up to four NonStop S-series CO I/O enclosures (Groups 1-4) can be connected to the ServerNet switches in a c7000 enclosure (Group 100 only). The supported S-series CO I/O connections are for the Token-Ring ServerNet Adapter (TRSA) or 6763 Common Communications ServerNet Adapter 2 (CCSA 2). These adapters can coexist in the same system configuration. No other S-series ServerNet adapters are supported. These connections are made using an MTP-SC cable between the IOMF 2 FRUs (slots 50 and 55) in the S-series CO I/O enclosure and port 3 of the ServerNet switches in the c7000 enclosure.

Each IOMF 2 connection supports up to 2 TRSAs or 2 CCSA 2s. Only ServerNet switch port 3 supports S-series CO I/O connections, and all S-series connections are made to this one port.

NOTE: If CLIMs are already connected to port 3, you must move them to another port on the ServerNet switches and reconfigure them.

NonStop S-series CO I/O enclosures in NonStop Carrier Grade BladeSystems have these characteristics:

New NonStop S-series CO I/O enclosures can be installed in seismic racks.
A customer that has existing S-series I/O enclosures in other types of cabinets can connect those I/O enclosures to a NonStop Carrier Grade BladeSystem, but any alarm panels from older systems cannot be used. NonStop Carrier Grade BladeSystems support only the alarm panel described in System Alarm Panel (page 55).
No 46xx disk drives are supported.
Only the TRSA or CCSA 2 (with up to 4 SS7TE3 plug-in cards (PICs)) is supported. No other configurations of the CCSA 2 are supported, and no other ServerNet adapters are supported.
The CO I/O enclosure assembly requires a total of 20 U of contiguous space.
The seismic rack can contain only one NonStop S-series CO I/O enclosure assembly.
CO I/O enclosures receive power directly from site power rail A and site power rail B. They do not connect to the fuse panel or to the breaker panel.

NOTE: If NonStop S-series CO I/O enclosures are present, CLIMs cannot be connected to port 3 of the ServerNet switches in the c7000 CG enclosure.
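The connection rules above reduce to a few checks that can be applied to a cabling plan before installation: S-series CO I/O enclosures connect only to port 3 of the ServerNet switches, CLIMs must not occupy port 3 when S-series enclosures are present, and at most four CO I/O enclosures are supported. The sketch below is illustrative only; the port-map structure and device labels are hypothetical, not an HP-supplied format.

# Hypothetical cabling plan: ServerNet switch port number -> devices on that port.
# The device labels are free-form example strings, not real configuration names.
def check_co_io_rules(port_map: dict[int, list[str]]) -> list[str]:
    problems = []
    s_series = [d for devs in port_map.values() for d in devs if d.startswith("S-series")]
    if len(s_series) > 4:
        problems.append("more than four S-series CO I/O enclosures are connected")
    for port, devices in port_map.items():
        for dev in devices:
            if dev.startswith("S-series") and port != 3:
                problems.append(f"{dev} must connect to port 3, not port {port}")
            if dev.startswith("CLIM") and port == 3 and s_series:
                problems.append(f"{dev} cannot share port 3 with S-series CO I/O")
    return problems

if __name__ == "__main__":
    plan = {3: ["S-series CO I/O enclosure 1", "CLIM (storage pair 1, primary)"],
            4: ["CLIM (storage pair 1, backup)"]}
    for issue in check_co_io_rules(plan):
        print("WARNING:", issue)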

8 Site Preparation Guidelines NB50000c-cg and NB54000c-cg

This chapter describes power, environmental, and space considerations for your site. For information about the physical specifications for system cabinets and components, see System Installation Specifications NB50000c-cg and NB54000c-cg (page 65).

Modular Cabinet Power and I/O Cable Entry

Power and I/O cables can enter the server from either the top or the bottom of the cabinets, depending on how the cabinets are ordered from HP and the routing of the power feeds at the site:

Cabinets are optimized for top-down cabling.
The design also accommodates an under-floor cabling system if the cabinets are installed on a computer room raised floor.

Emergency Power-Off Connections

NonStop Carrier Grade BladeSystems do not contain batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes, so they do not require connection to a site EPO switch.

Electrical Power and Grounding Quality

Proper design and installation of a power distribution system for a NonStop server requires specialized skills, knowledge, and understanding of appropriate electrical codes and the limitations of the power systems for computer and data processing equipment. For power and grounding specifications, see Enclosure AC Input (page 93) for the server you are installing.

NOTE: All cabinets in a system must use the same type of site input power (either AC power or DC power).

Power Quality

This equipment is designed to operate reliably over a wide range of voltages and frequencies. However, damage can occur if these ranges are exceeded. Severe electrical disturbances can exceed the design specifications of the equipment. Common sources of such disturbances are:

Fluctuations occurring within the facility's distribution system
Utility service low-voltage conditions (such as sags or brownouts)
Wide and rapid variations in input voltage levels
Electrical storms
Large inductive sources (such as motors and welders)
Faults in the distribution system wiring (such as loose connections)

Computer systems can be protected from the sources of many of these electrical disturbances by using:

A dedicated power distribution system
Power conditioning equipment
Lightning arresters on power cables to protect equipment against electrical storms

For steps to take to ensure proper power for the servers, consult with your HP site preparation specialist or power engineer.

Power Consumption

Because each cabinet houses a unique combination of enclosures, calculate the total power consumption for the hardware installed in each cabinet using the worksheet for the server you are installing. Then add together the total power consumption values for all the cabinets in the system. For enclosure power specifications and power worksheets, see Enclosure AC Input (page 93).

Cooling and Humidity Control

Do not rely on an intuitive approach to cooling design, or on simply achieving an energy balance (that is, summing the total power dissipation from all the hardware and sizing a comparable air conditioning capacity). Today's high-performance servers use semiconductors that integrate multiple functions on a single chip with very high power densities. These chips, plus high-power-density mass storage and power supplies, are mounted in ultra-thin server and storage enclosures, which are then deployed into computer racks in large numbers. This higher concentration of devices results in localized heat, which increases the potential for hot spots that can damage the equipment. Additionally, variables in the installation site layout can adversely affect air flows and create hot spots by allowing hot and cool air streams to mix. Studies have shown that above 70°F (20°C), every increase of 18°F (10°C) reduces long-term electronics reliability by 50%. Because of high heat densities and hot spots, an accurate assessment of air flow around and through the server equipment and a specialized cooling design are essential for reliable server operation. Consult with your HP cooling consultant or your heating, ventilation, and air conditioning (HVAC) engineer.

NOTE: Failure of site cooling with the server continuing to run can cause rapid heat buildup and excessive temperatures within the hardware. Excessive internal temperatures can result in full or partial system shutdown. Ensure that the site's cooling system remains fully operational when the server is running.

Because each cabinet houses a unique combination of enclosures, calculate the total heat dissipation for the hardware installed in each cabinet using the worksheet for the server you are installing. Then add together the total heat dissipation values for all the cabinets in the system. For air temperature levels at the site, see the temperature specifications for the server you are installing.

Cooling Airflow Direction

Each enclosure includes its own forced-air cooling fans or blowers. Air flow for each enclosure enters from the front of the cabinet and rack and exhausts at the rear.

Weight

Because each cabinet houses a unique combination of enclosures, total weight must be calculated based on what is in the specific cabinet. For enclosure weights and weight worksheets, see System Installation Specifications NB50000c-cg and NB54000c-cg (page 65).

Flooring

These servers can be installed either on the site's floor with the cables entering from above the equipment or on raised flooring with power and I/O cables entering from underneath. Because cooling airflow through each enclosure is front-to-back, raised flooring is not required for system cooling.

WARNING! An unsecured cabinet is prone to tipping. Seismic and Carrier Grade cabinets are designed to be bolted to the floor. Because the cabinet has no other provisions to prevent tipping, it must never be put to use without first being bolted to the floor. Failure to do so can result in product damage and serious personal injury or death.
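The per-cabinet totals called for above (power consumption under "Power Consumption", heat dissipation under "Cooling and Humidity Control") are straightforward sums over the enclosures in each cabinet, with heat usually expressed in BTU/hour (1 watt is roughly 3.412 BTU/hour). The sketch below is a worked illustration only; the enclosure names and VA figures are placeholders, not values from the worksheets in this guide.

# Illustrative per-cabinet roll-up of power (VA) and heat (BTU/hr).
# All numbers below are made-up placeholders; use the worksheet values
# for your actual configuration.
WATTS_TO_BTU_PER_HR = 3.412

def cabinet_totals(enclosures: list[tuple[str, int, float]]) -> tuple[float, float]:
    """enclosures: (name, quantity, max VA per enclosure). Returns (VA, BTU/hr).

    Treats VA as watts for the heat estimate, which is conservative for
    planning purposes.
    """
    total_va = sum(qty * va for _, qty, va in enclosures)
    return total_va, total_va * WATTS_TO_BTU_PER_HR

if __name__ == "__main__":
    cabinet = [
        ("c7000 CG enclosure (example)", 1, 5000.0),   # placeholder VA
        ("Storage CLIM CG (example)",    2, 400.0),    # placeholder VA
        ("SAS disk enclosure (example)", 2, 350.0),    # placeholder VA
    ]
    va, btu = cabinet_totals(cabinet)
    print(f"Cabinet total: {va:.0f} VA, about {btu:.0f} BTU/hr")
    # Repeat per cabinet, then add the cabinet totals for the whole system.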

162 Seismic and Carrier Grade cabinets are designed to be bolted to the floor. In addition, Network Equipment-Building System (NEBS) standards require that cabinets be bolted securely to a solid concrete floor. However, the customer can choose to install this system in a computer room with a raised floor. If so, the typical computer room raised floor must be reinforced to accommodate the extra weight of the servers and to provide for the desired level of seismic protection. The site floor structure and any raised flooring (if used) must be able to support the total weight of the installed computer system as well as the weight of the individual cabinets and their enclosures as they are moved into position. To determine the total weight of each cabinet with its installed enclosures, see the weight worksheet in System Installation Specifications NB50000c-cg and NB54000c-cg (page 65) for the server you are installing. Describing how to build a raised floor that can accommodate the weight of the server, that meets all NEBS standards, and that complies with local regulations, is beyond the scope of this manual. These requirements must be investigated by the customer or the customer s installation provider. The customer assumes all risk in the using the NonStop Carrier Grade BladeSystem on a raised floor. For your site s floor system, consult with your HP site preparation specialist or an appropriate floor system engineer. Dust and Pollution Control NonStop servers do not have air filters. Any computer equipment can be adversely affected by dust and microscopic particles in the site environment. Airborne dust can blanket electronic components on printed circuit boards inhibiting cooling airflow and causing premature failure from excess heat, humidity, or both. Metallically conductive particles can short circuit electronic components. Tape drives and some other mechanical devices can experience failures resulting from airborne abrasive particles. For recommendations to keep the site as free of dust and pollution as possible, consult with your heating, ventilation, and air conditioning (HVAC) engineer or your HP site preparation specialist. Zinc Particulates Over time, fine whiskers of pure metal can form on electroplated zinc, cadmium, or tin surfaces such as aged raised flooring panels and supports. If these whiskers are disturbed, they can break off and become airborne, possibly causing computer failures or operational interruptions. This metallic particulate contamination is a relatively rare but possible threat. Kits are available to test for metallic particulate contamination, or you can request that your site preparation specialist or HVAC engineer test the site for contamination before installing any electronic equipment. Space for Receiving and Unpacking the System Identify areas that are large enough to receive and unpack the system from its shipping cartons and pallets. Be sure to allow adequate space to remove the system equipment from the shipping pallets using supplied ramps. Also be sure adequate personnel are present to remove each cabinet from its shipping pallet and to safely move it to the installation site. Ensure sufficient pathways and clearances for moving the server equipment safely from the receiving and unpacking areas to the installation site. Verify that door and hallway width and height as well as floor and elevator loading will accommodate not only the server equipment but also all required personnel and lifting or moving devices. 
If necessary, the customer can elect to enlarge or remove any obstructing doorway or wall. For specific guidelines, see the specifications for the cabinets used in the system you are installing. Operational Space When planning the layout of the server site, use the cabinet dimensions, door swing, and service clearances for the cabinet type(s) supported by the server you are installing. Because location of 62 Site Preparation Guidelines NB50000c-cg and NB54000c-cg

the lighting fixtures and electrical outlets affects servicing operations, consider an equipment layout that takes advantage of existing lighting and electrical outlets. Also consider the location and orientation of current or future air conditioning ducts and airflow direction, and eliminate any obstructions to equipment intake or exhaust air flow. Space planning should also include the possible addition of equipment or other changes in space requirements. Depending on the current or future equipment installed at your site, layout plans can also include provisions for:

Channels or fixtures used for routing data cables and power cables
Access to air conditioning ducts, filters, lighting, and electrical power hardware
Communications cables, patch panels, and switch equipment
Power conditioning equipment
Storage area or cabinets for supplies, media, and spare parts

Communications Interference

HP system compliance tests are conducted with HP supported peripheral devices and shielded cables, such as those received with the system. The system meets interference requirements of all countries in which it is sold. These requirements provide reasonable protection against interference with radio and television communications. Installing and using the system in strict accordance with HP instructions minimizes the chances that the system will cause radio or television interference. However, HP does not guarantee that the system will not interfere with radio and television reception. Take these precautions:

Use only shielded cables.
Install and route the cables per the instructions provided.
Ensure that all cable connector screws are firmly tightened.
Use only HP supported peripheral devices.
Ensure that all panels and cover plates are in place and secure before system operation.

Electrostatic Discharge

NOTE: For your safety, only specially trained, authorized service providers can service the NB50000c-cg. The practices described in this section refer to any NonStop system and are provided for reference only. They are not intended to be performed by untrained personnel on a DC-powered system.

Preventing Electrostatic Discharge

To prevent damaging the system, be aware of the precautions you must follow to set up the system or handle parts. Discharge of static electricity from a finger or other conductor may damage system boards or other static-sensitive devices. This type of damage can reduce the life expectancy of the device. To prevent electrostatic damage, observe the following precautions:

Avoid hand contact by transporting and storing products in static-safe containers.
Keep electrostatic-sensitive parts in their containers until they arrive at static-free workstations.
Place parts on a grounded surface before removing them from containers.
Avoid touching pins, leads, or circuitry.
Always be properly grounded when touching a static-sensitive component or assembly.

Grounding Methods to Prevent Electrostatic Discharge

Several methods are used for grounding. Use one or more of the following methods when handling or installing electrostatic-sensitive parts:

Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist straps are flexible straps with a minimum of 1 megohm ±10 percent resistance in the ground cords. To provide proper ground, wear the strap snug against the skin.
Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet when standing on conductive floors or dissipating floor mats.
Use conductive field service tools.
Use a portable field service kit with a folding static-dissipating work mat.

If you do not have any of the suggested equipment for proper grounding, have an authorized reseller install the part. For more information on static electricity or assistance with product installation, contact an authorized reseller.

9 System Installation Specifications NB50000c-cg and NB54000c-cg

This section provides specifications necessary for system installation planning.

DC Power Distribution NB50000c-cg and NB54000c-cg

Figure 46 DC Power Distribution for Sample Single Cabinet System

Figure 47 Sample Power Distribution for Seismic Rack and Carrier Grade Cabinet

166 Enclosure Power Loads NB50000c-cg and NB54000c-cg Enclosure Type Power Lines per Enclosure Typical Power Consumption (VA) Maximum Power Consumption (VA) 2 NB50000c-cg Server Blades: BL860c blade NB54000c-cg Server Blades: Server Blades: BL860c i2 blade 4 (6GB RAM) BL860c i2 blade (24GB RAM) BL860c i2 blade (32GB RAM) BL860c i2 blade (48GB RAM) BL860c i2 blade (64GB RAM) Common products used by both the NB50000c-cg and NB54000c-cg BLC7000 CG enclosure ,000 BLC700 CG enclosure 6 500,750 NonStop S-Series CO I/O Enclosure Fuse Panel BladeSystem ServerNet Switch (Standard I/O) BladeSystem ServerNet Switch (High I/O) DL385 G5 CLIM CG DL380 G6 CLIM CG M8390-2CG SAS disk enclosure, empty M839-24CG SAS disk enclosure, empty SAS 2.5 inches disk drive 46GB, 5k rpm SAS 2.5 inches disk drive 300GB, 5k rpm SAS 3.5 in disk drive Breaker panel CG Maintenance switch System Installation Specifications NB50000c-cg and NB54000c-cg

167 Enclosure Type Power Lines per Enclosure Typical Power Consumption (VA) Maximum Power Consumption (VA) 2 System Alarm panel SE DAT Unit Typical = measured at 22C ambient temp 2 Maximum = measured at 50C ambient temp 3 All BL860c server blades measured with.6ghz dual-core processor, 6GB RAM, and ServerNet Mezzanine Card 4 All BL860c i2 server blades measured with.73ghz quad-core processor and ServerNet mezzanine card. Breaker Panel and Fuse Panel Power Specifications NB50000c-cg and NB54000c-cg Table 3 Breaker Panel Power Specifications Specification Value Outputs Number of outputs with 80A rating (per power rail) Number of outputs with maximum 50A rating (per power rail) Maximum output load A continuous Input requirements Nominal voltage Operating voltage Max input rating Number of inputs Site branch circuit protection -48/-60 VDC -36VDC to -72VDC 240A per input 2 As appropriate for maximum total load rating. Table 4 Fuse Panel Power Specifications Specification Value Outputs Number of TPA (maximum 50A rating) outputs per power rail Number of GMT (maximum 5A rating) outputs per power rail Maximum output load A Input requirements Nominal voltage Operating voltage Max input rating Number of inputs Site branch circuit protection -48/-60 VDC -36VDC to -72VDC 80A per input 2 As appropriate for maximum total load rating. DC Power Distribution NB50000c-cg and NB54000c-cg 67

Dimensions and Weights NB50000c-cg and NB54000c-cg

Seismic Rack Specifications NB50000c-cg and NB54000c-cg

All information for connecting, grounding, and installing the NonStop BladeSystem NB50000c-cg or NB54000c-cg is available to your qualified support provider.

Specification U Height Width Depth Cable Entry Max Load Rack Weight Packaging and Pallet Weight Including side panels. Value 36U 26.7 in (67.8 cm) 39.4 in (100.0 cm) Top 1200 lbs (544.3 kg) 500 lbs (226.8 kg) Original seismic rack is 500 lbs (226.8 kg) Seismic rack R2 is 485 lbs (220 kg) 200 lbs (90.7 kg)

Floor Space Requirements

Minimum Recommended Measurement in cm in cm Front (appearance side) clearance Rear (service side) clearance Space between cabinets < 1.0 < 2.5 Cabinet pitch

Unpacking area: When moving the cabinet off the shipping pallet, you need approximately 9 feet (2.74 meters) on one side of the cabinet to allow you to slide the pallet out from under the cabinet after the cabinet has been raised on the casters.

Unit Sizes NB50000c-cg and NB54000c-cg

Enclosure Type NonStop S-Series CO I/O Enclosure Fuse Panel c7000 CG enclosure (BLC7000 CG and BLC700 CG) DL385 G5 CLIM CG DL380 G6 CLIM CG CLIM CG Patch Panel M8390-2CG SAS disk enclosure M839-24CG SAS disk enclosure Breaker panel Height (U) Contact your service provider 2 (includes seismic brace)

169 Enclosure Type CG Maintenance switch System Alarm panel 5344-SE DAT Unit Height (U) Enclosure Dimensions NB50000c-cg and NB54000c-cg Enclosure Type Height Width Depth in cm in cm in cm NonStop S-Series CO I/O Enclosure (including safety cover) Fuse Panel (without safety cover) c7000 CG enclosure (BLC7000 CG and BLC700 CG) DL385 G5 CLIM CG DL380 G6 CLIM CG in 69.2 CLIM CG Ethernet Patch Panel M8390-2CG SAS disk enclosure M839-24CG SAS disk enclosure Breaker panel (including safety cover) 26 (without safety cover) CG Maintenance switch System Alarm panel SE DAT Unit Dimensions and Weights NB50000c-cg and NB54000c-cg 69

170 Cabinet and Enclosure Weights Worksheet NB50000c-cg and NB54000c-cg The total weight of each cabinet is the sum the weight of the cabinet plus each enclosure installed in it. All weights are approximate. Use this worksheet in Rack Weight Worksheet (page 70) to calculate the weight. Table 5 Rack Weight Worksheet Weight Worksheet for Rack Number Enclosure Type Number of Enclosures Weight lb kg Total lb kg Breaker panel Fuse panel Alarm panel c7000 CG enclosure empty (BLC7000 CG and BLC700 CG) c7000 CG enclosure fully-loaded (BLC7000 CG and BLC700 CG) DL385 G5 CLIM CG DL380 G6 CLIM CG CLIM CG Patch Panel M8390-2CG SAS disk enclosure (empty) M839-24CG SAS disk enclosure (empty) SE DAT unit CG Maintenance switch NonStop S-series CO I/O enclosure assembly (including CIA/SAP, TICs, and air baffle) Total Payload Seismic Cabinet, 36 U Total Weight Maximum payload weight for the 36U cabinet: 200 lbs (544.3 kg). 2 Weight of R2 seismic rack Environmental Specifications NB50000c-cg and NB54000c-cg To calculate the heat dissipation for each seismic rack, use the worksheet for your system type. Either: Heat Dissipation Worksheet for Seismic Rack NB50000c-cg (page 7) Heat Dissipation Worksheet for Seismic Rack NB54000c-cg (page 72). 70 System Installation Specifications NB50000c-cg and NB54000c-cg

171 Heat Dissipation Specifications and Worksheets Carrier Grade Table 6 Heat Dissipation Worksheet for Seismic Rack NB50000c-cg Heat Dissipation Worksheet for Rack Number Enclosure Type Number Installed Typical Unit Heat (BTU/hour) Maximum Unit Heat (BTU/hour) Total (BTU/hour) Alarm Panel 7 34 BLC7000 CG BLC700 CG BL860c blade (for NB50000c-cg) BladeSystem ServerNet Switch (Standard I/O) BladeSystem ServerNet Switch (High I/O) DL385 G5 CLIM CG DL380 G6 CLIM CG M8390-2CG disk enclosure, empty M839-24CG disk enclosure, empty SAS 3.5 in disk drive SAS 2.5 in disk drive 46GB, 5k rpm 4 28 SAS 2.5 in disk drive 300GB, 5k rpm SE DAT unit 5 02 CG Maintenance switch 9 36 NonStop S-series CO I/O enclosure Enclosure Type Number Installed Unit Heat (Btu/hour with one line powered) Unit Heat (Btu/hour with both lines powered) Breaker Panel 77 x (breaker panel load /480) Fuse panel x (fuse panel load /60) Total Heat (Btu/hour) Assumes full load per side with breaker panel dissipating 20W per side 2 Breaker panel load is the total nominal current on both sides Environmental Specifications NB50000c-cg and NB54000c-cg 7

172 3 Maximum value. Assumes full load per side of 70W. 4 Fuse panel load is the total nominal current on both sides Table 7 Heat Dissipation Worksheet for Seismic Rack NB54000c-cg Heat Dissipation Worksheet for Rack Number Enclosure Type Number Installed Typical Unit Heat (BTU/hour) Maximum Unit Heat (BTU/hour) Total (BTU/hour) Alarm Panel 7 34 BLC7000 CG BLC700 CG BL860c i2 blade (for NB54000c-cg) BladeSystem ServerNet Switch (Standard I/O) BladeSystem ServerNet Switch (High I/O) DL385 G5 CLIM CG DL380 G6 CLIM CG M8390-2CG disk enclosure, empty M839-24CG disk enclosure, empty SAS 3.5 in disk drive SAS 2.5 in disk drive 46GB, 5k rpm 4 28 SAS 2.5 in disk drive 300GB, 5k rpm SE DAT unit 5 02 CG Maintenance switch 9 36 NonStop S-series CO I/O enclosure Enclosure Type Number Installed Unit Heat (Btu/hour with one line powered) Unit Heat (Btu/hour with both lines powered) Breaker Panel 77 x (breaker panel load /480) Fuse panel x (fuse panel load /60) Total Heat (Btu/hour) Assumes full load per side with breaker panel dissipating 20W per side 2 Breaker panel load is the total nominal current on both sides 72 System Installation Specifications NB50000c-cg and NB54000c-cg

173 3 Maximum value. Assumes full load per side of 70W. 4 Fuse panel load is the total nominal current on both sides Operating Temperature, Humidity, and Altitude Specification Value Temperature range Operating Non operating -5 to 50 ºC ambient temperature -40 to 70 ºC Relative humidity (non-condensing) 2 Operating Non operating 5 to 90% relative humidity 5 to 93% relative humidity Altitude Operating Seismic resistance 40 ºC from sea level to 6,000 ft. 30 ºC from 6,000 ft. to 3,000 ft. Earthquake Zone 4 Temperature ratings are shown for sea level. No direct sunlight allowed. 2 Storage maximum humidity of 93% is based on a maximum temperature of 40 C. Power Load Worksheet NB50000c-cg and NB54000c-cg Enclosures in Seismic Racks Power and current specifications for seismic racks are the sum of the power and current specifications for each enclosure that are installed in the cabinet. Use the worksheet in Power Load Worksheet for Seismic Rack (page 73) to calculate the power load for each rack and to indicate the breaker panel and fuse panel configurations. Table 8 Power Load Worksheet for Seismic Rack Power Load Worksheet for Rack Number Enclosure Type Number of Enclosures Maximum Watts per Enclosure Subtotal Maximum Amps per Enclosure at -36V Breaker Panel Output Breaker Size Fuse Panel 2 Fuse Size BLC7000 CG Output, 2, and 3 80A BLC700 CG Output, 2, and 3 80A DL385 G5 CLIM CG DL380 G6 CLIM CG M8390-2CG SAS disk enclosure M839-24CG SAS disk enclosure Power Load Worksheet NB50000c-cg and NB54000c-cg 73

174 Table 8 Power Load Worksheet for Seismic Rack (continued) Power Load Worksheet for Rack Number Enclosure Type Number of Enclosures Maximum Watts per Enclosure Subtotal Maximum Amps per Enclosure at -36V Breaker Panel Output Breaker Size Fuse Panel 2 Fuse Size DAT Tape System Alarm Panel Maintenance Switch NonStop S-Series CO I/O Enclosure N/A N/A N/A N/A Total Breaker panel output options are through 7 2 Fuse panels have a maximum of 4 TPA fuses and 5 GMT fuses 3 Current per feed with 3x power supplies running (one side powered) 4 Current per feed with 3x power supplies running (one side powered) 5 Choose breaker panel with breaker size 5A or fuse panel with 5A GMT or TPA fuse 6 Choose breaker panel with breaker size 5A or fuse panel with 5A GMT or TPA fuse 7 This enclosure is using 30A breaker or 30A TPA fuse 8 This enclosure is using 30A breaker or 30A TPA fuse 9 Choose breaker panel with breaker size 2A or fuse panel with GMT fuse Sample Configuration Carrier Grade This subsection contains completed planning information for a sample Carrier Grade BladeSystem that consists of: Eight processors (one c7000 CG enclosure) Two IP CLIM CG enclosures Two Storage CLIM CG enclosures Two Telco CLIM CG enclosures Two M8390-2CG SAS disk enclosures One 5344-SE DAT unit One alarm panel for each seismic rack One CG maintenance switch Rack contains: A c7000 CG enclosure in U- IP CLIM CG enclosures in U2-3 and in U4-5 Storage CLIM CG enclosures in U6-7 and in U8-9 M8390-2CG SAS disk enclosures in U20-2 and U22-23 Telco CLIM CG enclosures in U24-25 and in U26-27 A 5344-SE DAT unit enclosure in U30 A CG maintenance switch enclosure in U3 74 System Installation Specifications NB50000c-cg and NB54000c-cg

175 An alarm panel enclosure in U29 A fuse panel in U33-34 A breaker panel in U35-36 A CLIM CG Ethernet patch panel in U4 Table 9 Completed Weight Worksheet for Sample System Rack Number Weight Worksheet for Rack Number Enclosure Type Number of Enclosures Weight lb kg Total lb kg Breaker panel Fuse panel Alarm panel c7000 CG CLIM CG Ethernet Patch Panel DL385 G5 CLIM CG M8390-2CG SAS disk enclosure SE DAT unit CG Maintenance switch NonStop S-series CO I/O enclosure assembly (including CIA/SAP, TICs, and air baffle) Total Payload Seismic Cabinet, 36 U Total Weight Maximum payload weight for the 36U cabinet: 200 lbs (544.3 kg). 2 Weight of Original Seismic Cabinet Sample Configuration Carrier Grade 75
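As a companion to the weight worksheets above, the short sketch below totals hypothetical enclosure weights for one seismic rack and compares the payload with the 544.3 kg (1200 lb) maximum payload of the 36U seismic cabinet. The enclosure list and the individual weights are placeholders, not figures taken from this guide.

# Illustrative payload check for one seismic rack.
# Weights are placeholders; use the values from the weight worksheet.
MAX_PAYLOAD_LB = 1200.0   # 544.3 kg maximum payload for the 36U seismic cabinet

def rack_payload(enclosures: list[tuple[str, int, float]]) -> float:
    """enclosures: (name, quantity, weight in lb). Returns payload in lb."""
    return sum(qty * lb for _, qty, lb in enclosures)

if __name__ == "__main__":
    rack = [
        ("c7000 CG enclosure (example)",  1, 300.0),  # placeholder weight
        ("CLIM CG (example)",             6, 60.0),   # placeholder weight
        ("SAS disk enclosure (example)",  2, 70.0),   # placeholder weight
        ("Breaker panel (example)",       1, 25.0),   # placeholder weight
    ]
    payload = rack_payload(rack)
    print(f"Payload: {payload:.0f} lb of {MAX_PAYLOAD_LB:.0f} lb allowed")
    if payload > MAX_PAYLOAD_LB:
        print("WARNING: payload exceeds the rated maximum for this cabinet.")
    # Add the rack weight itself (and packaging, while moving) for floor loading.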

10 System Configuration Guidelines NB50000c-cg and NB54000c-cg

The NB50000c-cg and NB54000c-cg are configured similarly to the NB50000c and NB54000c. See System Configuration Guidelines NB50000c and NB54000c (page 108) for details. Differences for the carrier grade version include:

M8390-2CG or M839-24CG SAS disks
c7000 and c700 CG enclosures
CLIM Ethernet CG patch panel
5344-SE DAT tape drive
Carrier grade versions of IP, Telco, and Storage CLIMs
Carrier grade maintenance switch
No IOAM enclosures
No FCDM enclosures

177 A Maintenance and Support Connectivity Local monitoring and maintenance of the NonStop BladeSystem occurs over the dedicated service LAN. The dedicated service LAN provides connectivity between the system console and the maintenance infrastructure in the system hardware. Remote support is provided by OSM, which runs on the system console and communicates over the HP Instant Support Enterprise Edition (ISEE) infrastructure or an alternative remote access solution. NOTE: HP Insight Remote Support Advanced is now qualified for HP Integrity NonStop BladeSystems. Insight Remote Support Advanced is the go-forward remote support solution for NonStop systems, replacing the OSM Notification Director in both modem-based and HP Instant Support Enterprise Edition (ISEE) remote support solutions. For more information, please refer to Insight Remote Support Advanced for NonStop in the external Support and Services collection of NTL. Only components specified by HP can be connected to the dedicated LAN. No other access to the LAN is permitted. The dedicated service LAN uses a ProCurve Ethernet switch for connectivity between the c7000 enclosure, CLIMs, IOAM enclosures, and the system console. A maximum of eight NonStop systems can be connected to the dedicated service LAN. An important part of the system maintenance architecture, the system console is a Windows Server purchased from HP to run maintenance and diagnostic software for NonStop BladeSystems. Through the system console, you can: Monitor system health and perform maintenance operations using the HP NonStop Open System Management (OSM) interface View manuals and service procedures using the DVD that comes with your new server or RVU Run HP Tandem Advanced Command Language (TACL) sessions using terminal-emulation software Install and manage system software using the Distributed Systems Management/Software Configuration Manager (DSM/SCM) Make remote requests to and receive responses from a system using remote operation software Dedicated Service LAN A NonStop BladeSystem requires a dedicated LAN for system maintenance through OSM. Only components specified by HP can be connected to a dedicated LAN. No other access to the LAN is permitted. Fault-Tolerant LAN Configuration HP recommends that you use a fault-tolerant LAN configuration. A fault-tolerant configuration includes these connections to two maintenance switches and two system consoles as shown in example Example of a Fault-Tolerant LAN Configuration With Two Maintenance Switches (page 78): Two system consoles, one to each maintenance switch One connection from one Onboard Administrator (OA) in the c7000 enclosure to one maintenance switch, and another connection from the other Onboard Administrator to the second maintenance switch One connection from one Interconnect Ethernet switch in the c7000 enclosure to one maintenance switch, and another connection from the other Interconnect Ethernet switch to the second maintenance switch One connection from the maintenance switch to the other maintenance switch. Dedicated Service LAN 77

For every CLIM pair, connect the ilo and eth0 ports of the primary CLIM to one maintenance switch, and the ilo and eth0 ports of the backup CLIM to the second maintenance switch:
For IP or Telco CLIMs, the primary and backup CLIMs are defined based on the CLIM-to-CLIM failover configuration.
For Storage CLIMs, the primary and backup CLIMs are defined based on the disk path configuration.

NOTE: For more information about CLIM-to-CLIM failover, refer to the Cluster I/O Protocols (CIP) Configuration and Management Manual.

One of the two IOAM enclosure ServerNet switch boards to each maintenance switch (optional)
If CLIMs are used to configure the maintenance LAN, connect the CLIM that configures $ZTCP0 to one maintenance switch, and connect the other CLIM that configures $ZTCP1 to the second maintenance switch.
If G4SAs are used to configure the maintenance LAN, connect the G4SA that configures $ZTCP0 to one maintenance switch, and connect the other G4SA that configures $ZTCP1 to the second maintenance switch.

Figure 48 Example of a Fault-Tolerant LAN Configuration With Two Maintenance Switches
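Once the fault-tolerant LAN above is cabled, one simple cross-check is that both the ilo and eth0 addresses of each CLIM in a pair answer on the dedicated service LAN, with one CLIM of the pair on each maintenance switch. The sketch below just shells out to ping on a Linux or Unix workstation; the addresses are placeholders for your own IP plan, and the script is not an HP-supplied tool.

# Illustrative reachability check for a fault-tolerant service LAN layout.
# Addresses are placeholders; substitute the ones from your IP address plan.
import subprocess

def reachable(host: str) -> bool:
    """Return True if a single ping succeeds (Linux/Unix 'ping -c 1 -W 2')."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

CLIM_PAIRS = {
    "storage pair 1": {"primary": ["192.168.36.10", "192.168.36.11"],   # ilo, eth0 (placeholders)
                       "backup":  ["192.168.36.12", "192.168.36.13"]},  # ilo, eth0 (placeholders)
}

if __name__ == "__main__":
    for pair, roles in CLIM_PAIRS.items():
        for role, hosts in roles.items():
            missing = [h for h in hosts if not reachable(h)]
            status = "ok" if not missing else "unreachable: " + ", ".join(missing)
            print(f"{pair} {role}: {status}")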

DHCP, TFTP, and DNS Services

DHCP, TFTP, and DNS services are required for NonStop BladeSystems. As of J06.06 for AC-powered systems and J06.07 for DC-powered systems, these services can reside either on a pair of system consoles or on a pair of CLuster I/O Modules (CLIMs). By default, HP ships these services on:

NonStop system consoles for AC-powered systems
CLIMs for DC-powered systems

You can move these services from CLIMs to the NonStop system console or from the system console to CLIMs. Procedures for moving these services are located in the Service Information section of the Support and Service collection of NTL. For details, see:

Changing the DHCP, DNS, or BOOTP Server from CLIMs to System Consoles
Changing the DHCP, DNS, or BOOTP Server from System Consoles to CLIMs

CAUTION: You must have two and only two sources of these services in the same dedicated service LAN. If these services are installed on any other sources, they must be disabled. To determine the location of these services, see Locating and Troubleshooting DHCP, TFTP, and DNS Services on the NonStop Dedicated Service LAN. You cannot have these services divided between a CLIM and a system console. This mixed configuration is not supported.

IP Addresses

NonStop BladeSystems require Internet protocol (IP) addresses for these components that are connected to the dedicated service LAN:

c7000 enclosure ServerNet switches
c7000 Onboard Administrators (OAs)
System alarm panel (carrier grade NonStop BladeSystems only)
IOAM enclosure ServerNet switch boards
Server blade ilos
Maintenance switches
Intelligent PDUs (ipdus) (used in Intelligent racks only)
CLIM Maintenance Interfaces (eth0)
System consoles
OSM Service Connection
G4SAs (if used to implement the dedicated service LAN)
UPS (optional)

NOTE: Factory-default IP addresses for G4SAs are in the LAN Configuration and Management Manual. IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and Management Manual.

These components have default IP addresses that are preconfigured at the factory. You can change these preconfigured IP addresses to addresses appropriate for your LAN environment:

Component Primary system console (rack-mounted or stand-alone) Backup system console (rack-mounted only) Location N/A N/A Default IP Address

180 Component Maintenance switch (First switch) Maintenance switch (Additional switches) Maintenance switch (Additional switches) ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu ipdu Location N/A N/A N/A Intelligent Rack, lower U, front mounting rail Intelligent Rack, lower U, rear mounting rail Intelligent Rack, upper U, front mounting rail Intelligent Rack, upper U, rear mounting rail Intelligent Rack 2, lower U, front mounting rail Intelligent Rack 2, lower U, rear mounting rail Intelligent Rack 2, upper U, front mounting rail Intelligent Rack 2, upper U, rear mounting rail Intelligent Rack 3, lower U, front mounting rail Intelligent Rack 3, lower U, rear mounting rail Intelligent Rack 3, upper U, front mounting rail Intelligent Rack 3, upper U, rear mounting rail Intelligent Rack 4, lower U, front mounting rail Intelligent Rack 4, lower U, rear mounting rail Intelligent Rack 4, upper U, front mounting rail Intelligent Rack 4, upper U, rear mounting rail Intelligent Rack 5, lower U, front mounting rail Intelligent Rack 5, lower U, rear mounting rail Intelligent Rack 5, upper U, front mounting rail Intelligent Rack 5, upper U, rear mounting rail Intelligent Rack 6, lower U, front mounting rail Default IP Address Maintenance and Support Connectivity

181 Component ipdu ipdu ipdu ipdu ipdu ipdu ipdu System alarm panel (on carrier grade NonStop BladeSystems only) Onboard Administrators in c7000 enclosure CLIM ilos Server Blade ilos ServerNet switches in c7000 enclosure (OSM Low-Level Link) Interconnect Ethernet switches CLIM Maintenance Interfaces (eth0) Location Intelligent Rack 6, lower U, rear mounting rail Intelligent Rack 6, upper U, front mounting rail Intelligent Rack 6, upper U, rear mounting rail Intelligent Rack 7, lower U, front mounting rail Intelligent Rack 7, lower U, rear mounting rail Intelligent Rack 7, upper U, front mounting rail Intelligent Rack 7, upper U, rear mounting rail System alarm panel in first enclosure System alarm panel in additional enclosures CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at Default IP Address Assigned by DHCP server on the NonStop system console Assigned by DHCP server on the NonStop system console Assigned through Enclosure Bay IP Addressing (EBIPA) Assigned through Enclosure Bay IP Addressing (EBIPA) Assigned through Enclosure Bay IP Addressing (EBIPA) Dedicated Service LAN 8

182 Component IOAM enclosure (ServerNet switch boards) Location CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at CLIM at Default IP Address Maintenance and Support Connectivity

183 Component UPS (rack-mounted only) Onboard Administrator EBIPA settings: NonStop system console DHCP server settings: Location First UPS Additional UPSs Additional UPSs First enclosure device bay subnet mask First enclosure device bay IP addresses First enclosure interconnect bay subnet mask First enclosure interconnect bay IP addresses Second enclosure device bay subnet mask Second enclosure device bay IP addresses Second enclosure interconnect bay subnet mask Second enclosure interconnect bay IP addresses Primary system console starting IP address Primary system console ending IP address Primary system console subnet mask Backup system console starting IP address Backup system console ending IP address Backup system console subnet mask Default IP Address through through through through through through TCP/IP processes for OSM Service Connection: $ZTCP0 $ZTCP subnet mask subnet mask Dedicated Service LAN 83

Ethernet Cables

Ethernet connections for a dedicated service LAN must use Category 5e or Category 6 unshielded twisted-pair (UTP) cables. For BladeSystem supported cables, refer to Cables for BladeSystems (page 88).

SWAN Concentrator Restrictions

Isolate any ServerNet wide area networks (SWANs) on the system. The system must be equipped with at least two LANs: one LAN for SWAN concentrators and one for the dedicated service LAN. Most SWAN concentrators are configured redundantly using two or more subnets. Those subnets also must be isolated from the dedicated service LAN. Do not connect SWANs on a subnet containing a DHCP server.

Dedicated Service LAN Links Using G4SAs

You can implement system-up service LAN connectivity using G4SAs or IP CLIMs. The values in this table show the identification for G4SAs in slot 5 of both modules of an IOAM enclosure and connected to the maintenance switch:

GMS for G4SA Location in IOAME G4SA PIF G4SA PIF TCP/IP Stack IP Configuration G025.0.A L02R $ZTCP0 IP: Subnet: %hffff0000 Hostname: osmlanx G035.0.A L03R $ZTCP1 IP: Subnet: %hffff0000 Hostname: osmlany

NOTE: For a fault-tolerant dedicated service LAN, two G4SAs are required, with each G4SA connected to a separate maintenance switch. These G4SAs can reside in modules 2 and 3 of the same IOAM enclosure or in module 2 of one IOAM enclosure and module 3 of a second IOAM enclosure. When the G4SA provides connection to the dedicated service LAN, use the slower 10/100 Mbps PIF rather than one of the high-speed 1000 Mbps Ethernet ports of PIF C or D.

Dedicated Service LAN Links Using IP CLIMs

You can implement up-system service LAN connectivity using IP CLIMs, if the system has at least two IP CLIMs. The values in this table show the identification for the CLIMs in a NonStop BladeSystem and connected to the maintenance switch. In this table, a CLIM named N00258 is connected to the first fiber and eighth port of the ServerNet switch in Group 100, module 2, interconnect bay 5 of a c7000 enclosure:

GMS for IP CLIM Location TCP/IP Stack $ZTCP0 $ZTCP1 IP Configuration IP: Subnet: %hffff0000 Hostname: osmlanx IP: Subnet:

185 GMS for IP CLIM Location TCP/IP Stack IP Configuration %hffff0000 Hostname: osmlany NOTE: For a fault-tolerant dedicated service LAN, two IP CLIMs are required, with each IP CLIM connected to a separate maintenance switch. Dedicated Service LAN Links Using Telco CLIMs You can implement up-system service LAN connectivity using Telco CLIMs, if the system has at least two Telco CLIMs. The values in this table show the identification for the Telco CLIMs in a NonStop BladeSystem and connected to the maintenance switch. For example, in this table a Telco CLIM named O00253 is connected to the first fiber and third port of the ServerNet switch in Group 00, module 2, interconnect bay 5 of a c7000 enclosure: GMS for Telco CLIM Location TCP/IP Stack $ZTCP0 $ZTCP IP Configuration IP: Subnet: %hffff00 Hostname: osmlanx IP: Subnet: %hffff00 Hostname: osmlany NOTE: For a fault-tolerant dedicated service LAN, two Telco CLIMs are required, with each CLIM connected to a separate maintenance switch. Initial Configuration for a Dedicated Service LAN New systems are shipped with an initial set of IP addresses configured. For a listing of these initial IP addresses, refer to IP Addresses (page 79). Factory-default IP addresses for the G4SAs are in the LAN Configuration and Management Manual. IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and Management Manual. HP recommends that you change these preconfigured IP addresses to addresses appropriate for your LAN environment. You must change the preconfigured IP addresses on: A backup system console if you want to connect it to a dedicated service LAN that already includes a primary system console or other system console Any system console if you want to connect it to a dedicated service LAN that already includes a primary system console Keep track of all the IP addresses in your system so that no IP address is assigned twice. System Consoles NOTE: Because HP requires two sources of Halted State Services (HSS) files for NonStop BladeSystems, you must have two system consoles (primary and backup) configured on the dedicated service LAN. System Consoles 85

186 New system consoles are preconfigured with the required HP and third-party software. When upgrading to the latest RVU, you can install software upgrades from the HP NonStop System Console Installer DVD. Some system console hardware, including the Windows Server system unit, monitor, and keyboard, can be mounted in the cabinet. Other Windows Servers are installed outside the cabinet and require separate provisions or furniture to hold the Windows Server's hardware. System consoles communicate with NonStop BladeSystems over a dedicated service local area network (LAN) or a secure operations LAN. A dedicated service LAN is required for use of OSM Low-Level Link and Notification Director functionality, which includes configuring primary and backup dial-out points (referred to as the primary and backup system consoles, respectively). HP requires that you also configure the backup dedicated service LAN with a backup system console. NOTE: The NonStop system console must be configured with some ports open. For more information, see the NonStop System Console Installer Guide. System Console Configurations These system console configurations are possible: Primary and Backup System Consoles Managing One System (page 86) Primary and Backup System Consoles Managing Multiple Systems (page 87) Primary and Backup System Consoles Managing One System This configuration is recommended for a NonStop BladeSystem on a dedicated service LAN. For fault-tolerant redundancy, it includes a second maintenance switch, backup system console, and second modem (if a modem-based remote solution is used). The maintenance switches provide a dedicated LAN in which all systems use the same subnet. Example of a Fault-Tolerant LAN Configuration With Two Maintenance Switches (page 78) shows a fault-tolerant configuration without modems. NOTE: A subnet is a network division within the TCP/IP model. Within a given network, each subnet is treated as a separate network. Outside that network, the subnets appear as part of a single network. The terms subnet and subnetwork are used interchangeably. If a remote maintenance LAN connection is required, use the second network interface card (NIC) in the NonStop system console to connect to the operations LAN, and access the other devices in the maintenance LAN using Remote Desktop via the console. Because this configuration uses only one subnet, you must: Enable Spanning Tree Protocol (STP) in switches or routers that are part of the operations LAN. NOTE: Do not perform the next two bulleted items if your backup system console is shipped with a new NonStop BladeSystem. In this case, HP has already configured these items for you. Change the preconfigured DHCP configuration of the backup system console before you add it to the LAN. Change the preconfigured IP address of the backup system console before you add it to the LAN. CAUTION: Networks with more than one path between any two systems can cause loops that result in message duplication and broadcast storms that can bring down the network. If a second connection is used, refer to the documentation for the ProCurve maintenance switch and enable STP in the maintenance switches. STP ensures only one active path at any given moment between two systems on the network. In networks with two or more physical paths between two systems, STP ensures only one active path between them and blocks all other redundant paths. 86 Maintenance and Support Connectivity

187 Primary and Backup System Consoles Managing Multiple Systems If you want to manage more than one system from a fault-tolerant pair of system consoles, you can daisy chain the maintenance switches together. This configuration requires an IP address scheme to support it. Ask your HP service provider to refer to the Nonstop Dedicated Service LAN Installation and Configuration Guide for instructions on connecting multiple systems to the dedicated service LAN. System Consoles 87
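Two practical details from the dedicated service LAN material above lend themselves to a small helper: the subnet masks in the tables use the NonStop %h hexadecimal notation, and the "Initial Configuration for a Dedicated Service LAN" guidance asks you to keep track of every address so that none is assigned twice. The sketch below is illustrative; the component names and addresses are placeholders, not factory defaults.

# Illustrative bookkeeping for a dedicated service LAN address plan.
# The %h prefix is the hexadecimal notation used in the tables above;
# the addresses themselves are placeholders, not factory defaults.
import ipaddress

def mask_from_tandem_hex(mask: str) -> str:
    """Convert a %h-style mask such as '%hffff0000' to dotted decimal."""
    value = int(mask.lstrip("%hH"), 16)
    return str(ipaddress.IPv4Address(value))

def find_duplicates(plan: dict[str, str]) -> list[str]:
    """Return IP addresses that appear more than once in {component: ip}."""
    seen, dups = set(), []
    for component, ip in plan.items():
        if ip in seen:
            dups.append(ip)
        seen.add(ip)
    return dups

if __name__ == "__main__":
    print(mask_from_tandem_hex("%hffff0000"))          # -> 255.255.0.0
    plan = {"maintenance switch 1": "192.168.36.1",        # placeholder
            "maintenance switch 2": "192.168.36.2",        # placeholder
            "CLIM eth0 (pair 1 primary)": "192.168.36.2"}  # deliberate clash
    print("duplicates:", find_duplicates(plan))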

188 B Cables for BladeSystems Cable Types, Connectors, Length Restrictions, and Product IDs Although a considerable cable length can exist between the modular enclosures in the system, HP recommends that cable length between each of the enclosures be as short as possible. TIP: For cable length restrictions for all NonStop BladeSystem interconnections, refer to Maximum Supported Cable Length for All Connections (page 9). The following table lists the available cables and their lengths: Connection From... Cable Type Connectors Length (meters) Product ID c7000 enclosure to MMF MTP-MTP.5 M c7000 enclosure (interconnection) 5 M M (max. length) M M M M M IB port on IB CLIM to customer-supplied IB switch Copper or Fiber QSFP This is a customer-supplied cable. ETH port on Storage CLIM with encryption to customer-supplied switch (only supported on Storage CLIMs with encryption.) CAT 6 UTP RJ-45 to RJ-45 This is a customer-supplied cable. c7000 ServerNet switch to c7000 ServerNet switch (cross-link connection) MMF LC-LC.24 N.A. c7000 enclosure to CLIM or IOAM MMF MTP-4LC 2 0 M M M M M M M (max. length) M c7000 ServerNet MMF MTP-SC 7 M switch to S-series I/O enclosure 5 M M Cables for BladeSystems

189 Connection From... Cable Type Connectors Length (meters) Product ID 25 (max. length) customer-supplied cable Maintenance LAN interconnect CAT 5e UTP RJ-45 to RJ M M M (max. length) M Maintenance LAN interconnect CAT 6 UTP RJ-45 to RJ M M M M M M M M (max. length) M DL385G5 Storage Copper SFF-8470 to SFF M CLIM to M SAS disk enclosure 4 M (max. length) M DL385G5 Storage CLIM CG to M8390-2CG SAS disk enclosure Copper SFF-8470 to SFF (max. length) M M DL380G6 CLIM Copper SFF-8470 to SFF M Storage CLIM CG to CG SAS tape 4 (max. length) M DL380G5 CLIM Storage CLIM CG to CG SAS tape Copper SFF-8470 to SFF M DL380G6 Storage Copper SFF-8088 to SFF M CLIM to M SAS disk enclosure 4 M (max. length) M DL380G6 Storage CLIM CG to M839-24CG SAS disk enclosure Copper SFF-8088 to SFF (max. length) M M M DL380G6 Storage CLIM CG to M8390-2CG SAS disk enclosure Copper SFF-8470 to SFF (max. length) M M M DL380G6 Storage CLIM to CG SAS tape Copper SFF-8470 to SFF M M (max. length) M Cable Types, Connectors, Length Restrictions, and Product IDs 89

190 Connection From... Cable Type Connectors Length (meters) Product ID M SAS disk enclosure to M SAS disk enclosure (daisy-chain) Copper SFF-8088 to SFF (max. length) M M M M CG SAS disk enclosure to M CG SAS disk enclosure (daisy-chain) Copper SFF-8470 to SFF (max. length) M M M Storage CLIM FC HBA to: ESS FC switch FC tape MMF LC-LC M M M M M M M M M M M M (max. length) M FCSA in IOAM Enclosure to: ESS FC switch MMF LC-LC 2 3 M M M M M M M M M M M M (max. length) The M SAS disk enclosure is also known as the MSA70 SAS disk enclosure. M The M SAS disk enclosure is also known as the D2700 SAS disk enclosure. 90 Cables for BladeSystems

191 Maximum Supported Cable Length for All Connections Table 20 Cable Length Restrictions Interconnection Type: Cable Type Connectors Max. Length From c7000 enclosure to c7000 enclosure MMF MTP-MTP 25m Interconnection Type: From BladeSystem ServerNet Switch to: Cable Type Connectors Max. Length CLIM MMF MTP-4LC 25m IOAM MMF MTP-4LC 25m IOMF2 (S-series I/O Enclosure) MMF MTP-SC 30m ServerNet 6780 Cluster Switch MMF LC-LC 25m ServerNet 6780 Cluster Switch SMF LC-LC 00m ServerNet 6770 Cluster Switch SMF LC-SC 80m IMPORTANT: Before connecting to ServerNet 6770/6780 Cluster switches, refer to the ServerNet Cluster Supplement for NonStop BladeSystems for detailed connection instructions. BladeCluster Interconnections: These interconnections follow BladeCluster cable length rules and restrictions. For details, refer to the BladeCluster Solution Manual. Interconnection Type: Storage Subsystem Interconnections: Cable Type Connectors Max. Length DL385G5 Storage CLIM to M SAS disk enclosure Copper SFF-8470 to SFF m DL380G6 Storage CLIM to M SAS disk enclosure Copper SFF-8088 to SFF m M SAS disk enclosure to M SAS disk enclosure (daisy-chain) Copper SFF-8088 to SFF m DL385G5 Storage CLIM CG to M8390-2CG SAS disk enclosure Copper SFF-8470 to SFF m DL385G5 Storage CLIM CG to CG SAS tape Copper SFF-8470 to SFF m DL380G6 Storage CLIM CG to M839-24CG Copper SFF-8088 to SFF m FCSA in IOAM Enclosure to: ESS FC switch MMF LC-LC 250m Storage CLIM FC HBA to: ESS FC switch FC tape MMF OM2 LC-LC Transfer Rate 2 Gbps Max. Length 250m 4 Gbps 50m* 8 Gbps 50m* *Customer-supplied cable Maximum Supported Cable Length for All Connections 9

192 Table 20 Cable Length Restrictions (continued) Interconnection Type: Cable Type Connectors Max. Length Storage Subsystem Interconnections (continued): Storage CLIM FC HBA to: ESS FC switch FC tape MMF OM3 LC-LC Transfer Rate 2 Gbps Max. Length 500m* 4 Gbps 380m* 8 Gbps 50m* *Customer-supplied cable 92 Cables for BladeSystems

C Operations and Management Using OSM Applications

NOTE: HP Insight Remote Support Advanced is now qualified for Integrity NonStop BladeSystems. Insight Remote Support Advanced is the go-forward remote support solution for NonStop systems, replacing the OSM Notification Director in both modem-based and HP Instant Support Enterprise Edition (ISEE) remote support solutions. For more information, please refer to Insight Remote Support Advanced for NonStop in the external Support and Services collection of NTL.

OSM client-based components are installed on new system console shipments and also delivered by an OSM installer on the HP NonStop System Console (NSC) Installer DVD. The NSC DVD also delivers all other client software required for managing and servicing NonStop servers. For installation instructions, refer to the NonStop System Console Installer Guide.

OSM server-based components are incorporated in a single OSM server-based SPR, T0682 (OSM Service Connection Suite), that is installed on NonStop BladeSystems running the HP NonStop operating system. For information on how to install, configure, and start OSM server-based processes and components, refer to the OSM Migration and Configuration Guide.

The OSM components are:

T0632, OSM Notification Director: Dial-in and dial-out services.
T0633, OSM Low-Level Link: Provides down-system support; provides support to configure IP, Storage, and Telco CLIMs before they are operational in a NonStop BladeSystem; provides CLIM software updates.
T0634, OSM Console Tools: Provides Start menu shortcuts and default home pages for easy access to the OSM Service Connection and OSM Event Viewer (browser-based OSM applications that are not installed on the system console).
OSM Certificate Tool: Establishes certificate-based trust between the OSM server and the Onboard Administrators in a c7000 enclosure.
OSM System Inventory Tool: Retrieves hardware inventory from multiple NonStop BladeSystems.
Terminal Emulator File Converter: Converts existing OSM Service Connection-related OutsideView (.cps) session files to MR-WIN6530 (.653) session files.

The OSM Notification Director is not required if Insight Remote Support Advanced is used as the remote support solution. For more information, refer to Insight Remote Support Advanced for NonStop in the external Support and Service collection of NTL.

System-Down OSM Low-Level Link

In NonStop BladeSystems, the maintenance entity (ME) in the c7000 ServerNet switch or IOAM enclosures provides dedicated service LAN services via the OSM Low-Level Link for OS coldload, system management, and hardware configuration when hardware is powered up but the OS is not running.

AC Power Monitoring

NonStop BladeSystems require one of the following to support system operation through power transients or an orderly shutdown of I/O operations and processors during a power failure:

Either of the optional, HP-supported single-phase UPS models, the R5000 UPS or the R5500 XR UPS (each with one to two ERMs for additional battery power), or the HP-supported three-phase UPS model R12000/3 (with one to four ERMs for additional battery power)

A user-supplied UPS installed in each modular cabinet

A user-supplied site UPS

If any of the HP-supported UPS models above is installed, it is connected to the system's dedicated service LAN via the maintenance switch, where OSM monitors the power state. When properly configured, OSM power-failure support for NonStop BladeSystems includes:

Detection and notification of power-failure situations

Monitoring the outage against a configurable ride-through time in order to avoid disruption if the power failure is short in duration

If the ride-through time expires before the power returns, initiating a controlled shutdown of I/O operations and processors. (This shutdown does not include stopping or powering off the system, nor does it stop TMF or other applications. Customers are encouraged to execute scripts to shut down database activity before the processors are shut down.)

You must perform these actions in the OSM Service Connection:

Perform the Configure a Power Source as UPS action to configure the power rail (either A or B) connected to the UPS.

Perform the Configure a Power Source as AC action to configure the power rail (either B or A) connected to AC power.

Perform the Verify Power Fail Configuration action, located under the System object, to verify that power failure support has been properly configured and is in place for the system.

NOTE: For NonStop BladeSystems, these actions are located under the Enclosure object.

How OSM Power Failure Support Works

When properly configured, OSM detects that one power rail is running on UPS power while the other power rail has lost power, and it logs an event indicating the beginning of the configured ride-through time period. OSM monitors whether AC power returns before the ride-through period ends, and:

If AC power is restored before the ride-through period ends, the ride-through countdown terminates and OSM takes no further steps to prepare for an outage.

If AC power is not restored before the ride-through period ends, OSM broadcasts a PFAIL_SHOUT message to all processors (the processor running OSM being the last one in the queue) to shut down the system's ServerNet routers and processors in a fashion designed to allow disk writes that are in transit through controllers and disks to complete.

NOTE: Do not turn off the UPS as soon as the NonStop OS is down. The UPS continues to supply power until that supply is exhausted, and that time needs to be long enough for disk controllers and disks to complete their disk writes.

If a user-supplied rack-mounted UPS or a site UPS is used rather than the HP-supported UPS models mentioned above, the system is not notified of the power outage. The user is responsible for detecting power transients and outages and for developing the appropriate actions, which might include a ride-through time based on the capacity of the site UPS and the power demands made on that UPS.

The HP-supported UPS models and ERMs installed in modular cabinets do not support any devices that are external to the cabinets.
External devices can include tape drives, external disk drives, LAN routers, and SWAN concentrators. Any external peripheral devices that do not have UPS support will fail immediately at the onset of a power failure. Plan for UPS support of any external

peripheral devices that must remain operational as system resources. This support can come from a site UPS or from individual UPS units as necessary.

NOTE: OSM does not make dynamic computations based on the remaining capacity of the rack-mounted UPS. The ride-through time is statically configured in SCF for OSM use. For example, if power comes back before the initiated shutdown but then fails again shortly afterward, the UPS has been partially depleted and cannot last for the full ride-through time until it is fully recharged. OSM does not account for multiple power failures that occur within the recharge time of the rack-mounted UPS.

This information relates to handling power failures:

To set the ride-through time using SCF, refer to the SCF Reference Manual for the Kernel Subsystem.

For the TACL SETTIME command, refer to the TACL Reference Manual.

To set system time programmatically, refer to the Guardian Procedure Calls Reference Manual.

Considerations for Ride-Through Time Configuration

The goal in configuring the ride-through time is to allow the maximum time for power to be restored while still allowing sufficient time for completion of disk writes for I/Os that were passed to the disk controllers before the ServerNet was shut down. Allowing enough time for these tasks to complete allows for a relatively clean shutdown from which TMF recovery is less time-consuming and less difficult than if all power failed and disk writes did not complete.

The maximum ride-through time for each system will vary, depending on system load, configuration, and the UPS capability. A rack-mounted UPS can often supply five minutes of power with some extra capacity for contingencies, provided the batteries are new and fully charged. This five-minute figure is an estimate used for illustration in this discussion, not a guarantee for any specific configuration. You must ensure that the battery capacity for a fully powered system allows for at least two minutes after OSM initiates the orderly shutdown so that the disk cache can be flushed to nonvolatile media. Assuming the UPS has five minutes of power capacity, you would set the ride-through time to three minutes (the UPS capacity of five minutes minus two minutes for OSM); see the sketch at the end of this section.

Runtime can be extended by adding ERMs to the configuration, and the runtime of the UPS alone can extend beyond five minutes depending on power consumption. To extend the ride-through time beyond three minutes, use this manual and your UPS documentation to calculate the expected power consumption, measure the site power consumption, factor in the ERM, if present, and make adjustments. Also consider air conditioning failures during a real power failure, because increased ambient temperature typically causes the fans to run faster, which causes the system to draw more power. By allowing for the maximum power consumption and applying those figures to the UPS calculations provided in the UPS manuals, you can increase the ride-through time beyond three minutes.

Considerations for Site UPS Configurations

OSM cannot monitor a site UPS. The SCF-configured ride-through time on a NonStop system has no effect if only a site UPS is used. With a site UPS instead of a rack-mounted UPS, the customer must perform a manual system shutdown if the backup generators cannot be started. It is also possible to have a rack-mounted UPS in addition to a site UPS.
Because the site UPS can supply a whole computer room or part of that room, including the required cooling, from the perspective of OSM the site UPS power can supply the group 100 AC power. The group 100 UPS power configured in OSM, in this case, would still come from a rack-mounted UPS (one of the supported models).
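The ride-through behavior described above reduces to a simple timing rule: the configured ride-through time plus the time reserved for the controlled shutdown and cache flush must fit within the UPS runtime. The Python sketch below models that rule using the five-minute UPS and two-minute shutdown figures from the discussion above. It is an illustrative model of the described behavior, not OSM code, and the function and variable names are invented for the example.

# Illustrative model of the documented ride-through rule (not OSM code).
UPS_RUNTIME_MIN = 5.0        # estimated runtime of the rack-mounted UPS
SHUTDOWN_RESERVE_MIN = 2.0   # time reserved for the orderly shutdown and cache flush

def max_ride_through(ups_runtime_min, shutdown_reserve_min):
    """Longest ride-through time that still leaves the shutdown reserve."""
    return max(ups_runtime_min - shutdown_reserve_min, 0.0)

def action_on_outage(outage_duration_min, ride_through_min):
    """What OSM is described as doing for an outage of a given length."""
    if outage_duration_min < ride_through_min:
        return "power restored in time: resume normal operation"
    return "ride-through expired: begin orderly shutdown of I/O and processors"

print(max_ride_through(UPS_RUNTIME_MIN, SHUTDOWN_RESERVE_MIN))   # 3.0 minutes
print(action_on_outage(1.5, 3.0))
print(action_on_outage(4.0, 3.0))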

AC Power-Fail States

These states occur when a power failure occurs and an optional HP model R5000 UPS, R5500 XR UPS, or HP R12000/3 UPS is installed in each cabinet within the system:

NSK_RUNNING: The NonStop operating system is running normally.

RIDE_THRU: OSM has detected a power failure and begins timing the outage. AC power returning terminates RIDE_THRU and puts the operating system back into the NSK_RUNNING state. At the end of the predetermined RIDE_THRU time, if AC has not returned, OSM executes a PFAIL_SHOUT and initiates an orderly shutdown of I/O operations and resources.

HALTED: Normal halt condition. Halted processors do not participate in power-fail handling. A normal power-on also puts the processors into the HALTED state.

POWER_OFF: Loss of optic power from the NonStop Server Blade occurs, or the UPS batteries supplying the server blade are completely depleted. When power returns, the system is essentially in a cold-boot condition.

197 D Default Startup Characteristics NB50000c and NB54000c Each NonStop BladeSystem ships with these default startup characteristics: NOTE: The configurations documented here are typical for most sites. Your system load paths might be different, depending upon how your system is configured. To determine the system disk configuration to use in the OSM Low-Level Link, refer to the system attributes in the OSM Service Connection. You can select a system disk configuration to use for the system load from the Configuration drop-down menu in the System Load dialog box in the OSM Low-Level Link. $SYSTEM disks residing in either SAS disk enclosures or FCDM enclosures: SAS Disk Enclosures Systems with only two to three Storage CLIMs and two SAS disk enclosures with the disks in these locations: CLIM X Location SAS Disk Enclosure Path Group Module Slot Port Fiber Enclosure Bay Primary Backup Mirror Mirror-Backup Systems with at least four Storage CLIMs and two SAS disk enclosures with the disks in these locations: CLIM X Location SAS Disk Enclosure Path Group Module Slot Port Fiber Enclosure Bay Primary Backup Mirror Mirror-Backup FCDM Enclosures Systems with one IOAM enclosure, two FCDMs, and two FCSAs with the disks in these locations: IOAM FCSA Fibre Channel Disk Module Path Group Module Slot SAC Shelf Bay Primary 0 2 Backup

198 Mirror Mirror-Backup Systems with two IOAM enclosures, two FCDMs, and two FCSAs with the disks in these locations: IOAM FCSA Fibre Channel Disk Module Path Group Module Slot SAC Shelf Bay Primary 0 2 Backup 3 Mirror 3 2 Mirror-Backup Systems with one IOAM enclosure, two FCDMs, and four FCSAs with the disks in these locations: IOAM FCSA Fibre Channel Disk Module Path Group Module Slot SAC Shelf Bay Primary 0 2 Backup 0 3 Mirror Mirror-Backup Systems with two IOAM enclosures, two FCDMs, and four FCSAs with the disks in these locations: IOAM FCSA Fibre Channel Disk Module Path Group Module Slot SAC Shelf Bay Primary 0 2 Backup 2 Mirror 3 2 Mirror-Backup Configured system load paths Enabled command interpreter input (CIIN) function If the automatic system load is not successful, additional paths for loading are available in the boot task. Using one load path, the system load task attempts to use another path and keeps trying until 98 Default Startup Characteristics NB50000c and NB54000c

all possible paths have been used or the system load is successful. These 16 paths are available for loading and are listed in the order of their use by the system load task (load path, description, source disk, destination processor, ServerNet fabric):

1. Primary, $SYSTEM-P, processor 0, fabric X
2. Primary, $SYSTEM-P, processor 0, fabric Y
3. Backup, $SYSTEM-P, processor 0, fabric X
4. Backup, $SYSTEM-P, processor 0, fabric Y
5. Mirror, $SYSTEM-M, processor 0, fabric X
6. Mirror, $SYSTEM-M, processor 0, fabric Y
7. Mirror-Backup, $SYSTEM-M, processor 0, fabric X
8. Mirror-Backup, $SYSTEM-M, processor 0, fabric Y
9. Primary, $SYSTEM-P, processor 1, fabric X
10. Primary, $SYSTEM-P, processor 1, fabric Y
11. Backup, $SYSTEM-P, processor 1, fabric X
12. Backup, $SYSTEM-P, processor 1, fabric Y
13. Mirror, $SYSTEM-M, processor 1, fabric X
14. Mirror, $SYSTEM-M, processor 1, fabric Y
15. Mirror-Backup, $SYSTEM-M, processor 1, fabric X
16. Mirror-Backup, $SYSTEM-M, processor 1, fabric Y

The command interpreter input file (CIIN) is automatically invoked after the first processor is loaded. The CIIN file shipped with new systems contains the TACL RELOAD * command, which loads the remaining processors.

For default configurations of the Fibre Channel ports, Fibre Channel disk modules, and load disks, refer to Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module (page 24). For default configurations of the HBA SAS ports, SAS disk enclosures, and load disks, refer to Configurations for Storage CLIM and SAS Disk Enclosures (page 4).
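The fallback order in the preceding list is systematic: the load task tries the primary, backup, mirror, and mirror-backup paths to processor 0 over the X and then the Y fabric, and then repeats the same sequence for processor 1. The short Python sketch below simply re-derives the 16 rows in that order (useful, for example, when scripting a review of load-path configuration); it is not a NonStop interface.

# Re-derive the 16 default load paths in the documented order of use.
SOURCE_DISKS = [
    ("Primary", "$SYSTEM-P"),
    ("Backup", "$SYSTEM-P"),
    ("Mirror", "$SYSTEM-M"),
    ("Mirror-Backup", "$SYSTEM-M"),
]

load_paths = []
for processor in (0, 1):                      # all paths to processor 0 first, then processor 1
    for description, disk in SOURCE_DISKS:
        for fabric in ("X", "Y"):             # X fabric before Y fabric
            load_paths.append((description, disk, processor, fabric))

for number, (description, disk, processor, fabric) in enumerate(load_paths, start=1):
    print(f"{number:2d}  {description:13s} {disk:10s} processor {processor}  fabric {fabric}")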

E Site Power Cables Carrier Grade

Site cables and lugs must be provided by the customer or by the installation provider for the customer. HP does not provide these items to the customer.

IMPORTANT: The power cables should be assembled by a certified electrician only.

Required Documentation

The information in this appendix assumes the customer or the customer's installation provider is already familiar with and has access to the National Fire Protection Association's (NFPA's) National Electrical Code Handbook, 2005 (NFPA 70), specifically either of these tables from the handbook, depending on the cable rating that is selected at the site (60°C-90°C versus 150°C-250°C jacketing):

Allowable Ampacities of Insulated Conductors Rated 0 Through 2000 Volts, 60°C Through 90°C, Not More Than Three Current-Carrying Conductors in Raceway, Cable, or Earth, Based on Ambient Temperature of 30°C (Table 310.16)

Allowable Ampacities of Insulated Conductors Rated 0 Through 2000 Volts, 150°C Through 250°C (302°F Through 482°F), Not More Than Three Current-Carrying Conductors in Raceway, Cable, or Earth, Based on Ambient Temperature of 40°C (104°F) (Table 310.18)

Requirements for Site Power or Ground Cables

The customer must ensure the site power cables meet the ampacity requirements for a NEBS 50°C environment.

CAUTION: Use Table 310.16 or Table 310.18 along with Table 310.15(B)(2)(a). See Required Documentation (page 200) to verify that the site power input cables meet the ampacity requirements for a NEBS 50°C environment.

To ensure the site power input cables you plan to use meet the ampacity requirements for a NEBS 50°C environment:

Depending on the site's cable rating requirements, use Table 310.16 or Table 310.18 (60°C-90°C or 150°C-250°C jacketing, respectively). See Required Documentation (page 200) to determine the correct cable size, based on the type and grade of cable you plan to use.

To reach the NEBS 50°C environment, apply the correction factor within each table for the 50°C ambient temperature range (Table 310.16: 46°C-50°C; Table 310.18: 41°C-50°C) to the ampacity values in the table to obtain the corrected maximum ampacity that is acceptable in the NEBS 50°C environment.

If more than three current-carrying conductors will be run vertically through the sides of the cabinet, or will lie in a site raceway for longer than 24 inches, apply the adjustment factor from Table 310.15(B)(2)(a) to the ampacity values from Table 310.16 or Table 310.18 at 50°C.
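The derating procedure above is multiplicative: start from the base ampacity in the NEC table for the conductor you plan to use, then apply the temperature correction factor for the 50°C ambient row and, if applicable, the adjustment factor for more than three current-carrying conductors. The Python sketch below shows the arithmetic only; the factor values in the example are placeholders, not values from this guide, and must be taken from Table 310.16/310.18 and Table 310.15(B)(2)(a) for your actual cable.

def corrected_ampacity(base_ampacity_a, temp_correction, conductor_adjustment=1.0):
    """Apply NEC-style correction/adjustment factors to a table ampacity.

    base_ampacity_a      -- ampacity from Table 310.16 or 310.18 for the conductor
    temp_correction      -- correction factor for the 50 C ambient row of that table
    conductor_adjustment -- adjustment factor from Table 310.15(B)(2)(a), if more
                            than three current-carrying conductors share a raceway
    """
    return base_ampacity_a * temp_correction * conductor_adjustment

# Placeholder values for illustration only -- look up the real factors for your cable.
example = corrected_ampacity(base_ampacity_a=55, temp_correction=0.82,
                             conductor_adjustment=0.8)
print(f"corrected ampacity: {example:.1f} A  (must meet or exceed the feed rating)")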

F Power Configurations for NonStop BladeSystems in G2 Racks

NonStop BladeSystem Power Distribution G2 Rack

Your NonStop BladeSystem in a G2 rack uses one of these power distribution types:

NonStop BladeSystem Single-Phase Power Distribution G2 Rack (page 201)

NonStop BladeSystem Three-Phase Power Distribution in a G2 Rack (page 231)

NonStop BladeSystem Single-Phase Power Distribution G2 Rack

CAUTION: There is an upgrade limitation with single-phase PDUs. Customers using a single-phase power distribution might have limited upgrade capabilities. The present single-phase rack configuration supports a fully loaded c7000 enclosure NonStop BladeSystem solution; however, future solutions might be limited in a single-phase configuration. For further information, check with your HP service provider.

These power configurations require careful attention to phase load balancing. For more information, refer to Phase Load Balancing (page 253).

North America/Japan (NA/JPN) and International (INTL) are the supported regions for single-phase NonStop BladeSystems. For these regions, there are three different versions of the rack-level PDU, depending on whether you are using modular or monitored PDUs. For c7000 single-phase power setup details, refer to the instructions for your PDU:

Single-Phase Power Setup, Monitored PDUs G2 Rack (page 202)

Single-Phase Power Setup in a G2 Rack, Modular PDU (page 219)

The NonStop BladeSystem's single-phase c7000 enclosure contains an AC Input Module that provides N + N redundant power distribution. This single-phase power module has six C19 power supply (PS) receptacles that provide direct AC power feeds to the c7000 enclosure:

One group of three c7000 power feeds connects to a rack PDU that is powered from the main power grid.

Another group of three power feeds connects to a rack PDU that is powered from either the backup power grid or from one of the single-phase UPSs installed in the rack.

These PDUs and the rack UPS are dedicated to the c7000. For the other components in the rack, two other PDUs are available (main and backup), and a second, optional rack UPS is available.

Single-Phase Power Setup, Monitored PDUs G2 Rack

Power setup depends on your region and whether your configuration includes a UPS.

For North America/Japan (NA/JPN):

NA/JPN: Monitored Single-Phase Power Setup in a G2 Rack (With Rack-Mounted R5000 UPS) (page 203)

NA/JPN: Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5500 XR UPS (page 202)

NA/JPN: Monitored Single-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS) (page 205)

For International (INTL):

INTL: Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS (page 211)

INTL: Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5500 XR UPS (page 212)

INTL: Monitored Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS (page 213)

NA/JPN: Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5500 XR UPS

To set up the power feed connections as shown in Figure 49:

1. Connect two single-phase 30A power feeds, one to the L6-30P input connector on each rack-mounted R5500 XR UPS.

2. Connect two single-phase 30A power feeds to the AF94A PDU NEMA L6-30P (30A, 3-wire) input connectors.

3. Connect the c7000 power supply (PS) C19 cables, starting on the right side of the c7000 input module:

a. Connect the C19 cable from PS1 to the top right PDU, L3-3 receptacle.

b. Connect the C19 cable from PS2 to the bottom right PDU, L1-3 receptacle.

c. Connect the C19 cable from PS3 to the bottom right PDU, L2-3 receptacle.

4. Connect the c7000 power supply cables from the left side of the c7000 input module:

a. Connect the C19 cable from PS4 to the left PDU, L3-3 receptacle.

b. Connect the C19 cable from PS5 to the bottom left PDU, L1-3 receptacle.

c. Connect the C19 cable from PS6 to the bottom left PDU, L2-3 receptacle.

Figure 49 North America/Japan Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5500 XR UPS

NA/JPN: Monitored Single-Phase Power Setup in a G2 Rack (With Rack-Mounted R5000 UPS)

To set up the power feed connections as shown in Figure 50:

1. Connect two single-phase 30A power feeds, one to the L6-30P input connector on each rack-mounted R5000 UPS.

2. Connect two single-phase 30A power feeds to the AF94A PDU NEMA L6-30P (30A, 3-wire) input connectors.

3. Connect the c7000 power supply (PS) C19 cables, starting on the right side of the c7000 input module:

a. Connect the C19 cable from PS1 to the top right PDU, L3-3 receptacle.

b. Connect the C19 cable from PS2 to the bottom right PDU, L1-3 receptacle.

c. Connect the C19 cable from PS3 to the bottom right PDU, L2-3 receptacle.

4. Connect the c7000 power supply cables from the left side of the c7000 input module:

a. Connect the C19 cable from PS4 to the left PDU, L3-3 receptacle.

b. Connect the C19 cable from PS5 to the bottom left PDU, L1-3 receptacle.

c. Connect the C19 cable from PS6 to the bottom left PDU, L2-3 receptacle.

Figure 50 North America/Japan Monitored Single-Phase Power Setup in a G2 Rack (With Rack-Mounted R5000 UPS)
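Because the six power-supply cords fan out across the four PDUs, it is easy to mis-cable a feed. The short Python sketch below simply encodes the PS-to-PDU mapping from the monitored-PDU procedures above as a printable checklist; the structure and names are invented for illustration, and the receptacle labels repeat those used in the steps above.

# Checklist generator for the c7000 PS-to-PDU connections described above.
PS_TO_PDU = {
    "PS1": ("top right PDU", "L3-3"),
    "PS2": ("bottom right PDU", "L1-3"),
    "PS3": ("bottom right PDU", "L2-3"),
    "PS4": ("left PDU", "L3-3"),
    "PS5": ("bottom left PDU", "L1-3"),
    "PS6": ("bottom left PDU", "L2-3"),
}

for power_supply, (pdu, receptacle) in PS_TO_PDU.items():
    print(f"[ ] C19 cable from {power_supply} to the {pdu}, {receptacle} receptacle")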

NA/JPN: Monitored Single-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS)

To set up the single-phase power feed connections as shown in Figure 51:

1. Connect four single-phase 30A power feeds to the four AF94A PDU NEMA L6-30P (30A, 3-wire) input connectors.

2. Connect the c7000 power supply (PS) C19 cables, starting on the right side of the c7000 input module:

a. Connect the C19 cable from PS1 to the top right PDU, L3-3 receptacle.

b. Connect the C19 cable from PS2 to the bottom right PDU, L1-3 receptacle.

c. Connect the C19 cable from PS3 to the bottom right PDU, L2-3 receptacle.

3. Connect the c7000 power supply cables from the left side of the c7000 input module:

a. Connect the C19 cable from PS4 to the left PDU, L3-3 receptacle.

b. Connect the C19 cable from PS5 to the bottom left PDU, L1-3 receptacle.

c. Connect the C19 cable from PS6 to the bottom left PDU, L2-3 receptacle.

Figure 51 North America/Japan Monitored Single-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS)

NA/JPN: Monitored Single-Phase in a G2 Rack PDU Description

Four half-height, single-phase power distribution units (PDUs) are installed to provide redundant power outlets for the c7000 enclosure and the components mounted in the modular cabinet. The PDUs are on opposite sides facing each other and are on swivel brackets that can be rotated to allow for servicing of components. Each PDU has 27 AC receptacles, three circuit breakers, and an AC power cord. The NA/JPN single-phase PDUs are oriented with the AC power cords exiting the modular cabinet at either the top or bottom rear corners of the cabinet, depending on the site's power feed needs. For information about specific PDU input and output characteristics for these PDUs, which are factory-installed in modular cabinets, refer to NA/JPN: Input and Output Power Characteristics in a G2 Rack, Single-Phase Monitored PDUs and c7000s (page 209).

Each single-phase PDU in a modular cabinet has:

24 AC receptacles per PDU (8 per segment) - IEC 320 C13 10A receptacle type

3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type

3 circuit breakers

Each PDU distributes site single-phase power to single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the modular cabinet.

NA/JPN: AC Power Feeds in a G2 Rack, Monitored Single-Phase PDUs

The AC power feed cables for the PDUs are mounted to exit the modular cabinet at either the top or bottom rear corners of the cabinet, depending on what is ordered for the site's power feed. Figure 52 shows the power feed cables on the single-phase NA/JPN PDUs with the AC feed at the bottom of the cabinet and the AC power outlets along the PDU. The single-phase power outlets face each other on opposite sides of the cabinet and are on swivel brackets that can be rotated to allow for servicing of components.

NOTE: For visual clarity, the single-phase AC power feed illustrations show the PDUs facing outward as if their swivel brackets have been rotated. Typically, these PDUs would be on opposite sides facing each other.

Figure 52 Bottom AC Power Feed, Single-Phase NA/JPN Monitored PDUs

208 Figure 53 shows the power feed cables on the single-phase PDUs with AC feed at the top of the cabinet. NOTE: For visual clarity, the single-phase AC power feed illustrations show the PDUs facing outward as if their swivel brackets have been rotated. Typically, these PDUs would be on opposite sides facing each other. Figure 53 Top AC Power Feed in a G2 Rack NA/JPN Single-Phase Monitored PDUs 208 Power Configurations for NonStop BladeSystems in G2 Racks

NA/JPN: Input and Output Power Characteristics in a G2 Rack, Single-Phase Monitored PDUs and c7000s

The cabinet includes four half-height NA/JPN PDUs with these power characteristics:

PDU input characteristics:

200V to 240V AC, single-phase, 30A RMS, 3-wire

6.5 feet (2 m) attached power cord

50/60 Hz

NEMA L6-30P input plug

PDU output characteristics:

3 circuit-breaker-protected 20A load segments

24 AC receptacles per PDU (8 per segment) - IEC 320 C13 10A receptacle type

3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type

NA/JPN: Branch Circuits and Circuit Breakers for a G2 Rack (With Single-Phase Monitored PDUs)

Modular cabinets for NonStop BladeSystems that use a single-phase power configuration contain four half-height PDUs. In cabinets without the optional rack-mounted UPS, each of the four NA/JPN PDUs requires a separate branch circuit of these ratings:

Region: North America/Japan (NA/JPN); Volts (phase-to-phase): 200 to 240; Amps (see the following CAUTION): 30

CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted HP Model R5000 UPS or R5500 XR Integrated UPS.
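The CAUTION above amounts to a simple budget check: the sum of the loads connected to a PDU, and to the branch circuit feeding it, must stay within the circuit capacity required by local codes, which for continuously operating equipment is commonly derated below the breaker rating. The Python sketch below illustrates the arithmetic only; the component loads and the 80 percent continuous-load derating are assumptions made for the example, not values from this guide, and local codes govern.

# Illustrative branch-circuit budget check (values are assumptions, not from this guide).
BRANCH_CIRCUIT_A = 30          # NA/JPN monitored single-phase branch circuit rating
CONTINUOUS_DERATING = 0.8      # common continuous-load derating; confirm with local codes

component_loads_a = {
    "c7000 power supply feed": 10.0,
    "Storage CLIM": 3.0,
    "SAS disk enclosure": 2.5,
}

total_a = sum(component_loads_a.values())
allowed_a = BRANCH_CIRCUIT_A * CONTINUOUS_DERATING
print(f"planned load {total_a:.1f} A vs. allowed {allowed_a:.1f} A -> "
      f"{'OK' if total_a <= allowed_a else 'exceeds branch circuit'}")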

NA/JPN: Circuit Breaker Ratings in a G2 Rack for Single-Phase R5000 UPS

These ratings apply to systems with the optional rack-mounted HP Model R5000 UPS that is used for a single-phase NA/JPN power configuration:

Version: North America/Japan (NA/JPN); Operating voltage settings: 200/208 (2), 220, 230, 240; Power out (VA/Watts): 5000/4500; Input plug: L6-30P; UPS input rating: Dedicated 30 Amp (1)

(1) The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
(2) Factory-default setting.

For further information and specifications on the R5000 UPS, refer to the HP R5000 UPS User Guide. For further information on the HP UPS Network Module supported for the R5000 UPS, refer to the HP UPS Network Module User Guide.

NA/JPN: Circuit Breaker Ratings for Single-Phase R5500 XR UPS in a G2 Rack

These ratings apply to systems with the optional rack-mounted HP Model R5500 XR Integrated UPS that is used for a single-phase NA/JPN power configuration:

Version: North America/Japan (NA/JPN); Operating voltage settings: 200/208 (2), 220, 230, 240; Power out (VA/Watts): 5000/4500; Input plug: L6-30P; UPS input rating: Dedicated 30 Amp (1)

(1) The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
(2) Factory-default setting.

For further information and specifications on the R5500 XR UPS, refer to the HP R5500 UPS User Guide. For further information about the HP UPS Management Module supported for the R5500 XR UPS, refer to the HP UPS Management Module User Guide.

International (INTL) Monitored Single-Phase Power Configuration in a G2 Rack

Power setup depends on your single-phase International power configuration type:

INTL: Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS (page 211)

INTL: Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5500 XR UPS (page 212)

INTL: Monitored Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS (page 213)

INTL: Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS

To set up the power feed connections as shown in Figure 54:

1. Connect two single-phase 32A power feeds, one to the rack-mounted UPS IEC P6 (32A, 3-wire/2-pole) input connector on each R5000 UPS.

2. Connect two single-phase 32A power feeds to the AF95A PDU IEC P6 (32A, 3-wire/2-pole) input connectors.

3. Connect the c7000 power supply (PS) C19 cables, starting on the right side of the c7000 input module:

a. Connect the C19 cable from PS1 to the top right PDU, L3-3 receptacle.

b. Connect the C19 cable from PS2 to the bottom right PDU, L1-3 receptacle.

c. Connect the C19 cable from PS3 to the bottom right PDU, L2-3 receptacle.

4. Connect the c7000 power supply cables from the left side of the c7000 input module:

a. Connect the C19 cable from PS4 to the left PDU, L3-3 receptacle.

b. Connect the C19 cable from PS5 to the bottom left PDU, L1-3 receptacle.

c. Connect the C19 cable from PS6 to the bottom left PDU, L2-3 receptacle.

Figure 54 International Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS

INTL: Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5500 XR UPS

To set up the power feed connections as shown in Figure 55:

1. Connect two single-phase 32A power feeds, one to the rack-mounted UPS IEC P6 (32A, 3-wire/2-pole) input connector on each R5500 XR UPS.

2. Connect two single-phase 32A power feeds to the AF95A PDU IEC P6 (32A, 3-wire/2-pole) input connectors.

3. Connect the c7000 power supply (PS) C19 cables, starting on the right side of the c7000 input module:

a. Connect the C19 cable from PS1 to the top right PDU, L3-3 receptacle.

b. Connect the C19 cable from PS2 to the bottom right PDU, L1-3 receptacle.

c. Connect the C19 cable from PS3 to the bottom right PDU, L2-3 receptacle.

4. Connect the c7000 power supply cables from the left side of the c7000 input module:

a. Connect the C19 cable from PS4 to the left PDU, L3-3 receptacle.

b. Connect the C19 cable from PS5 to the bottom left PDU, L1-3 receptacle.

c. Connect the C19 cable from PS6 to the bottom left PDU, L2-3 receptacle.

Figure 55 International Monitored Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5500 XR UPS

INTL: Monitored Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS

To set up the single-phase power feed connections as shown in Figure 56:

1. Connect four single-phase 32A power feeds to the four AF95A PDU IEC P6 (32A, 3-wire/2-pole) input connectors.

2. Connect the c7000 power supply (PS) C19 cables, starting on the right side of the c7000 input module:

a. Connect the C19 cable from PS1 to the top right PDU, L3-3 receptacle.

b. Connect the C19 cable from PS2 to the bottom right PDU, L1-3 receptacle.

c. Connect the C19 cable from PS3 to the bottom right PDU, L2-3 receptacle.

3. Connect the c7000 power supply cables from the left side of the c7000 input module:

a. Connect the C19 cable from PS4 to the left PDU, L3-3 receptacle.

b. Connect the C19 cable from PS5 to the bottom left PDU, L1-3 receptacle.

c. Connect the C19 cable from PS6 to the bottom left PDU, L2-3 receptacle.

Figure 56 International Monitored Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS

INTL: Single-Phase Monitored in a G2 Rack PDU Description

Four half-height, single-phase power distribution units (PDUs) are installed to provide redundant power outlets for the c7000 enclosure and the components mounted in the modular cabinet. The PDUs are on opposite sides facing each other and are on swivel brackets that can be rotated to allow for servicing of components. Each PDU has 27 AC receptacles, three circuit breakers, and an AC power cord. The INTL single-phase PDUs are oriented with the AC power cords exiting the modular cabinet at either the top or bottom rear corners of the cabinet, depending on the site's power feed needs.

For information about specific PDU input and output characteristics for these PDUs, which are factory-installed in modular cabinets, refer to INTL: Input and Output Power Characteristics in a G2 Rack Single-Phase Monitored PDUs (page 215).

Each single-phase PDU in a modular cabinet has:

24 AC receptacles per PDU (8 per segment) - IEC 320 C13 10A receptacle type

3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type

3 circuit breakers

Each PDU distributes site single-phase power to single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the modular cabinet.

INTL: Input and Output Power Characteristics in a G2 Rack Single-Phase Monitored PDUs

The International PDU power characteristics are:

PDU input characteristics:

200V to 240V AC, single-phase, 32A RMS, 3-wire

6.5 feet (2 m) attached harmonized power cord

50/60 Hz

IEC P6 3-pin, 32A input plug

PDU output characteristics:

3 circuit-breaker-protected 20A load segments

24 AC receptacles per PDU (8 per segment) - IEC 320 C13 10A receptacle type

3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type

INTL: AC Power Feeds in a G2 Rack, Single-Phase Monitored PDUs

The AC power feed cables for the International monitored PDUs are mounted to exit the modular cabinet at either the top or bottom rear corners of the cabinet, depending on what is ordered for the site's power feed. Figure 57 shows the power feed cables on these PDUs with the AC feed at the bottom of the cabinet and the AC power outlets along the PDU. The single-phase power outlets face each other on opposite sides of the cabinet and are on swivel brackets that can be rotated to allow for servicing of components.

NOTE: For visual clarity, the single-phase AC power feed illustrations show the PDUs facing outward as if their swivel brackets have been rotated. Typically, these PDUs would be on opposite sides facing each other.

216 Figure 57 Bottom AC Power Feed in a G2 Rack, Single-Phase International Monitored PDUs Figure 58 shows the power feed cables on the single-phase International PDUs with AC feed at the top of the cabinet. NOTE: For visual clarity, the single-phase AC power feed illustrations show the PDUs facing outward as if their swivel brackets have been rotated. Typically, these PDUs would be on opposite sides facing each other. 26 Power Configurations for NonStop BladeSystems in G2 Racks

217 Figure 58 Top AC Power Feed in a G2 Rack, Single-Phase International Monitored PDUs INTL: Branch Circuits and Circuit Breakers for a G2 Rack Single-Phase Monitored PDUs Modular cabinets for NonStop BladeSystems that use a single-phase power configuration contain four half-height PDUs. In cabinets without the optional rack-mounted UPS, each of the four INTL PDUs requires a separate branch circuit of these ratings: Region Volts (Phase-to-Phase) International (INTL) 200 to 240 Category D circuit breaker is required. Amps (see following CAUTION ) 32 CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted HP Model R5000 UPS or R5500 XR Integrated UPS. NonStop BladeSystem Power Distribution G2 Rack 27

INTL: Circuit Breaker Ratings for R5500 XR Single-Phase UPS in a G2 Rack

These ratings apply to systems with the optional rack-mounted HP Model R5500 XR Integrated UPS that is used for an INTL single-phase power configuration:

Version: International (INTL); Operating voltage settings: 200, 230 (2), 240; Power out (VA/Watts): 5000/5400 (if set at 200/208, then 5000/4500); Input plug: IEC 32 Amp; UPS input rating: Dedicated 32 Amp (1)

(1) The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
(2) Factory-default setting.

For further information and specifications on the R5500 XR UPS, refer to the HP R5500 UPS User Guide. For further information about the HP UPS Management Module supported for the R5500 XR UPS, refer to the HP UPS Management Module User Guide.

INTL: Circuit Breaker Ratings for Single-Phase R5000 UPS in a G2 Rack

These ratings apply to systems with the optional rack-mounted HP Model R5000 Integrated UPS that is used for an INTL single-phase power configuration:

Version: International (INTL); Operating voltage settings: 200, 230 (2), 240; Power out (VA/Watts): 5000/4500 (if set at 200/208, then 5000/4500); Input plug: IEC 32 Amp; UPS input rating: Dedicated 32 Amp (1)

(1) The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
(2) Factory-default setting.

For further information and specifications on the R5000 UPS, refer to the HP UPS R5000 User Guide. For further information on the HP UPS Network Module supported for the R5000 UPS, refer to the HP UPS Network Module User Guide.

219 Single-Phase Power Setup in a G2 Rack, Modular PDU Power set up is only supported for the NA/JPN region with or without a UPS: NA/JPN: Modular Single-Phase Power Setup in a G2 Rack With R5000 Rack-Mounted UPS (page 29) NA/JPN: Modular Single-Phase Power Setup in a G2 Rack With R5500XR Rack-Mounted UPS (page 22) NA/JPN: Single-Phase Modular Power Setup in a G2 Rack Without Rack-Mounted UPS (page 224) NA/JPN: Modular Single-Phase Power Setup in a G2 Rack With R5000 Rack-Mounted UPS To setup the power feed connections as shown in Figure 59:. Connect two single-phase 30A power feeds to the rack-mounted UPS L6-30P input connector on each R5000 UPS. 2. Connect two single-phase 30A power feeds to the modular PDU L6-30P (30A, 3 wire) input connectors. 3. Connect the modular PDUs to the extension bar PDUs starting with the modular PDUs located in U and U2 (bottom-exit power) or U4 and U42 (if your system uses top-exit power feeds). All the output cables for the modular PDU are C9 cables NOTE: PDU 3 is located behind PDU. PDU 4 is located behind PDU 2. PDU 3 and PDU 4 are accessed from the front of the rack. a. Facing the rear of the rack and PDU located in U or U4: Connect the cable from PDU, receptacle to the top right extension bar, L input receptacle. Connect the cable from PDU, receptacle 2 to the top right extension bar, L2 input receptacle. Connect the cable from PDU, receptacle 4 to the top right extension bar, L4 input receptacle. b. Facing the rear of the rack and PDU 2 located in U2 or U42: Connect the cable from PDU 2, receptacle 3 to the lower right extension bar, L3 input receptacle. c. Facing the front of the rack and PDU 3 (which is behind PDU ) in U or U4: Connect the cable from PDU 3, receptacle to the top left extension bar, L input receptacle. Connect the cable from PDU 3, receptacle 2 to the top left extension bar, L2 input receptacle. Connect the cable from PDU 3, receptacle 4 to the top left extension bar, L4 input receptacle. NonStop BladeSystem Power Distribution G2 Rack 29

220 d. Facing the front of the rack and PDU 4 (which is behind PDU 2) in U2 or U42: Connect the cable from PDU 4, receptacle 3 to the lower left extension bar, L3 input receptacle. 4. All the c7000 power supply (PS) cables are C9 cables. Connect these cables to the applicable receptacle on the modular PDU: a. Connect the c7000 power supply cables from the right-side of the c7000 input module to the modular PDU: Connect the PS cable to PDU 2, receptacle in U2 or U42. Connect the PS2 cable to PDU 2, receptacle 2 in U2 or U42. Connect the PS3 cable to PDU, receptacle 3 in U or U4. b. Connect the c7000 power supply cables from the left-side of the c7000 input module to the modular PDU: Connect the PS4 cable to PDU 4, receptacle in U2 or U42. Connect the PS5 cable to PDU 4, receptacle 2 in U2 or U42. Connect the PS6 cable to PDU 3, receptacle 3 in U or U4. 5. Connect the BladeSystem components as shown in the diagram following the best practice of distributing connections between PDUs on either side, especially when there are two power cords per component. 220 Power Configurations for NonStop BladeSystems in G2 Racks

221 Figure 59 North America/Japan Modular Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5000 UPS NA/JPN: Modular Single-Phase Power Setup in a G2 Rack With R5500XR Rack-Mounted UPS To setup the power feed connections as shown in Figure 49:. Connect two single-phase 30A power feeds to the rack-mounted UPS L6-30P input connector on each R5500 XR UPS. 2. Connect two single-phase 30A power feeds to the modular PDU L6-30P (30A, 3 wire) input connectors. 3. Connect the modular PDUs to the extension bar PDUs starting with the modular PDUs located in U and U2 (bottom-exit power) or U4 and U42 (if your system uses top-exit power feeds). All the output cables for the modular PDU are C9 cables NOTE: PDU 3 is located behind PDU. PDU 4 is located behind PDU 2. PDU 3 and PDU 4 are accessed from the front of the rack. NonStop BladeSystem Power Distribution G2 Rack 22

222 a. Facing the rear of the rack and PDU located in U or U4: Connect the cable from PDU, receptacle to the top right extension bar, L input receptacle. Connect the cable from PDU, receptacle 2 to the top right extension bar, L2 input receptacle. Connect the cable from PDU, receptacle 4 to the top right extension bar, L4 input receptacle. b. Facing the rear of the rack and PDU 2 located in U2 or U42: Connect the cable from PDU 2, receptacle 3 to the lower right extension bar, L3 input receptacle. c. Facing the front of the rack and PDU 3 (which is behind PDU ) in U or U4: Connect the cable from PDU 3, receptacle to the top left extension bar, L input receptacle. Connect the cable from PDU 3, receptacle 2 to the top left extension bar, L2 input receptacle. Connect the cable from PDU 3, receptacle 4 to the top left extension bar, L4 input receptacle. d. Facing the front of the rack and PDU 4 (which is behind PDU 2) in U2 or U42: Connect the cable from PDU 4, receptacle 3 to the lower left extension bar, L3 input receptacle. 4. All the c7000 power supply (PS) cables are C9 cables. Connect these cables to the applicable receptacle on the modular PDU: a. Connect the c7000 power supply cables from the right-side of the c7000 input module to the modular PDU: Connect the PS cable to PDU 2, receptacle in U2 or U42. Connect the PS2 cable to PDU 2, receptacle 2 in U2 or U42. Connect the PS3 cable to PDU, receptacle 3 in U or U4. b. Connect the c7000 power supply cables from the left-side of the c7000 input module to the modular PDU: Connect the PS4 cable to PDU 4, receptacle in U2 or U42. Connect the PS5 cable to PDU 4, receptacle 2 in U2 or U42. Connect the PS6 cable to PDU 3, receptacle 3 in U or U4. 5. Connect the BladeSystem components as shown in the diagram following the best practice of distributing connections between PDUs on either side, especially when there are two power cords per component. 222 Power Configurations for NonStop BladeSystems in G2 Racks

223 Figure 60 North America/Japan Modular Single-Phase Power Setup in a G2 Rack With Rack-Mounted R5500 XR UPS NonStop BladeSystem Power Distribution G2 Rack 223

224 NA/JPN: Single-Phase Modular Power Setup in a G2 Rack Without Rack-Mounted UPS To setup the single-phase power feed connections as shown in Figure 6 (page 225):. Connect four single-phase 30A power feeds to the four modular PDU L6-30P (30A, 3 wire) input connectors. 2. Connect the modular PDUs to the extension bar PDUs starting with the modular PDUs located in U and U2 (bottom-exit power) or U4 and U42 (if your system uses top-exit power feeds). All the output cables for the modular PDU are C9 cables. NOTE: PDU 3 is located behind PDU. PDU 4 is located behind PDU 2. PDU 3 and PDU 4 are accessed from the front of the rack. a. Facing the rear of the rack and PDU located in U or U4: Connect the cable from PDU, receptacle to the top right extension bar, L input receptacle. Connect the cable from PDU, receptacle 2 to the top right extension bar, L2 input receptacle. Connect the cable from PDU, receptacle 4 to the top right extension bar, L4 input receptacle. b. Facing the rear of the rack and PDU 2 located in U2 or U42: Connect the cable from PDU 2, receptacle 3 to the lower right extension bar, L3 input receptacle. c. Facing the front of the rack and PDU 3 (which is behind PDU ) in U or U4: Connect the cable from PDU 3, receptacle to the top left extension bar, L input receptacle. Connect the cable from PDU 3, receptacle 2 to the top left extension bar, L2 input receptacle. Connect the cable from PDU 3, receptacle 4 to the top left extension bar, L4 input receptacle. d. Facing the front of the rack and PDU 4 (which is behind PDU 2) in U2 or U42: Connect the cable from PDU 4, receptacle 3 to the lower left extension bar, L3 input receptacle. 3. All the c7000 power supply (PS) cables are C9 cables. Connect these cables to the applicable receptacle on the modular PDU: a. Connect the c7000 power supply cables from the right-side of the c7000 input module to the modular PDU: Connect the PS cable to PDU 2, receptacle in U2 or U42. Connect the PS2 cable to PDU 2, receptacle 2 in U2 or U42. Connect the PS3 cable to PDU, receptacle 3 in U or U4. b. Connect the c7000 power supply cables from the left-side of the c7000 input module to the modular PDU: Connect the PS4 cable to PDU 4, receptacle in U2 or U42. Connect the PS5 cable to PDU 4, receptacle 2 in U2 or U42. Connect the PS6 cable to PDU 3, receptacle 3 in U or U4. 4. Connect the BladeSystem components as shown in the diagram following the best practice of distributing connections between PDUs on either side, especially when there are two power cords per component. 224 Power Configurations for NonStop BladeSystems in G2 Racks

225 Figure 6 North America/Japan Modular Single-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS NonStop BladeSystem Power Distribution G2 Rack 225

NA/JPN: Single-Phase Modular in a G2 Rack PDU Description

Four modular power distribution units (PDUs) are installed to provide redundant power outlets for the c7000 enclosure and the components mounted in the modular cabinet. Each 1U rack-mounted modular PDU comes with four modular PDU extension bars. Two modular PDUs share the same U space; for example, PDU 1 and PDU 3 are in the same U location of the rack (U01), and PDU 2 and PDU 4 share the U02 rack location. The NA/JPN single-phase modular PDUs are oriented with the AC power cords exiting the modular cabinet at either the top or bottom rear corners of the cabinet, depending on the site's power feed needs. For information about specific PDU input and output characteristics for these PDUs, which are factory-installed in modular cabinets, refer to NA/JPN: Input and Output Power Characteristics in a G2 Rack, Single-Phase Modular PDU and Extension Bars (page 228).

Each single-phase PDU in a modular cabinet has:

28 AC receptacles (7 per extension bar) - IEC 320 C13 10A receptacle type

4 AC receptacles per modular PDU - IEC 320 C19 16A receptacle type

4 circuit breakers

Each PDU distributes site single-phase power to single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the modular cabinet.

NA/JPN: AC Power Feeds, Single-Phase Modular PDUs

The AC power feed cables for the PDUs are mounted to exit the modular cabinet at either the top or bottom rear corners of the cabinet, depending on what is ordered for the site's power feed. Figure 62 shows the power feed cables on the single-phase NA/JPN modular PDUs with the AC feed at the bottom of the cabinet and the AC power outlets along the PDU.

227 Figure 62 Bottom AC Power Feed in a G2 Rack Single-Phase NA/JPN Modular PDUs Figure 63 shows the power feed cables on the modular NA/JPN single-phase PDUs with AC feed at the top of the cabinet. NonStop BladeSystem Power Distribution G2 Rack 227

228 Figure 63 Top AC Power Feed in a G2 Rack NA/JPN Single-Phase Modular PDUs NA/JPN: Input and Output Power Characteristics in a G2 Rack, Single-Phase Modular PDU and Extension Bars The cabinet includes four power distribution units (PDUs) with these power characteristics: PDU input characteristics 200V to 240V AC, single-phase 30A RMS, 3-wire 2 feet (3.6 m) attached power cord 50/60Hz NEMA L6-30P input plug as shown below PDU output characteristics 4 IEC 320 C9 receptacles per PDU with 5A circuit-breaker labels 228 Power Configurations for NonStop BladeSystems in G2 Racks

Extension bar input characteristics:

200V to 240V AC, single-phase, 24A RMS, 3-wire

50/60 Hz

IEC 320 C20 input plug

6.5 feet (2.0 m) attached power cord

Extension bar output characteristics:

7 IEC 320 C13 receptacles per extension bar, with 10A maximum per outlet

NA/JPN: Branch Circuits and Circuit Breakers in a G2 Rack, Single-Phase Modular PDUs

Modular cabinets for NonStop BladeSystems that use a single-phase modular power configuration contain four modular PDUs. In cabinets without the optional rack-mounted UPS, each of the four NA/JPN modular PDUs requires a separate branch circuit of these ratings:

Region: North America/Japan (NA/JPN); Volts (phase-to-phase): 200 to 240; Amps (see the following CAUTION): 30

CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes either optional rack-mounted single-phase UPS (HP Model R5000 or R5500 XR Integrated UPS).

NA/JPN: Circuit Breaker Ratings for Single-Phase R5000 UPS in a G2 Rack

These ratings apply to systems with the optional rack-mounted HP Model R5000 UPS that is used for a single-phase modular NA/JPN power configuration:

Version: North America/Japan (NA/JPN); Operating voltage settings: 200/208 (2), 220, 230, 240; Power out (VA/Watts): 5000/4500; Input plug: L6-30P; UPS input rating: Dedicated 30 Amp (1)

(1) The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
(2) Factory-default setting.

For further information and specifications on the R5000 UPS, refer to the HP R5000 UPS User Guide.

230 NA/JPN: Circuit Breaker Ratings for Single-Phase R5500 XR UPS These ratings apply to systems with the optional rack-mounted HP Model R5500 XR Integrated UPS that is used for a single-phase modular NA/JPN power configuration: Version Operating Voltage Settings Power Out (VA/Watts) Input Plug UPS Input Rating North America/Japan (NA/JPN) 200/208 2, 220, 230, /4500 L6-30P Dedicated 30 Amp The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS. 2 Factory-default setting For further information and specifications on the R5500 XR UPS, refer to the HP R5500 UPS User Guide. This guide is available at: Power Configurations for NonStop BladeSystems in G2 Racks

NonStop BladeSystem Three-Phase Power Distribution in a G2 Rack

North America/Japan (NA/JPN) and International (INTL) are the supported regions for three-phase NonStop BladeSystems. For these regions, there are four different versions of the rack-level PDU, depending on whether you are using modular or monitored PDUs. For c7000 three-phase power setup details, refer to the instructions for your PDU:

Three-Phase Power Setup in a G2 Rack, Monitored PDUs (page 231)

Three-Phase Power Setup in a G2 Rack, Modular PDUs (page 240)

Three-phase power configurations require 200V to 240V distribution and careful attention to phase load balancing. For more information, refer to Phase Load Balancing (page 253).

The NonStop BladeSystem's three-phase c7000 enclosure contains an AC Input Module that provides N + N redundant power distribution for the power configurations. This power module comes with a pair of power cords that provide direct AC power feeds to the c7000 enclosure. One c7000 power feed connects to a rack PDU that is powered from the main power grid. The other c7000 power feed connects to a rack PDU that is powered from either the backup power grid or from the R12000/3 UPS installed in the rack. These PDUs and the rack UPS are dedicated to the c7000. For the other components in the rack, two other PDUs are available (main and backup), and a second, optional rack UPS is available.

Three-Phase Power Setup in a G2 Rack, Monitored PDUs

Power setup is based on your region and whether your configuration includes a UPS:

NA/JPN: Monitored PDU in a G2 Rack: Three-Phase Power Setup With Rack-Mounted UPS (page 231)

NA/JPN: Monitored PDU in a G2 Rack: Three-Phase Power Setup Without Rack-Mounted UPS (page 232)

INTL: Monitored PDU: Three-Phase Power Setup in a G2 Rack With Rack-Mounted UPS (page 237)

INTL: Monitored PDU: Three-Phase Power Setup Without Rack-Mounted UPS (page 237)

NA/JPN: Monitored PDU in a G2 Rack: Three-Phase Power Setup With Rack-Mounted UPS

To set up the power feed connections as shown in Figure 64:

1. Connect one 3-phase 60A power feed to the rack-mounted UPS IEC P9 (60A, 5-wire) input connector.

2. Connect one 3-phase 30A power feed to the AF504A PDU NEMA L15-30P (30A, 4-wire) input connector.

3. Connect one 3-phase 30A power feed to the c7000 enclosure's NEMA L15-30P (30A, 4-wire/3-pole) input connector.

Figure 64 North America/Japan Monitored 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS

NA/JPN: Monitored PDU in a G2 Rack: Three-Phase Power Setup Without Rack-Mounted UPS

To set up the power feed connections as shown in Figure 65:

1. Connect two 3-phase 30A power feeds to the two AF504A PDU NEMA L15-30P (30A, 4-wire/3-pole) input connectors.

2. Connect two 3-phase 30A power feeds to the two NEMA L15-30P (30A, 4-wire/3-pole) input connectors within the c7000 enclosure.

Figure 65 North America/Japan Monitored 3-Phase Power Setup Without Rack-Mounted UPS

NA/JPN: Monitored Three-Phase in a G2 Rack PDU Description

Two monitored three-phase NA/JPN power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the modular cabinet. The PDUs are oriented inward, facing the components within the rack. Each PDU is 60 inches long and has 39 AC receptacles, three circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the modular cabinet at either the top or bottom rear corners of the cabinet, depending on the site's power feed needs. For information about specific PDU input and output characteristics for these PDUs, which are factory-installed in modular cabinets, refer to NA/JPN: Input and Output Power Characteristics in a G2 Rack, Three-Phase Monitored PDUs and c7000s (page 235).

Each three-phase monitored PDU in a modular cabinet has:

36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type

3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type

3 circuit breakers

These 200V to 240V AC, three-phase delta NA/JPN PDUs receive power from the site AC power source. Each PDU distributes site three-phase power to 39 single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the modular cabinet.

234 NA/JPN: AC Power Feeds in a G2 Rack, Three-Phase Monitored PDUs The AC power feed cables for the PDUs are mounted to exit the modular cabinet at either the top or bottom rear corners of the cabinet depending on what is ordered for the site's power feed. Bottom AC Power Feed in a G2 Rack, Three-Phase (Monitored NA/JPN PDUs) (page 234) shows the power feed cables on monitored three-phase PDUs with AC feed at the bottom of the cabinet and the AC power outlets along the PDU. The monitored three-phase power outlets face in toward the components in the cabinet. Figure 66 Bottom AC Power Feed in a G2 Rack, Three-Phase (Monitored NA/JPN PDUs) Figure 67 shows the power feed cables on the three-phase monitored PDUs with AC feed at the top of the cabinet. 234 Power Configurations for NonStop BladeSystems in G2 Racks

Figure 67 Top AC Power Feed in a G2 Rack, Three-Phase (Monitored NA/JPN PDUs)

NA/JPN: Input and Output Power Characteristics in a G2 Rack, Three-Phase Monitored PDUs and c7000s

The cabinet includes two power distribution units (PDUs) with these power characteristics:

PDU input characteristics:

200V to 240V AC, 3-phase delta, 30A RMS, 4-wire

6.5 feet (2 m) attached power cord

50/60 Hz

NEMA L15-30P input plug

PDU output characteristics:

3 circuit-breaker-protected 13.86A load segments

36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type

3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type

The cabinet includes a c7000 input with these power characteristics:

c7000 input characteristics:

200V to 208V AC, 3-phase delta, 30A RMS, 4-wire

10 feet (3.5 m) attached power cord

50/60 Hz

NEMA L15-30P input plug

NA/JPN: Branch Circuits and Circuit Breakers in a G2 Rack, Monitored Three-Phase

Modular cabinets for the NonStop BladeSystem that use a three-phase power configuration with modular or monitored three-phase PDUs contain two PDUs. In cabinets without the optional rack-mounted UPS, each of the two PDUs requires a separate branch circuit of these ratings:

North America/Japan (NA/JPN) PDU: 200 to 240 volts (phase-to-phase), 30 amps (see the following CAUTION)
North America/Japan (NA/JPN) c7000: 200 to 208 volts (phase-to-phase), 30 amps (see the following CAUTION)

CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted HP Model R12000/3 Integrated UPS.

NA/JPN: Circuit Breaker Ratings, Monitored Three-Phase UPS in a G2 Rack

These ratings apply to systems with the optional rack-mounted HP Model R12000/3 Integrated UPS that is used for a three-phase power configuration:

Version: North America/Japan (NA/JPN); Power out (VA/Watts): 12000/12000; Input plug: IEC P9 5-pin, 60 Amp; UPS input rating: Dedicated 60 Amp (1)

(1) The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
(2) Factory-default setting.

For further information and specifications on the R12000/3 UPS (12 kVA model), refer to the HP 3 Phase UPS User Guide for the 12 kVA model. This guide is available at:
For information about the UPS management module supported for the R12000/3 UPS, refer to the HP UPS Management Module User Guide located at:

INTL: Monitored PDU: Three-Phase Power Setup in a G2 Rack With Rack-Mounted UPS
To set up the power feed connections as shown in International Monitored 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS (page 237):
1. Connect one 3-phase 32A power feed to the rack-mounted UPS IEC309 532P6 (32A, 5 wire/4 pole) input connector.
2. Connect one 3-phase 16A power feed to the AF508A PDU IEC309 516P6 (16A, 5 wire/4 pole) input connector.
3. Connect one 3-phase 16A power feed to the c7000 enclosure's IEC309 516P6 (16A, 5 wire/4 pole) input connector.

Figure 68 International Monitored 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS

INTL: Monitored PDU: Three-Phase Power Setup Without Rack-Mounted UPS
To set up the power feed connections as shown in Figure 69:

1. Connect two 3-phase 16A power feeds to the two AF508A PDU IEC309 516P6 (16A, 5 wire/4 pole) input connectors.
2. Connect two 3-phase 16A power feeds to the two IEC309 516P6 (16A, 5 wire/4 pole) input connectors within the c7000 enclosure.

Figure 69 International Monitored 3-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS

INTL: Three-Phase Monitored in a G2 Rack PDU Description
Two monitored three-phase INTL power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the modular cabinet. The PDUs are oriented inward, facing the components within the rack. Each PDU is 60 inches long and has 39 AC receptacles, three circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the modular cabinet at either the top or bottom rear corner of the cabinet, depending on the site's power feed needs. For information about specific PDU input and output characteristics for PDUs factory-installed in modular cabinets, refer to INTL: Input and Output Power Characteristics, Three-Phase Monitored PDU and c7000 (page 239).
Each three-phase monitored PDU in a modular cabinet has:
36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type
3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type
3 circuit breakers
These 380V to 415V AC, three-phase wye International PDUs receive power from the site AC power source.

Each PDU distributes site three-phase power to 39 single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the modular cabinet.

INTL: Input and Output Power Characteristics, Three-Phase Monitored PDU and c7000
The power characteristics of the monitored three-phase INTL PDU are:
PDU input characteristics
380 to 415 V AC, 3-phase Wye, 16A RMS, 5-wire
6.5 feet (2 m) attached harmonized power cord
50/60Hz
IEC309 516P6 5-pin, 16A input plug
PDU output characteristics
3 circuit-breaker-protected 16A load segments
36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type
3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type
The cabinet includes a c7000 input with these power characteristics:
c7000 input characteristics
346 to 415 VAC line to line; 200 to 240 V AC line to neutral; 3-phase Wye, 16A RMS, 5-wire
10 feet (3.5 m) attached harmonized power cord
50/60Hz
IEC309 516P6 5-pin, 16A input plug

INTL: Branch Circuits and Circuit Breakers in a G2 Rack, Three-Phase
Modular cabinets for the NonStop BladeSystem that use a three-phase power configuration with monitored three-phase PDUs contain two PDUs.

In cabinets without the optional rack-mounted UPS, each of the two PDUs requires a separate branch circuit of these ratings:
International PDU: Volts (Phase-to-Phase) 380 to 415; Amps: 16 (see following CAUTION; a Category D circuit breaker is required)
International c7000: Volts (Phase-to-Phase) 346 to 415; Amps: 16 (see following CAUTION; a Category D circuit breaker is required)
CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted HP Model R12000/3 Integrated UPS.

INTL: Circuit Breaker Ratings for Three-Phase UPS in a G2 Rack
These ratings apply to systems with the optional rack-mounted HP Model R12000/3 Integrated UPS that is used for an INTL three-phase power configuration:
Version: International (380-415)
Operating Voltage Settings: factory-default setting
Power Out (VA/Watts): 12000/12000
Input Plug: IEC309 532P6 5-pin, 32 Amp
UPS Input Rating: Dedicated 32 Amp (the UPS input requires a dedicated, unshared branch circuit that is suitably rated for your specific UPS)
For further information and specifications on the R12000/3 UPS (12 kVA model), refer to the HP 3 Phase UPS User Guide for the 12 kVA model. This guide is available at:
For information about the UPS management module supported for the R12000/3 UPS, refer to the HP UPS Management Module User Guide located at:

Three-Phase Power Setup in a G2 Rack, Modular PDUs
Power setup is based on your region and whether your configuration includes a UPS:
NA/JPN: Modular PDU in a G2 Rack: Three-Phase Power Setup With Rack-Mounted UPS (page 241)
NA/JPN: Modular PDU: Three-Phase Power Setup Without Rack-Mounted UPS (page 241)
INTL: Modular PDU: Three-Phase Power Setup in a G2 Rack (With Rack-Mounted UPS) (page 246)
INTL: Modular PDU: Three-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS) (page 247)

NA/JPN: Modular PDU in a G2 Rack: Three-Phase Power Setup With Rack-Mounted UPS
To set up the power feed connections as shown in Figure 70:
1. Connect one 3-phase 60A power feed to the rack-mounted UPS IEC309 560P9 (60A, 5 wire) input connector.
2. Connect one 3-phase 30A power feed to the AF52A PDU NEMA L15-30P (30A, 4 wire) input connector.
3. Connect one 3-phase 30A power feed to the c7000 enclosure's NEMA L15-30P (30A, 4 wire) input connector.

Figure 70 North America/Japan Modular 3-Phase Power Setup in a G2 Rack With Rack-Mounted UPS

NA/JPN: Modular PDU: Three-Phase Power Setup Without Rack-Mounted UPS
To set up the power feed connections as shown in Figure 71:
1. Connect two 3-phase 30A power feeds to the two AF52A PDU NEMA L15-30P (30A, 4 wire/3 pole) input connectors.
2. Connect two 3-phase 30A power feeds to the two NEMA L15-30P (30A, 4 wire/3 pole) input connectors within the c7000 enclosure.

Figure 71 North America/Japan Modular 3-Phase Power Setup in a G2 Rack Without Rack-Mounted UPS

NA/JPN: Description of Modular Three-Phase PDU in G2 Rack
Two three-phase modular power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the modular cabinet. Each 1U rack-mounted modular PDU comes with four modular PDU extension bars. The PDUs are oriented facing each other within the rack. Each PDU has 28 AC receptacles, six circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the modular cabinet at either the top or bottom rear corner of the cabinet, depending on the site's power feed needs. For information about specific PDU input and output characteristics for PDUs factory-installed in modular cabinets, refer to NA/JPN: Input and Output Power Characteristics, Three-Phase Modular PDU, c7000, and Extension Bars in a G2 Rack (page 244).
Each three-phase modular PDU in a modular cabinet has:
28 AC receptacles per PDU (7 per extension bar) - IEC 320 C13 10A receptacle type
6 circuit breakers
These 208V AC, three-phase delta North America/Japan (NA/JPN) PDUs receive power from the site AC power source. Each PDU distributes site three-phase power to 34 single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the modular cabinet.

NA/JPN: Modular PDU in G2 Rack: AC Power Feeds to Cabinets
The AC power feed cables for the NA/JPN modular PDUs are mounted to exit the modular cabinet at either the top or bottom rear corners of the cabinet, depending on what is ordered for the site's power feed. Bottom AC Power Feed in G2 Rack, Three Phase (NA/JPN Modular PDUs) (page 243) shows the power feed cables on modular three-phase PDUs with AC feed at the bottom of the cabinet and the output connections for the three-phase modular PDU.

Figure 72 Bottom AC Power Feed in G2 Rack, Three Phase (NA/JPN Modular PDUs)

Top AC Power Feed in G2 Rack, Three-Phase (NA/JPN Modular PDU) (page 244) shows the three-phase modular PDUs with AC feed at the top of the cabinet.

Figure 73 Top AC Power Feed in G2 Rack, Three-Phase (NA/JPN Modular PDU)

NA/JPN: Input and Output Power Characteristics, Three-Phase Modular PDU, c7000, and Extension Bars in a G2 Rack
The cabinet includes two power distribution units (PDUs) with these power characteristics:
PDU input characteristics
208V AC, 3-phase delta, 4.52A RMS, 4-wire
12 feet (3.6 m) attached power cord
50/60Hz
NEMA L15-30 input plug
PDU output characteristics
6 IEC 320 C19 receptacles per PDU with 20A circuit breakers (labeled L1, L2, L3, L4, L5, and L6)

Extension bar input characteristics
100V to 240V AC, 3-phase delta, 16A RMS, 4-wire
50/60Hz
IEC 320 C20 input plug
6.5 feet (2.0 m) attached power cord
Extension bar output characteristics
7 IEC 320 C13 receptacles per PDU with 10A maximum per outlet
The cabinet includes a c7000 input with these power characteristics:
c7000 input characteristics
200V to 208V AC, 3-phase delta, 30A RMS, 4-wire
10 feet (3.5 m) attached power cord
50/60Hz
NEMA L15-30P input plug

NA/JPN: Branch Circuits and Circuit Breakers, Modular Three-Phase in G2 Rack
Modular cabinets for the NonStop BladeSystem that use a three-phase power configuration with modular three-phase PDUs contain two PDUs. In cabinets without the optional rack-mounted UPS, each of the two PDUs requires a separate branch circuit of these ratings:
North America/Japan (NA/JPN) PDU: Volts (Phase-to-Phase) 200 to 240; Amps: see following CAUTION
North America/Japan (NA/JPN) c7000: Volts (Phase-to-Phase) 200 to 208; Amps: see following CAUTION
CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted HP Model R12000/3 Integrated UPS.
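As a quick sanity check against the extension-bar limits quoted above (a 10 A maximum per C13 outlet and a 16 A RMS extension-bar input) and the branch-circuit CAUTION, the minimal Python sketch below tallies hypothetical device currents plugged into one extension bar. The device names and ampere values are invented examples, not HP figures; substitute the rated or measured currents of your own components.

# Illustrative sketch only: the device names and currents are assumptions.
PER_OUTLET_LIMIT_A = 10.0   # IEC 320 C13 per-outlet maximum quoted above
BAR_INPUT_LIMIT_A = 16.0    # extension-bar input rating quoted above

devices_on_bar = {          # hypothetical loads, in amps at 200 to 240 V
    "CLIM 1": 1.8,
    "CLIM 2": 1.8,
    "SAS disk enclosure": 2.5,
    "Maintenance switch": 0.5,
}

for name, amps in devices_on_bar.items():
    if amps > PER_OUTLET_LIMIT_A:
        print(f"{name}: {amps} A exceeds the {PER_OUTLET_LIMIT_A} A per-outlet limit")

total_amps = sum(devices_on_bar.values())
print(f"total on this extension bar: {total_amps:.1f} A of {BAR_INPUT_LIMIT_A} A")
if total_amps > BAR_INPUT_LIMIT_A:
    print("move one or more devices to another extension bar or load segment")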

NA/JPN: Circuit Breaker Ratings, Modular Three-Phase UPS in G2 Rack
These ratings apply to systems with the optional rack-mounted HP Model R12000/3 Integrated UPS that is used for a three-phase power configuration:
Version: North America/Japan (NA/JPN)
Operating Voltage Settings: factory-default setting
Power Out (VA/Watts): 12000/12000
Input Plug: IEC309 560P9 5-pin, 60 Amp
UPS Input Rating: Dedicated 60 Amp (the UPS input requires a dedicated, unshared branch circuit that is suitably rated for your specific UPS)
For further information and specifications on the R12000/3 UPS (12 kVA model), refer to the HP 3 Phase UPS User Guide for the 12 kVA model. This guide is available at:
For information about the UPS management module supported for the R12000/3 UPS, refer to the HP UPS Management Module User Guide located at:

INTL: Modular PDU: Three-Phase Power Setup in a G2 Rack (With Rack-Mounted UPS)
To set up the power feed connections as shown in Figure 74:
1. Connect one 3-phase 32A power feed to the rack-mounted UPS IEC309 532P6 (32A, 5 wire/4 pole) input connector.
2. Connect one 3-phase 16A power feed to the AF53A PDU IEC309 516P6 (16A, 5 wire/4 pole) input connector.
3. Connect one 3-phase 16A power feed to the c7000 enclosure's IEC309 516P6 (16A, 5 wire/4 pole) input connector.

Figure 74 International Modular 3-Phase Power Setup in a G2 Rack (With Rack-Mounted UPS)

INTL: Modular PDU: Three-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS)
To set up the power feed connections as shown in Figure 75:
1. Connect two 3-phase 16A power feeds to the two AF53A PDU IEC309 516P6 (16A, 5 wire/4 pole) input connectors.
2. Connect two 3-phase 16A power feeds to the two IEC309 516P6 (16A, 5 wire/4 pole) input connectors within the c7000 enclosure.

Figure 75 International Modular 3-Phase Power Setup in a G2 Rack (Without Rack-Mounted UPS)

Description of INTL: Three-Phase Modular PDU in a G2 Rack
Two three-phase INTL modular power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the modular cabinet. Each 1U rack-mounted modular PDU comes with four modular PDU extension bars. The PDUs are oriented facing each other within the rack. Each PDU has 28 AC receptacles, six circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the modular cabinet at either the top or bottom rear corner of the cabinet, depending on the site's power feed needs. For information about specific PDU input and output characteristics for PDUs factory-installed in modular cabinets, refer to Enclosure AC Input G2 Rack (page 252).
Each three-phase modular PDU in a modular cabinet has:
28 AC receptacles per PDU (7 per extension bar) - IEC 320 C13 10A receptacle type
6 circuit breakers
These 380V to 415V AC, three-phase wye International (INTL) PDUs receive power from the site AC power source. Each PDU distributes site three-phase power to 34 single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the modular cabinet.

INTL: Modular PDU in a G2 Rack: AC Power Feeds, Three-Phase
The AC power feed cables for the PDUs are mounted to exit the modular cabinet at either the top or bottom rear corners of the cabinet, depending on what is ordered for the site's power feed. Bottom AC Power Feed in a G2 Rack, Three Phase (INTL Modular PDUs) (page 249) shows the power feed cables on modular three-phase PDUs with AC feed at the bottom of the cabinet and the output connections for the three-phase modular PDU.

Figure 76 Bottom AC Power Feed in a G2 Rack, Three Phase (INTL Modular PDUs)

Top AC Power Feed in a G2 Rack, Three-Phase (INTL Modular PDU) (page 250) shows the three-phase modular PDUs with AC feed at the top of the cabinet.

Figure 77 Top AC Power Feed in a G2 Rack, Three-Phase (INTL Modular PDU)

INTL: Input and Output Power Characteristics, Three-Phase Modular PDU, c7000, and Extension Bars in a G2 Rack
The cabinet with a G2 rack includes two power distribution units (PDUs) with these power characteristics:
PDU input characteristics
380 to 415 V AC, 3-phase Wye, 16A RMS, 5-wire
12 feet (3.6 m) attached power cord
50/60Hz
IEC309 5-pin, 4-pole, 16A input plug
PDU output characteristics
6 AC IEC 320 C19 receptacles per PDU with 20A circuit breakers (labeled L1, L2, L3, L4, L5, and L6)

Extension bar input characteristics
200V to 240V AC, 3-phase delta, 16A RMS, 4-wire
50/60Hz
IEC 320 C20 input plug
6.5 feet (2.0 m) attached power cord
Extension bar output characteristics
7 AC IEC 320 C13 receptacles per PDU with 10A maximum per outlet
The cabinet with a G2 rack includes a c7000 input with these power characteristics:
c7000 input characteristics
346 to 415 VAC line to line; 200 to 240 V AC line to neutral; 3-phase Wye, 16A RMS, 5-wire
200 to 240 VAC loads wired phase to neutral
10 feet (3.5 m) attached harmonized power cord
50/60Hz
IEC309 516P6 5-pin, 16A input plug

INTL: Modular PDU: Branch Circuits and Circuit Breakers, Three-Phase in a G2 Rack
Modular cabinets (with a G2 rack) that use a three-phase power configuration with modular three-phase PDUs contain two PDUs. Cabinets with a G2 rack and without the optional rack-mounted UPS require a separate branch circuit of these ratings for each of the two PDUs:
International PDU: Volts (Phase-to-Phase) 380 to 415; Amps: 16 (see following CAUTION; a Category D circuit breaker is required)
International c7000: Volts (Phase-to-Phase) 346 to 415; Amps: 16 (see following CAUTION; a Category D circuit breaker is required)
CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations. Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted HP Model R12000/3 Integrated UPS.
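The line-to-line and line-to-neutral figures above differ by a factor of the square root of three: in a wye (star) feed, each 200 to 240 V single-phase load is wired from one phase to neutral, so its voltage is the 346 to 415 V line-to-line value divided by sqrt(3). The short Python illustration below works through that arithmetic; the three sample voltages are simply representative points in the stated range.

import math

# Line-to-neutral voltage of a wye (star) feed is the line-to-line voltage / sqrt(3).
for v_line_to_line in (346, 400, 415):      # representative points in the stated range
    v_line_to_neutral = v_line_to_line / math.sqrt(3)
    print(f"{v_line_to_line} V line to line -> {v_line_to_neutral:.0f} V line to neutral")

# Prints approximately 200, 231, and 240 V, matching the 200 to 240 V
# phase-to-neutral loads described above.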

Circuit Breaker Ratings for Three-Phase UPS, INTL Modular PDUs in a G2 Rack
These ratings apply to systems with the optional rack-mounted HP Model R12000/3 Integrated UPS that is used for an INTL three-phase power configuration:
Version: International (380-415)
Operating Voltage Settings: factory-default setting
Power Out (VA/Watts): 12000/12000
Input Plug: IEC309 532P6 5-pin, 32 Amp
UPS Input Rating: Dedicated 32 Amp (the UPS input requires a dedicated, unshared branch circuit that is suitably rated for your specific UPS)
For further information and specifications on the R12000/3 UPS (12 kVA model), refer to the HP 3 Phase UPS User Guide for the 12 kVA model. This guide is available at:
For information about the UPS management module supported for the R12000/3 UPS, refer to the HP UPS Management Module User Guide located at:

Enclosure AC Input G2 Rack
NOTE: For instructions on grounding the modular cabinet's G2 rack by using the HP Rack Grounding Kit (AF074A), ask your service provider to refer to the instructions in the HP 10000 G2 Series Rack Options Installation Guide. This manual is available at:
Enclosures (IP CLIM, IOAM enclosure, and so forth) require:
Nominal input voltage: 200/208/220/230/240 V AC RMS
Voltage range: V AC
Nominal line frequency: 50 or 60 Hz
Frequency ranges: Hz or Hz
Number of phases: 1
c7000 enclosures require (values listed for the 3-phase NA/JPN, 3-phase INTL, and 1-phase configurations):
Voltage range: VAC line to line, 3-phase Delta; VAC line to line, VAC line to neutral; VAC
Nominal line frequency: 50 or 60 Hz; 50 or 60 Hz; 50 or 60 Hz

Frequency ranges: Hz or Hz; Hz or Hz; Hz or Hz
Number of phases: 3; 3; 1

Phase Load Balancing
For three-phase cabinets, each PDU is wired such that there are three load segments with groups of outlets alternating between load segments, going up and down the PDU. Factory-installed enclosures, other than the c7000, are connected to the PDUs on alternating load segments to facilitate phase load balancing.
The c7000 has its own three-phase input, with each phase (International) or pair of phases (North America/Japan) associated with one of the c7000 power supplies. When the c7000 is operating in Dynamic Power Saving Mode, the minimum number of power supplies is enabled to redundantly power the enclosure. This mode increases power supply efficiency, but leaves the phases or phase pairs associated with the disabled power supplies unloaded. For multiple-cabinet installations, in order to balance phase loads when Dynamic Power Saving Mode is enabled, HP recommends rotating the phases from one cabinet to the next. For example, if the first cabinet is wired A-B-C, the next cabinet should be wired B-C-A, and the next C-A-B, and so on.
For single-phase cabinets, follow the guidelines for phase load balancing by alternating load segment assignments of module power cords to prevent overloading the PDU load segments and circuit breakers.
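The phase-rotation recommendation above can be expressed mechanically: wire each successive cabinet's feed starting one phase later than the previous cabinet, so the phases left lightly loaded by Dynamic Power Saving Mode differ from cabinet to cabinet. The short Python sketch below generates that rotation for an arbitrary example of five cabinets; it is a planning aid only, and the actual phase assignments must match how the site feeds are terminated.

# Illustrative sketch of the A-B-C / B-C-A / C-A-B rotation described above.
PHASES = ["A", "B", "C"]

def rotation_for(cabinet_index):
    # Phase order for the Nth cabinet (0-based): advance one phase per cabinet.
    k = cabinet_index % len(PHASES)
    return PHASES[k:] + PHASES[:k]

for n in range(5):                          # example: a five-cabinet installation
    print(f"cabinet {n + 1}: wire phases {'-'.join(rotation_for(n))}")
# cabinet 1: A-B-C, cabinet 2: B-C-A, cabinet 3: C-A-B, cabinet 4: A-B-C, and so on.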

Safety and Compliance
This section contains three types of required safety and compliance statements:
Regulatory compliance
Waste Electrical and Electronic Equipment (WEEE)
Safety

Regulatory Compliance Statements
The following regulatory compliance statements apply to the products documented by this manual.

FCC Compliance
This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio-frequency energy and, if not installed and used in accordance with the instruction manual, may cause interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case the user will be required to correct the interference at his own expense. Any changes or modifications not expressly approved by Hewlett-Packard Computer Corporation could void the user's authority to operate this equipment.

Canadian Compliance
This class A digital apparatus meets all the requirements of the Canadian Interference-Causing Equipment Regulations.
Cet appareil numérique de la classe A respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.

Korea KCC Compliance

Taiwan (BSMI) Compliance

Japan (VCCI) Compliance
This is a Class A product. In a domestic environment this product may cause radio interference, in which case the user may be required to take corrective actions.

European Union Notice
Products with the CE Marking comply with both the EMC Directive (2004/108/EC) and the Low Voltage Directive (2006/95/EC) issued by the Commission of the European Community. Compliance with these directives implies conformity to the following European Norms (the equivalent international standards are in parentheses):
EN55022 (CISPR 22) Electromagnetic Interference
EN55024 (IEC 61000-4-2, 3, 4, 5, 6, 8, 11) Electromagnetic Immunity
EN61000-3-2 (IEC 61000-3-2) Power Line Harmonics
EN61000-3-3 (IEC 61000-3-3) Power Line Flicker
EN60950-1 (IEC 60950-1) Product Safety

Republic of Turkey Notice
Türkiye Cumhuriyeti: EEE Yönetmeliğine Uygundur.

Laser Compliance
This product may be provided with an optical storage device (that is, a CD or DVD drive) and/or fiber optic transceiver. Each of these devices contains a laser that is classified as a Class 1 or Class 1M Laser Product in accordance with US FDA regulations and the IEC 60825-1. The product does not emit hazardous laser radiation.
WARNING: Use of controls or adjustments or performance of procedures other than those specified herein or in the laser product's installation guide may result in hazardous radiation exposure. To reduce the risk of exposure to hazardous radiation:
LASER RADIATION - DO NOT VIEW DIRECTLY WITH OPTICAL INSTRUMENTS. CLASS 1M LASER PRODUCT.
Do not try to open the module enclosure. There are no user-serviceable components inside.
Do not operate controls, make adjustments, or perform procedures to the laser device other than those specified herein.
Allow only HP Authorized Service technicians to repair the module.
The Center for Devices and Radiological Health (CDRH) of the U.S. Food and Drug Administration implemented regulations for laser products on August 2, 1976. These regulations apply to laser products manufactured from August 1, 1976. Compliance is mandatory for products marketed in the United States.

SAFETY CAUTION
The following icon or caution statements may be placed on equipment to indicate the presence of potentially hazardous conditions:

DUAL POWER CORDS
CAUTION: THIS UNIT HAS MORE THAN ONE POWER SUPPLY CORD. DISCONNECT ALL POWER SUPPLY CORDS TO COMPLETELY REMOVE POWER FROM THIS UNIT.
ATTENTION: CET APPAREIL COMPORTE PLUS D'UN CORDON D'ALIMENTATION. DÉBRANCHER TOUS LES CORDONS D'ALIMENTATION AFIN DE COUPER COMPLÈTEMENT L'ALIMENTATION DE CET ÉQUIPEMENT.
DIESES GERÄT HAT MEHR ALS EIN NETZKABEL. VOR DER WARTUNG BITTE ALLE NETZKABEL AUS DER STECKDOSE ZIEHEN.
Any surface or area of the equipment marked with these symbols indicates the presence of electric shock hazards. The enclosed area contains no operator-serviceable parts.
WARNING: To reduce the risk of injury from electric shock hazards, do not open this enclosure.

NOT FOR EXTERNAL USE
CAUTION: NOT FOR EXTERNAL USE. ALL RECEPTACLES ARE FOR INTERNAL USE ONLY.
ATTENTION: NE PAS UTILISER A L'EXTERIEUR DE L'EQUIPEMENT. IMPORTANT: TOUS LES RECIPIENTS SONT DESTINES UNIQUEMENT A UN USAGE INTERNE.
VORSICHT: ALLE STECKDOSEN DIENEN NUR DEM INTERNEN GEBRAUCH.

HIGH LEAKAGE CURRENT
To reduce the risk of electric shock due to high leakage currents, a reliable grounded (earthed) connection should be checked before servicing the power distribution unit (PDU).

Observe the following limits when connecting the product to AC power distribution devices: For PDUs that have attached AC power cords or are directly wired to the building power, the total combined leakage current should not exceed 5 percent of the rated input current for the device.
HIGH LEAKAGE CURRENT, EARTH CONNECTION ESSENTIAL BEFORE CONNECTING SUPPLY
HOHER ABLEITSTROM. VOR INBETRIEBNAHME UNBEDINGT ERDUNGSVERBINDUNG HERSTELLEN
COURANT DE FUITE ELEVE. RACCORDEMENT A LA TERRE INDISPENSABLE AVANT LE RACCORDEMENT AU RESEAU

FUSE REPLACEMENT CAUTION
For continued protection against risk of fire, replace fuses only with fuses of the same type and the same rating. Disconnect power before changing fuses.

Waste Electrical and Electronic Equipment (WEEE)
Information about the Waste Electrical and Electronic Equipment (WEEE) directive is available from the HP NonStop Technical Library. Under HP NonStop Technical Library, select HP Integrity NonStop Server Safety and Compliance > Safety and Regulatory Information > Waste Electrical and Electronic Equipment (WEEE).

Important Safety Information
Safety information is available from the HP NonStop Technical Library at go/nonstop-docs. Under HP NonStop Technical Library, select HP Integrity NonStop Server Safety and Compliance > Safety and Regulatory Information > Safety and Compliance. To open the safety information in a language other than English, select the language. Local HP support can also help direct you to your safety information.

Table of Toxic and Hazardous Substances/Elements and Their Contents (as required by China's Management Methods for Controlling Pollution by Electronic Products)


HPE NonStop Remote Server Call (RSC/MP) Messages Manual HPE NonStop Remote Server Call (RSC/MP) Messages Manual Abstract This manual provides a listing and description of RSC/MP messages that are reported on the HPE NonStop host and on RSC/MP workstations.

More information

Installing the Cisco MDS 9020 Fabric Switch

Installing the Cisco MDS 9020 Fabric Switch CHAPTER 2 This chapter describes how to install the Cisco MDS 9020 Fabric Switch and its components, and it includes the following information: Pre-Installation, page 2-2 Installing the Switch in a Cabinet

More information

QuickSpecs. What's New. Models. HP ProLiant Essentials Performance Management Pack version 4.5. Overview. Retired

QuickSpecs. What's New. Models. HP ProLiant Essentials Performance Management Pack version 4.5. Overview. Retired Overview ProLiant Essentials Performance Management Pack (PMP) is a software solution that detects, analyzes, and explains hardware bottlenecks on HP ProLiant servers. HP Integrity servers and HP Storage

More information